Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Forbes.com
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission.  It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire.  Like nuclear fission, electricity and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.

Should doctors cry at work?

Fran Robinson
BMJ 2019;364:l690

Many doctors admit to crying at work, whether openly empathising with a patient or on their own behind closed doors. Common reasons for crying are compassion for a dying patient, identifying with a patient’s situation, or feeling overwhelmed by stress and emotion.

Probably still more doctors have done so but been unwilling to admit it for fear that it could be considered unprofessional—a sign of weakness, lack of control, or incompetence. However, it’s increasingly recognised as unhealthy for doctors to bottle up their emotions.

Unexpected tragic events
Psychiatry is a specialty in which doctors might view crying as acceptable, says Annabel Price, visiting researcher at the Department of Psychiatry, University of Cambridge, and a consultant in liaison psychiatry for older adults.

Having discussed the issue with colleagues before being interviewed for this article, she says that none of them would think less of a colleague for crying at work: “There are very few doctors who haven’t felt like crying at work now and again.”

A situation that may move psychiatrists to tears is finding that a patient they’ve been closely involved with has died by suicide. “This is often an unexpected tragic event: it’s very human to become upset, and sometimes it’s hard not to cry when you hear difficult news,” says Price.

The info is here.

Monday, March 25, 2019

U.S. companies put record number of robots to work in 2018

Reuters
Originally published February 28, 2019


U.S. companies installed more robots last year than ever before, as cheaper and more flexible machines put them within reach of businesses of all sizes and in more corners of the economy beyond their traditional foothold in car plants.

Shipments hit 28,478, nearly 16 percent more than in 2017, according to data seen by Reuters that was set for release on Thursday by the Association for Advancing Automation, an industry group based in Ann Arbor, Michigan.

Shipments increased in every sector the group tracks, except automotive, where carmakers cut back after finishing a major round of tooling up for new truck models.

The info is here.

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability

Alex John London
The Hastings Center Report
Volume 49, Issue 1, January/February 2019, Pages 15-21

Abstract

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.

The info is here.

Sunday, March 24, 2019

An Ethical Obligation for Bioethicists to Utilize Social Media

Herron, PD
Hastings Cent Rep. 2019 Jan;49(1):39-40.
doi: 10.1002/hast.978.

Here is an excerpt:

Unfortunately, it appears that bioethicists are no better informed than other health professionals, policy experts, or (even) elected officials, and they are sometimes resistant to becoming informed. But bioethicists have a duty to develop our knowledge and usefulness with respect to social media; many of our skills can and should be adapted to this area. There is growing evidence of the power of social media to foster dissemination of misinformation. The harms associated with misinformation or “fake news” are not new threats. Historically, there have always been individuals or organized efforts to propagate false information or to deceive others. Social media and other technologies have provided the ability to rapidly and expansively share both information and misinformation. Bioethics serves society by offering guidance about ethical issues associated with advances in medicine, science, and technology. Much of the public’s conversation about and exposure to these emerging issues occurs online. If we bioethicists are not part of the mix, we risk yielding to alternative and less authoritative sources of information. Social media’s transformative impact has led some to view it as not just a personal tool but the equivalent to a public utility, which, as such, should be publicly regulated. Bioethicists can also play a significant part in this dialogue. But to do so, we need to engage with social media. We need to ensure that our understanding of social media is based on experiential use, not just abstract theory.

Bioethics has expanded over the past few decades, extending beyond the academy to include, for example, clinical ethics consultants and leadership positions in public affairs and public health policy. These varied roles bring weighty responsibilities and impose a need for critical reflection on how bioethicists can best serve the public interest in a way that reflects and is accountable to the public’s needs.

Saturday, March 23, 2019

The Fake Sex Doctor Who Conned the Media Into Publicizing His Bizarre Research on Suicide, Butt-Fisting, and Bestiality

Jennings Brown
www.gizmodo.com
Originally published March 1, 2019

Here is an excerpt:

Despite Sendler’s claims that he is a doctor, and despite the stethoscope in his headshot, he is not a licensed doctor of medicine in the U.S. Two employees of the Harvard Medical School registrar confirmed to me that Sendler was never enrolled and never received an MD from the medical school. A Harvard spokesperson told me Sendler never received a PhD or any degree from Harvard University.

“I got into Harvard Medical School for MD, PhD, and Masters degree combined,” Sendler told me. I asked if he was able to get a PhD in sexual behavior from Harvard Medical School (Harvard Medical School does not provide any sexual health focuses) and he said “Yes. Yes,” without hesitation, then doubled down: “I assume that there’s still some kind of sense of wonder on campus [about me]. Because I can see it when I go and visit [Harvard], that people are like, ‘Wow you had the balls, because no one else did that,’” presumably referring to his academic path.

Sendler told me one of his mentors when he was at Harvard Medical School was Yi Zhang, a professor of genetics at the school. Sendler said Zhang didn’t believe in him when he was studying at Harvard. But, Sendler said, he met with Zhang in Boston just a month prior to our interview. And Zhang was now impressed by Sendler’s accomplishments.

Sendler said Zhang told him in January, “Congrats. You did what you felt was right... Turns out, wow, you have way more power in research now than I do. And I’m just very proud of you, because I have people that I really put a lot of effort, after you left, into making them the best and they didn’t turn out that well.”

The info is here.

This is a fairly bizarre story and worth the long read.

Friday, March 22, 2019

We need to talk about systematic fraud

Jennifer Byrne
Nature 566, 9 (2019)
doi: 10.1038/d41586-019-00439-9

Here is an excerpt:

Some might argue that my efforts are inconsequential, and that the publication of potentially fraudulent papers in low-impact journals doesn’t matter. In my view, we can’t afford to accept this argument. Such papers claim to uncover mechanisms behind a swathe of cancers and rare diseases. They could derail efforts to identify easily measurable biomarkers for use in predicting disease outcomes or whether a drug will work. Anyone trying to build on any aspect of this sort of work would be wasting time, specimens and grant money. Yet, when I have raised the issue, I have had comments such as “ah yes, you’re working on that fraud business”, almost as a way of closing down discussion. Occasionally, people’s reactions suggest that ferreting out problems in the literature is a frivolous activity, done for personal amusement, or that it is vindictive, pursued to bring down papers and their authors.

Why is there such enthusiasm for talking about faulty research practices, yet such reluctance to discuss deliberate deception? An analysis of the Diederik Stapel fraud case that rocked the psychology community in 2011 has given me some ideas (W. Stroebe et al. Perspect. Psychol. Sci. 7, 670–688; 2012). Fraud departs from community norms, so scientists do not want to think about it, let alone talk about it. It is even more uncomfortable to think about organized fraud that is so frequently associated with one country. This becomes a vicious cycle: because fraud is not discussed, people don’t learn about it, so they don’t consider it, or they think it’s so rare that it’s unlikely to affect them, and so papers are less likely to come under scrutiny. Thinking and talking about systematic fraud is essential to solving this problem. Raising awareness and the risk of detection may well prompt new ways to identify papers produced by systematic fraud.

Last year, China announced sweeping plans to curb research misconduct. That’s a great first step. Next should be a review of publication quotas and cash rewards, and the closure of ‘paper factories’.

The info is here.

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – There is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative, this group implements a feedback loop to better understand preferences. They ensure that, at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A guest must explicitly opt in to use the assistant and be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have it purged from the system at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience with the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto-deleted. Ensure that the AI can speak in the guest’s respective language.

The info is here.

Thursday, March 21, 2019

Anger as a moral emotion: A 'bird's eye view' systematic review

Tim Lomas
Counseling Psychology Quarterly


Anger is a common problem for which counseling/psychotherapy clients seek help, and is typically regarded as an invidious negative emotion to be ameliorated. However, it may be possible to reframe anger as a moral emotion, arising in response to perceived transgressions, thereby endowing it with meaning. In that respect, the current paper offers a ‘bird’s eye’ systematic review of empirical research on anger as a moral emotion (i.e., one focusing broadly on the terrain as a whole, rather than on specific areas). Three databases were reviewed from the start of their records to January 2019. Eligibility criteria included empirical research, published in English in peer-reviewed journals, on anger specifically as a moral emotion. 175 papers met the criteria, and fell into four broad classes of study: survey-based; experimental; physiological; and qualitative. In reviewing the articles, this paper pays particular attention to: how/whether anger can be differentiated from other moral emotions; antecedent causes and triggers; contextual factors that influence or mitigate anger; and outcomes arising from moral anger. Together, the paper offers a comprehensive overview of current knowledge of this prominent and problematic emotion. The results may be of use to counsellors and psychotherapists helping to address anger issues in their clients.

Download the paper here.

Note: Other "symptoms" in mental health can also be reframed as moral issues. PTSD is similar to moral injury. OCD is highly correlated with scrupulosity, an excessive concern about moral purity. Unhealthy guilt is found in many depressed individuals. And psychologists use forgiveness of self and others as a goal in treatment.