Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, March 20, 2019

Israel Approves Compassionate Use of MDMA to Treat PTSD

Ido Efrati
www.haaretz.com
Originally posted February 10, 2019

MDMA, popularly known as ecstasy, is a drug more commonly associated with raves and nightclubs than a therapist’s office.

Emerging research has shown promising results in using this “party drug” to treat patients suffering from post-traumatic stress disorder, and Israel’s Health Ministry has just approved the use of MDMA to treat dozens of patients.

MDMA is classified in Israel as a "dangerous drug": recreational use is illegal, and therapeutic use of MDMA has yet to be formally approved and is still in clinical trials.

However, this treatment is deemed "compassionate use," which allows drugs that are still in development to be made available to patients outside of a clinical trial when no effective alternative exists.

The info is here.

Should This Exist? The Ethics Of New Technology

Lulu Garcia-Navarro
www.NPR.org
Originally posted March 3, 2019

Not every new technology product hits the shelves.

Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.

Or maybe it's too dangerous.

Recently, the research firm OpenAI announced that it would not be releasing a version of a text generator they developed, because of fears that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.

The organization's GPT-2 text generator can produce paragraphs of coherent, continuing text based on a prompt from a human. For example, when given the prompt, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out the transcript of "his acceptance speech," which read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
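
To make the prompt-and-continue idea concrete, here is a minimal sketch of driving the small GPT-2 model that OpenAI did release publicly, using the Hugging Face transformers library; the library call, model name, and sampling settings are our illustration, not part of the article:

# A minimal sketch: prompting the small, publicly released GPT-2 model.
# Assumes the Hugging Face "transformers" library and PyTorch are installed
# (pip install transformers torch); all settings here are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("John F. Kennedy was just elected President of the United States "
          "after rising from the grave decades after his assassination.")

# Sample one continuation; top_k and temperature control how adventurous it is.
result = generator(prompt, max_length=120, do_sample=True, top_k=50, temperature=0.9)
print(result[0]["generated_text"])

The output differs on every run; the point is only that a single sentence of context is enough to steer the model toward a plausible-sounding continuation.
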
Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.

The info is here.

Tuesday, March 19, 2019

Treasury Secretary Steven Mnuchin's Hollywood ties spark ethics questions in China trade talks

Emma Newburger
CNBC.com
Originally posted March 15, 2019

Treasury Secretary Steven Mnuchin, one of President Donald Trump's key negotiators in the U.S.-China trade talks, has pushed Beijing to grant the American film industry greater access to its markets.

But now, Mnuchin’s ties to Hollywood are raising ethical questions about his role in those negotiations. Mnuchin was a producer of a raft of successful films before joining the Trump administration.

In 2017, he divested his stake in a film production company after joining the White House. But he sold that position to his wife, filmmaker and actress Louise Linton, for between $1 million and $2 million, The New York Times reported on Thursday. At the time, she was his fiancée.

That company, StormChaser Partners, helped produce the mega-hit movie “Wonder Woman,” which grossed $90 million in China, according to the Times. Yet, because of China’s restrictions on foreign films, the producers received a small portion of that money. Mnuchin has been personally engaged in trying to ease those rules, which could be a boon to the industry, according to the Times.

The info is here.

We're Teaching Consent All Wrong

Sarah Sparks
www.edweek.org
Originally published January 8, 2019

Here is an excerpt:

Instead, researchers and educators offer an alternative: Teach consent as a life skill—not just a sex skill—beginning in early childhood, and begin discussing consent and communication in the context of relationships by 5th or 6th grades, before kids start seriously thinking about sex. (Think that's too young? In yet another study, the CDC found 8 in 10 teenagers didn't get sex education until after they'd already had sex.)

Educators and parents often balk at discussing strategies for and examples of consent because "they incorrectly believe that if you teach consent, students will become more sexually active," said Mike Domitrz, founder of the Date Safe Project, a Milwaukee-based sexual-assault prevention program that focuses on consent education and bystander interventions. "It's a myth. Students of both genders are pretty consistent that a lot of the sexual activity that is going on is occurring under pressure."

Studies suggest young women are more likely to judge consent by verbal communication, while young men rely more on nonverbal cues, though both groups said nonverbal signals are often misinterpreted. And teenagers can be particularly bad at making decisions about risky behavior, including sexual situations, while under social pressure. Brain studies have found adolescents are more likely to take risks and less likely to think about negative consequences when they are in emotionally arousing, or "hot," situations, and that bad decision-making tends to get even worse when they feel they are being judged by their friends.

Making understanding and negotiating consent a life skill gives children and adolescents ways to understand and respect both their own desires and those of other people. And it can help educators frame instruction about consent without sinking into the morass of long-running arguments and anxiety over gender roles, cultural values, and teen sexuality.

The info is here.

Monday, March 18, 2019

The college admissions scandal is a morality play

Elaine Ayala
San Antonio Express-News
Originally posted March 16, 2019

The college admission cheating scandal that raced through social media and dominated news cycles this week wasn’t exactly shocking: Wealthy parents rigged the system for their underachieving children.

It’s an ancient morality play set at elite universities with an unseemly cast of characters: spoiled teens and shameless parents; corrupt test proctors and paid test takers; as well as college sports officials willing to be bribed and a ringleader who ultimately turned on all of them.

William “Rick” Singer, who went to college in San Antonio, wore a wire to cooperate with FBI investigators.

(cut)

Yet even though they were arrested, the 50 people involved managed to secure the best possible outcome under the circumstances. Unlike many caught shoplifting or possessing small amounts of marijuana and who lack the lawyers and resources to help them navigate the legal system, the accused parents and coaches quickly posted bond and were promptly released without spending much time in custody.

The info is here.

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest we have come to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chat, offer support to people at risk of suicide, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts on social media.

But I argue that these malicious applications are already possible without this AI: other public models can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because (A) it sets a bad precedent for open research, (B) it keeps companies from improving their services, (C) it unnecessarily hypes these results, and (D) it may trigger unnecessary fears about AI among the general public.

The info is here.

Sunday, March 17, 2019

Actions Speak Louder Than Outcomes in Judgments of Prosocial Behavior

Daniel A. Yudkin, Annayah M. B. Prosser, and Molly J. Crockett
Emotion (2018).

Recently proposed models of moral cognition suggest that people's judgments of harmful acts are influenced by their consideration both of those acts' consequences ("outcome value"), and of the feeling associated with their enactment ("action value"). Here we apply this framework to judgments of prosocial behavior, suggesting that people's judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.
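
One way to picture the framework being tested (our schematic, not the authors' exact specification): a deed's judged praiseworthiness J is modeled as a weighted combination of the two values, roughly J = w_action · A + w_outcome · O, where A is the action value (how good the deed feels to perform) and O is the outcome value (the benefit it confers on others). The finding that people are more sensitive to action than to outcome value for good deeds then corresponds to w_action > w_outcome.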

Here is an excerpt:

On a broader level, past work has suggested that judging the wrongness of harmful actions involves a process of “evaluative simulation,” whereby we evaluate the moral status of another’s action by simulating the affective response that we would experience performing the action ourselves (Miller et al., 2014). Our results are consistent with the possibility that evaluative simulation also plays a role in judging the praiseworthiness of helpful actions. If people evaluate helpful actions by simulating what it feels like to perform the action, then we would expect to see biases in moral evaluation similar to those that exist in moral action. Previous work has shown that individuals often do not act to maximize the benefits that others receive, but instead to maximize the good feelings associated with performing good deeds (Berman et al., 2018; Gesiarz & Crockett, 2015; Ribar & Wilhelm, 2002). Thus, the asymmetry in moral evaluation seen in the present studies may reflect a correspondence between first-person moral decision-making and third-person moral evaluation.

Download the pdf here.

Saturday, March 16, 2019

How Should AI Be Developed, Validated, and Implemented in Patient Care?

Michael Anderson and Susan Leigh Anderson
AMA J Ethics. 2019;21(2):E125-130.
doi: 10.1001/amajethics.2019.125.

Abstract

Should an artificial intelligence (AI) program that appears to have a better success rate than human pathologists be used to replace or augment humans in detecting cancer cells? We argue that some concerns—the “black-box” problem (ie, the unknowability of how output is derived from input) and automation bias (overreliance on clinical decision support systems)—are not significant from a patient’s perspective but that expertise in AI is required to properly evaluate test results.

Here is an excerpt:

Automation bias. Automation bias refers generally to a kind of complacency that sets in when a job once done by a health care professional is transferred to an AI program. We see nothing ethically or clinically wrong with automation, if the program achieves a virtually 100% success rate. If, however, the success rate is lower than that—92%, as in the case presented—it’s important that we have assurances that the program has quality input; in this case, that probably means that the AI program “learned” from a cross section of female patients of diverse ages and races. With diversity of input secured, what matters most, ethically and clinically, is that the AI program has a higher cancer cell-detection success rate than human pathologists.

Friday, March 15, 2019

Ethical considerations on the complicity of psychologists and scientists in torture

Evans NG, Sisti DA, Moreno JD
Journal of the Royal Army Medical Corps 
Published Online First: 20 February 2019.
doi: 10.1136/jramc-2018-001008

Abstract

Introduction 
The long-standing debate on medical complicity in torture has overlooked the complicity of cognitive scientists—psychologists, psychiatrists and neuroscientists—in the practice of torture as a distinct phenomenon. In this paper, we identify the risk of the re-emergence of torture as a practice in the USA, and the complicity of cognitive scientists in these practices.

Methods 
We review arguments for physician complicity in torture. We argue that these defences fail to defend the complicity of cognitive scientists. We address objections to our account, and then provide recommendations for professional associations in resisting complicity in torture.

Results 
Arguments for cognitive scientist complicity in torture fail when those actions stem from the same reasons as physician complicity. Cognitive scientist involvement in the torture programme has, from the outset, been focused on the outcomes of interrogation rather than supportive care. Any possibility of a therapeutic relationship between cognitive therapists and detainees is fatally undermined by therapists’ complicity with torture.

Conclusion 
Professional associations ought to strengthen their commitment to refraining from engaging in any aspect of torture. They should also move to protect whistle-blowers against torture programmes who are members of their association. If the political institutions that are supposed to prevent the practice of torture are not strengthened, cognitive scientists should take collective action to compel intelligence agencies to refrain from torture.