Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, September 7, 2017

Are morally good actions ever free?

Cory J. Clark, Adam Shniderman, Jamie Luguri, Roy Baumeister, and Peter Ditto
SSRN Electronic Journal, August 2017

Abstract

A large body of work has demonstrated that people ascribe more responsibility to morally bad actions than both morally good and morally neutral ones, creating the impression that people do not attribute responsibility to morally good actions. The present work demonstrates that this is not so: People attributed more free will to morally good actions than morally neutral ones (Studies 1a-1b). Studies 2a-2b distinguished the underlying motives for ascribing responsibility to morally good and bad actions. Free will ascriptions for morally bad actions were driven predominantly by affective punitive responses. Free will judgments for morally good actions were similarly driven by affective reward responses, but also less affectively-charged and more pragmatic considerations (the perceived utility of reward, normativity of the action, and willpower required to perform the action). Responsibility ascriptions to morally good actions may be more carefully considered, leading to generally weaker, but more contextually-sensitive free will judgments.

The research is here.

Wednesday, September 6, 2017

The importance of building ethics into artificial intelligence

Kriti Sharma
Mashable
Originally published August 18, 2017

Here is an excerpt:

Humans possess inherent social, economic and cultural biases. It’s unfortunately core to social fabrics around the world. Therefore, AI offers a chance for the business community to eliminate such biases from their global operations.

The onus is on the tech community to build technology that utilizes data from relevant, trusted sources to embrace a diversity of culture, knowledge, opinions, skills and interactions.

Indeed, AI operating in the business world today performs repetitive tasks well, learns on the job and even incorporates human social norms into its work. However, AI also spends a significant amount of time scouring the web and its own conversational history for additional context that will inform future interactions with human counterparts.

This prevalence of well-trodden data sets and partial information on the internet presents a challenge and an opportunity for AI developers. When built with responsible business and social practices in mind, AI technology has the potential to consistently – and ethically – deliver products and services to people who need them. And do so without the omnipresent human threat of bias.

Ultimately, we need to create innately diverse AI. As an industry-focused tech community, we must develop effective mechanisms to filter out biases, as well as any negative sentiment in the data that AI learns from to ensure the technology does not perpetuate stereotypes. Unless we build AI using diverse teams, datasets and design, we risk repeating the fundamental inequality of previous industrial revolutions.

The article is here.

The Nuremberg Code 70 Years Later

Jonathan D. Moreno, Ulf Schmidt, and Steve Joffe
JAMA. Published online August 17, 2017.

Seventy years ago, on August 20, 1947, the International Medical Tribunal in Nuremberg, Germany, delivered its verdict in the trial of 23 doctors and bureaucrats accused of war crimes and crimes against humanity for their roles in cruel and often lethal concentration camp medical experiments. As part of its judgment, the court articulated a 10-point set of rules for the conduct of human experiments that has come to be known as the Nuremberg Code. Among other requirements, the code called for the “voluntary consent” of the human research subject, an assessment of risks and benefits, and assurances of competent investigators. These concepts have become an important reference point for the ethical conduct of medical research. Yet, there has in the past been considerable debate among scholars about the code’s authorship, scope, and legal standing in both civilian and military science. Nonetheless, the Nuremberg Code has undoubtedly been a milestone in the history of biomedical research ethics.1-3

Writings on medical ethics, laws, and regulations in a number of jurisdictions and countries, including a detailed and sophisticated set of guidelines from the Reich Ministry of the Interior in 1931, set the stage for the code. The same focus on voluntariness and risk that characterizes the code also suffuses these guidelines. What distinguishes the code is its context. As lead prosecutor Telford Taylor emphasized, although the Doctors’ Trial was at its heart a murder trial, it clearly implicated the ethical practices of medical experimenters and, by extension, the medical profession’s relationship to the state understood as an organized community living under a particular political structure. The embrace of Nazi ideology by German physicians, and the subsequent participation of some of their most distinguished leaders in the camp experiments, demonstrates the importance of professional independence from and resistance to the ideological and geopolitical ambitions of the authoritarian state.

The article is here.

Tuesday, September 5, 2017

Ethical behaviour of physicians and psychologists: similarities and differences

Ferencz Kaddari M, Koslowsky M, Weingarten MA
Journal of Medical Ethics. Published Online First: 18 August 2017.

Abstract

Objective 

To compare the coping patterns of physicians and clinical psychologists when confronted with clinical ethical dilemmas and to explore consistency across different dilemmas.

Population

88 clinical psychologists and 149 family physicians in Israel.

Method 

Six dilemmas representing different ethical domains were selected from the literature. Vignettes were composed for each dilemma, and seven possible behavioural responses for each were proposed, scaled from most to least ethical. The vignettes were presented to both family physicians and clinical psychologists.

Results 

Psychologists’ aggregated mean ethical intention score, as compared with the physicians, was found to be significantly higher (F(6, 232)=22.44, p<0.001, η2=0.37). Psychologists showed higher ethical intent for two dilemmas: issues of payment (they would continue treating a non-paying patient while physicians would not) and dual relationships (they would avoid treating the son of a colleague). In the other four vignettes, psychologists and physicians responded in much the same way. The highest ethical intent scores for both psychologists and physicians were for confidentiality and a colleague's inappropriate practice due to personal problems.

Conclusions 

Responses to the dilemmas by physicians and psychologists can be categorised into two groups: (1) similar behaviours on the part of both professions when confronting dilemmas concerning confidentiality, inappropriate practice due to personal problems, improper professional conduct and academic issues and (2) different behaviours when confronting either payment issues or dual relationships.

The research is here.

Monday, September 4, 2017

Teaching A.I. Systems to Behave Themselves

Cade Metz
The New York Times
Originally published August 13, 2017

Here is an excerpt:

Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.
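The excerpt's description of reinforcement learning maps onto a very simple loop. Below is a minimal, hypothetical sketch of tabular Q-learning on a made-up five-cell corridor task; the environment, reward of "more points" for reaching the goal, and all parameter values are invented for illustration and are not the OpenAI setup described in the article.

```python
import random

# Toy version of the trial-and-error loop described above: a 5-cell corridor
# where the agent is rewarded only for reaching the last cell.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
GOAL = N_STATES - 1

# Q-table: estimated long-run value of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Environment: move, stay inside the corridor, reward 1 only at the goal."""
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Mostly exploit what has brought reward so far, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Keep track of what brings the reward: nudge the estimate toward the
        # observed reward plus the value of the best follow-up action.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should walk straight toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```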

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.
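One way to picture "human guidance along the way" is a human signal that shapes or overrides the game's own score before the learning update runs. The sketch below is a deliberate oversimplification: in the published preference-learning work a human compares short clips of behaviour and a separate reward model is trained from those comparisons, whereas here a hand-written stand-in rule adjusts the reward directly. The function name and rule are hypothetical.

```python
def human_feedback(state, action, env_reward):
    """Stand-in for a human overseer that approves or penalizes behaviour.

    Purely illustrative: a fixed rule replaces the human judgments and the
    learned reward model used in the actual research.
    """
    if state == 0 and action == -1:      # discourage dithering at the start
        return -0.1
    return env_reward

# Inside the training loop sketched earlier, the update would then use the
# shaped signal instead of the raw game score:
#   reward = human_feedback(state, action, reward)
```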

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Spanning two of the world’s top A.I. labs — and two that hadn’t really worked together in the past — these algorithms are considered a notable step forward in A.I. safety research.

The article is here.

Sunday, September 3, 2017

The bold new fight to eradicate suicide

Simon Usborne
The Guardian
Originally published August 1, 2017

Here is an excerpt:

They call it “Zero Suicide”, a bold ambition and slogan that emerged from a Detroit hospital more than a decade ago, and which is now being incorporated into several NHS trusts. Since our first meeting, Steve has himself embraced the idea, and in May of this year held talks with Mersey Care, one of the specialist mental health trusts already applying a zero strategy. His plans are at an early stage, but he is setting out to create a Zero Suicide foundation. He wants it to identify good practices across the 55 mental health trusts in England and create a new strategy to be applied everywhere.

The zero approach is a proactive strategy that aims to identify and care for all those who may be at risk of suicide, rather than reacting once patients have reached crisis point. It emphasises strong leadership, improved training, better patient-screening and the use of the latest data and research to make changes without fear or delay. It is a joined-up strategy that challenges old ideas about the inevitability of suicide, the stigma that surrounds it, and the idea that if a reduction target is achieved, the deaths on the way to it are somehow acceptable. “Even if you believe we are never going to eradicate suicide, we must strive towards that,” Steve said to me. “If zero isn’t the right target, then what is?”

Zero Suicide is not radical, incorporating as it does several existing prevention strategies. But that it should be seen as new and daringly ambitious reveals much about how slowly attitudes have changed. In the 1957 book The Uses of Literacy: Aspects of Working-Class Life, a semi-autobiographical examination of the cultural upheavals of the 1950s, Richard Hoggart recalled his upbringing in Leeds. “Every so often one heard that so-and-so had ‘done ’erself in’ … or ‘put ’er ’ead in the gas-oven’,” he wrote. “It did not happen monthly or even every season, and not all attempts succeeded; but it happened sufficiently often to be part of the pattern of life.” He wondered how “suicide could be accepted – pitifully but with little suggestion of blame – as part of the order of existence”.

The article is here.

Friday, September 1, 2017

A Plutocratic Proposal: an ethical way for rich patients to pay for a place on a clinical trial

Alexander Masters and Dominic Nutt
Journal of Medical Ethics 
Published Online First: 06 June 2017.

Abstract

Many potential therapeutic agents are discarded before they are tested in humans. These are not quack medications. They are drugs and other interventions that have been developed by responsible scientists in respectable companies or universities and are often backed up by publications in peer-reviewed journals. These possible treatments might ease suffering and prolong the lives of innumerable patients, yet they have been put aside. In this paper, we outline a novel mechanism—the Plutocratic Proposal—to revive such neglected research and fund early phase clinical trials. The central idea of the Proposal is that any patient who rescues a potential therapeutic agent from neglect by funding early phase clinical trials (either entirely or in large part) should be offered a place on the trial.

The article is here.

Political differences in free will belief are driven by differences in moralization

Clark, C. J., Everett, J. A. C., Luguri, J. B., Earp, B. D., Ditto, P., & Shariff, A.
PsyArXiv. (2017, August 1).

Abstract

Five studies tested whether political conservatives’ stronger free will beliefs are driven by their broader view of morality, and thus a broader motivation to assign responsibility. On an individual difference level, Study 1 found that political conservatives’ higher moral wrongness judgments accounted for their higher belief in free will. In Study 2, conservatives ascribed more free will for negative events than liberals, while no differences emerged for positive events. For actions ideologically equivalent in perceived moral wrongness, free will judgments also did not differ (Study 3), and actions that liberals perceived as more wrong, liberals judged as more free (Study 4). Finally, higher wrongness judgments mediated the effect of conservatism on free will beliefs (Study 5). Higher free will beliefs among conservatives may be explained by conservatives’ tendency to moralize, which strengthens motivation to justify blame with stronger belief in free will and personal accountability.

The preprint research article is here.