Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, May 18, 2017

The secret to honesty revealed: it feels better

Henry Bodkin
The Telegraph
Originally published May 1, 2017

It is a mystery that has perplexed psychologists and philosophers since the dawn of humanity: why are most people honest?

Now, using a combination of MRI scanners and electric-shock devices, scientists claim to have found the answer.

(cut)

“Our findings suggest the brain internalizes the moral judgments of others, simulating how much others might blame us for potential wrongdoing, even when we know our actions are anonymous,” said Dr Crockett.

The scans also revealed that an area of the brain involved in making moral judgments, the lateral prefrontal cortex, was most active in trials where inflicting pain yielded minimal profit.

The article is here.

Morality constrains the default representation of what is possible

Jonathan Phillips and Fiery Cushman
Proceedings of the National Academy of Sciences, 2017

The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

The paper is here.

Wednesday, May 17, 2017

Moral conformity in online interactions

Meagan Kelly, Lawrence Ngo, Vladimir Chituc, Scott Huettel, and Walter Sinnott-Armstrong
Social Influence 

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defend the responses in either a rational or emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

The article is here.

Where did Nazi doctors learn their ethics? From a textbook

Michael Cook
BioEdge.org
Originally posted April 29, 2017

German medicine under Hitler resulted in so many horrors – eugenics, human experimentation, forced sterilization, involuntary euthanasia, mass murder – that there is a temptation to say that “Nazi doctors had no ethics”.

However, according to an article in the Annals of Internal Medicine by Florian Bruns and Tessa Chelouche (from Germany and Israel respectively), this was not the case at all. In fact, medical ethics was an important part of the medical curriculum between 1939 and 1945. Nazi officials established lectureships in every medical school in Germany for a subject called “Medical Law and Professional Studies” (MLPS).

There was no lack of ethics. It was just the wrong kind of ethics.

(cut)

It is important to realize that ethical reasoning can be corrupted and that teaching ethics is, in itself, no guarantee of the moral integrity of physicians.

The article is here.

Tuesday, May 16, 2017

Talking in Euphemisms Can Chip Away at Your Sense of Morality

Laura Niemi, Alek Chakroff, and Liane Young
The Science of Us
Originally published April 7, 2017

Here is an excerpt:

Taken together, the results suggest that unethical behavior becomes easier when we perceive our own actions in indirect terms, which makes things that we would otherwise balk at seem a bit more palatable. In other words, deploying indirect speech doesn’t just help us evade blame from others — it also helps us to convince ourselves that unethical acts aren’t so bad after all.

That’s not to say that this is a conscious process. A speaker who shrouds his harmful intentions in indirect speech may understand that this will help him hold on to his standing in the public eye, or maintain his reputation among those closest to him — a useful tactic when those intentions are likely to be condemned or fall outside the bounds of socially acceptable behavior. But that same speaker may be unaware of just how much his indirect speech is easing his own psyche, too.

The article is here.

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Monday, May 15, 2017

Overcoming patient reluctance to be involved in medical decision making

J.S. Blumenthal-Barby
Patient Education and Counseling
January 2017, Volume 100, Issue 1, Pages 14–17

Abstract

Objective

To review the barriers to patient engagement and techniques to increase patients’ engagement in their medical decision-making and care.

Discussion

Barriers exist to patient involvement in their decision-making and care. Individual barriers include education, language, and culture/attitudes (e.g., deference to physicians). Contextual barriers include lack of time and poor timing (e.g., lag between test results being available and the patient encounter). Clinicians should gauge patients’ interest in being involved and their level of current knowledge about their condition and options. Framing information in multiple ways and modalities can enhance understanding, which can empower patients to become more engaged. Tools such as decision aids or audio recording of conversations can help patients remember important information, a requirement for meaningful engagement. Clinicians and researchers should work to create social norms and prompts around patients asking questions and expressing their values. Telehealth and electronic platforms are promising modalities for allowing patients to ask questions in a non-intimidating atmosphere.

Conclusion

Researchers and clinicians should be motivated to find ways to engage patients for three reasons: there is an ethical imperative, in that many patients prefer to be more engaged in some way, shape, or form; patients have better experiences when they are engaged; and engagement improves health outcomes.

The article is here.

Cassandra’s Regret: The Psychology of Not Wanting to Know

Gerd Gigerenzer and Rocio Garcia-Retamero
Psychological Review, Vol 124(2), Mar 2017, 179-196

Abstract

Ignorance is generally pictured as an unwanted state of mind, and the act of willful ignorance may raise eyebrows. Yet people do not always want to know, demonstrating a lack of curiosity at odds with theories postulating a general need for certainty, ambiguity aversion, or the Bayesian principle of total evidence. We propose a regret theory of deliberate ignorance that covers both negative feelings that may arise from foreknowledge of negative events, such as death and divorce, and positive feelings of surprise and suspense that may arise from foreknowledge of positive events, such as knowing the sex of an unborn child. We conduct the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events. Its prevalence is high: Between 85% and 90% of people would not want to know about upcoming negative events, and 40% to 70% prefer to remain ignorant of positive events. Only 1% of participants consistently wanted to know. We also deduce and test several predictions from the regret theory: Individuals who prefer to remain ignorant are more risk averse and more frequently buy life and legal insurance. The theory also implies the time-to-event hypothesis, which states that for the regret-prone, deliberate ignorance is more likely the nearer the event approaches. We cross-validate these findings using 2 representative national quota samples in 2 European countries. In sum, we show that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret.

The article is here.

Sunday, May 14, 2017

The power thinker

Colin Koopman
Originally posted March 15, 2017

Here is an excerpt:

Foucault’s work shows that disciplinary power was just one of many forms that power has come to take over the past few hundred years. Disciplinary anatomo-politics persists alongside sovereign power as well as the power of bio-politics. In his next book, The History of Sexuality, Foucault argued that bio-politics helps us to understand how garish sexual exuberance persists in a culture that regularly tells itself that its true sexuality is being repressed. Bio-power does not forbid sexuality, but rather regulates it in the maximal interests of very particular conceptions of reproduction, family and health. It was a bio-power wielded by psychiatrists and doctors that, in the 19th century, turned homosexuality into a ‘perversion’ because of its failure to focus sexual activity around the healthy reproductive family. It would have been unlikely, if not impossible, to achieve this by sovereign acts of direct physical coercion. Much more effective were the armies of medical men who helped to straighten out their patients for their own supposed self-interest.

Other forms of power also persist in our midst. Some regard the power of data – that is the info-power of social media, data analytics and ceaseless algorithmic assessment – as the most significant kind of power that has emerged since Foucault’s death in 1984.

The article is here.