Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, May 20, 2017

Conflict of Interest and the Integrity of the Medical Profession

Allen S. Lichter
JAMA. 2017;317(17):1725-1726.

Physicians have a moral responsibility to patients; they are trusted to place the needs and interests of patients ahead of their own, free of unwarranted outside influences on their decisions. Physicians whose relationships might be seen to influence their decisions and behavior, and thereby their ability to fulfill their responsibilities to patients, must be fully transparent about those relationships. Two types of interactions and activities involving physicians are most relevant: (1) commercial or research relationships between a physician expert and a health care company designed to advance an idea or promote a product, and (2) various gifts, sponsored meals, and educational offerings that come directly or indirectly to physicians from these companies.

Whether these and other ties to industry are important is not a new issue for medicine. Considerations regarding the potential influence of commercial ties date back at least to the 1950s and 1960s. In 1991, Relman reminded physicians that they have “a unique opportunity to assume personal responsibility for important decisions that are not influenced by or subordinated to the purposes of third parties.” However, examples of potential subordination are easily found. There are reports of physicians who are paid handsomely to promote a drug or device, essentially serving as a company spokesperson; of investigators who have ownership in the company that stands to gain if the clinical trial is successful; and of clinical guideline panels that are dominated by experts with financial ties to companies whose products are relevant to the disease or condition at hand.

The article is here.

Friday, May 19, 2017

Conflict of Interest: Why Does It Matter?

Harvey V. Fineberg
JAMA. 2017;317(17):1717-1718.

Preservation of trust is the essential purpose of policies about conflict of interest. Physicians have many important roles including caring for individual patients, protecting the public’s health, engaging in research, reporting scientific and clinical discoveries, crafting professional guidelines, and advising policy makers and regulatory bodies. Success in all these functions depends on others—laypersons, professional peers, and policy leaders—believing and acting on the word of physicians. Therefore, the confidence of others in physician judgment is of paramount importance. When trust in physician judgment is impaired, the role of physicians is diminished.

Physicians should make informed, disinterested judgments. To be disinterested means being free of personal advantage. The type of advantage typically of concern in situations involving physicians is financial, and in this context a conflict of interest generally means a financial interest that relates to the issue at hand. More specifically, a conflict of interest can be discerned by using a reasonable person standard; ie, a conflict of interest exists when a reasonable person would interpret the financial circumstances pertaining to a situation as potentially sufficient to influence the judgment of the physician in question.

The article is here.

Moral transgressions corrupt neural representations of value

Molly J. Crockett, J. Siegel, Z. Kurth-Nelson, P. Dayan, and R. Dolan
Nature Neuroscience

Abstract

Moral systems universally prohibit harming others for personal gain. However, we know little about how such principles guide moral behavior. Using a task that assesses the financial cost participants ascribe to harming others versus themselves, we probed the relationship between moral behavior and neural representations of profit and pain. Most participants displayed moral preferences, placing a higher cost on harming others than themselves. Moral preferences correlated with neural responses to profit: participants with stronger moral preferences had lower dorsal striatal responses to profit gained from harming others. Lateral prefrontal cortex encoded profit gained from harming others, but not self, and tracked the blameworthiness of harmful choices. Moral decisions also modulated functional connectivity between lateral prefrontal cortex and the profit-sensitive region of dorsal striatum. The findings suggest that moral behavior in our task is linked to a neural devaluation of reward realized by a prefrontal modulation of striatal value representations.

The article is here.

Thursday, May 18, 2017

The secret to honesty revealed: it feels better

Henry Bodkin
The Telegraph
Originally published May 1, 2017

It is a mystery that has perplexed psychologists and philosophers since the dawn of humanity: why are most people honest?

Now, using a complex array of MRI machines and electric shock devices, scientists claim to have found the answer.

(cut)

“Our findings suggest the brain internalizes the moral judgments of others, simulating how much others might blame us for potential wrongdoing, even when we know our actions are anonymous,” said Dr Crockett.

The scans also revealed that an area of the brain involved in making moral judgments, the lateral prefrontal cortex, was most active in trials where inflicting pain yielded minimal profit.

The article is here.

Morality constrains the default representation of what is possible

Jonathan Phillips and Fiery Cushman
Proc Natl Acad Sci U S A. 2017

The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

The paper is here.

Wednesday, May 17, 2017

Moral conformity in online interactions

Meagan Kelly, Lawrence Ngo, Vladimir Chituc, Scott Huettel, and Walter Sinnott-Armstrong
Social Influence 

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defended the responses in either a rational or an emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

The article is here.

Where did Nazi doctors learn their ethics? From a textbook

Michael Cook
BioEdge.org
Originally posted April 29, 2017

German medicine under Hitler resulted in so many horrors – eugenics, human experimentation, forced sterilization, involuntary euthanasia, mass murder – that there is a temptation to say that “Nazi doctors had no ethics”.

However, according to an article in the Annals of Internal Medicine by Florian Bruns and Tessa Chelouche (from Germany and Israel respectively), this was not the case at all. In fact, medical ethics was an important part of the medical curriculum between 1939 and 1945. Nazi officials established lectureships in every medical school in Germany for a subject called “Medical Law and Professional Studies” (MLPS).

There was no lack of ethics. It was just the wrong kind of ethics.

(cut)

It is important to realize that ethical reasoning can be corrupted and that teaching ethics is, in itself, no guarantee of the moral integrity of physicians.

The article is here.

Tuesday, May 16, 2017

Talking in Euphemisms Can Chip Away at Your Sense of Morality

Laura Niemi, Alek Chakroff, and Liane Young
The Science of Us
Originally published April 7, 2017

Here is an excerpt:

Taken together, the results suggest that unethical behavior becomes easier when we perceive our own actions in indirect terms, which makes things that we would otherwise balk at seem a bit more palatable. In other words, deploying indirect speech doesn’t just help us evade blame from others — it also helps us to convince ourselves that unethical acts aren’t so bad after all.

That’s not to say that this is a conscious process. A speaker who shrouds his harmful intentions in indirect speech may understand that this will help him hold on to his standing in the public eye, or maintain his reputation among those closest to him — a useful tactic when those intentions are likely to be condemned or fall outside the bounds of socially acceptable behavior. But that same speaker may be unaware of just how much his indirect speech is easing his own psyche, too.

The article is here.

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits—like computers do.

The article is here.