Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, November 11, 2019

Incidental emotions in moral dilemmas: the influence of emotion regulation.

Raluca D. Szekely & Andrei C. Miu
Cogn Emot. 2015;29(1):64-75.
doi: 10.1080/02699931.2014.895300.

Abstract

Recent theories have argued that emotions play a central role in moral decision-making and suggested that emotion regulation may be crucial in reducing emotion-linked biases. The present studies focused on the influence of emotional experience and individual differences in emotion regulation on moral choice in dilemmas that pit harming another person against social welfare. During these "harm to save" moral dilemmas, participants experienced mostly fear and sadness but also other emotions such as compassion, guilt, anger, disgust, regret and contempt (Study 1). Fear and disgust were more frequently reported when participants made deontological choices, whereas regret was more frequently reported when participants made utilitarian choices. In addition, habitual reappraisal negatively predicted deontological choices, and this effect was significantly carried through emotional arousal (Study 2). Individual differences in the habitual use of other emotion regulation strategies (i.e., acceptance, rumination and catastrophising) did not influence moral choice. The results of the present studies indicate that negative emotions are commonly experienced during "harm to save" moral dilemmas, and they are associated with a deontological bias. By efficiently reducing emotional arousal, reappraisal can attenuate the emotion-linked deontological bias in moral choice.

General Discussion

Using harm-to-save (H2S) moral dilemmas, the present studies yielded three main findings: (1) a wide spectrum of emotions is experienced during these moral dilemmas, with self-focused emotions such as fear and sadness being the most common (Study 1); (2) there is a positive relation between emotional arousal during moral dilemmas and deontological choices (Studies 1 and 2); and (3) individual differences in reappraisal, but not in other emotion regulation strategies (i.e., acceptance, rumination or catastrophising), are negatively associated with deontological choices, and this effect is carried through emotional arousal (Study 2).

A pdf can be downloaded here.


Why a computer will never be truly conscious

Subhash Kak
The Conversation
Originally published October 16, 2019

Here is an excerpt:

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then query their memory to ask whether that pattern could be created by alternate views of an item identified some time earlier.

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

The info is here.

Sunday, November 10, 2019

For whom does determinism undermine moral responsibility? Surveying the conditions for free will across cultures

Ivar Hannikainen and others
PsyArXiv Preprints
Originally published October 15, 2019

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility even when the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

The research is here.

Saturday, November 9, 2019

Debunking (the) Retribution (Gap)

Steven R. Kraaijeveld
Science and Engineering Ethics
https://doi.org/10.1007/s11948-019-00148-6

Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Friday, November 8, 2019

Privacy is a collective concern

Carissa Veliz
newstatesman.com
Originally published October 22, 2019

People often give a personal explanation of whether they protect the privacy of their data. Those who don’t care much about privacy might say that they have nothing to hide. Those who do worry about it might say that keeping their personal data safe protects them from being harmed by hackers or unscrupulous companies. Both positions assume that caring about and protecting one’s privacy is a personal matter. This is a common misunderstanding.

It’s easy to assume that because some data is “personal”, protecting it is a private matter. But privacy is both a personal and a collective affair, because data is rarely used on an individual basis.

(cut)

Because we are intertwined in ways that make us vulnerable to each other, we are responsible for each other’s privacy. I might, for instance, be extremely careful with my phone number and physical address. But if you have me as a contact in your mobile phone and then grant companies access to that phone, my privacy will be at risk regardless of the precautions I have taken. This is why you shouldn’t store more sensitive data than necessary in your address book, post photos of others without their permission, or even expose your own privacy unnecessarily. When you expose information about yourself, you are almost always exposing information about others.

The info is here.

A Fake Psychologist Treated Troubled Children, Prosecutors Say

Michael Gold
The New York Times
Originally published September 29, 2019

Here is an excerpt:

But Mr. Payne has no formal counseling training that prosecutors were aware of. He told investigators that he was a doctor with a “home-schooled, unconventional education during the Black Panther era,” according to court papers.

None of Mr. Payne’s patients had been hospitalized or physically harmed, an official said. Some of his patients liked him and his treatment methods.

But others became suspicious during therapy sessions. Mr. Payne would often talk about his own life and not ask patients about theirs, the official said. He would also repeat exercises and worksheets in some of his sessions with little explanation, giving patients the sense that he had run out of ideas to treat them.

According to prosecutors, Mr. Payne and Ms. Tobierre-Desir worked at three locations: his main office in a large building in Brooklyn Heights, a smaller building in Prospect-Lefferts Gardens and the offices of a nonprofit based at Kings County Hospital Center, one of the hospitals with which Mr. Payne claimed to be affiliated.

Mr. Payne’s relationship with the nonprofit, the Kings Against Violence Initiative, was unclear. The group did not respond to requests for comment on Friday.

The info is here.

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch to holding industry accountable.
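The "immutable records" the excerpt describes rest on hash chaining: each record is stored with a hash that covers both the record and the previous record's hash, so changing any earlier entry invalidates everything after it. This is a minimal illustrative sketch (not from the article; the land-title records and function names are hypothetical):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash,
    # so altering any earlier record changes every later hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    # Recompute each hash; any tampered record or broken link fails.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

titles = [{"parcel": 1, "owner": "A"}, {"parcel": 2, "owner": "B"}]
chain = build_chain(titles)
assert verify_chain(chain)

# Tampering with an earlier record breaks verification.
chain[0]["record"]["owner"] = "C"
assert not verify_chain(chain)
```

Real blockchains add consensus and replication on top of this, but the tamper-evidence that makes land-title or provenance records trustworthy is already visible in the toy version.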

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.

The info is here.

Are We Causing Moral Injury to Our Physician Workforce?

Carolyn Meltzer
theneuroethicsblog.com
Originally posted November 5, 2019

Here is an excerpt:

The term moral injury was coined by psychiatrist Jonathan Shay, MD PhD, who, while working at a Veterans Affairs hospital, noted that moral injury is present when 1) there is a betrayal of what is considered morally correct, 2) by someone who holds legitimate authority (conceptualized by Shay as “leadership malpractice”), and 3) in a high-stakes situation (Shay and Monroe 1998). Nash and Little (2013) went on to propose a model that identified the types of war-zone events that contributed to moral injury as witnessing events that are morally wrong (or strongly contradicted one’s own moral code), acting in ways that violate moral values, or feeling betrayed by those who were once trusted. In a fascinating study using the Moral Injury Event Scale and resting-state functional magnetic resonance imaging (fMRI), Sun and colleagues (2019) were able to discern a distinct pattern of altered functional neural connectivity in soldiers exposed to morally injurious events. In fact, functional connectivity between the left inferior parietal lobule and bilateral precuneus was positively related with the soldiers’ post-traumatic stress disorder (PTSD) symptoms and negatively related with scores on the Moral Injury Event Scale.

Moral injury has been recently applied as a construct for physician burnout. Those who argue for this framework propose that structural and cultural factors have contributed to physician burden by undervaluing physicians and over-relying on financial metrics (such as relative value units, RVUs) as the primary surrogate of physician productivity (Nurok and Gewertz 2019). Turner (2019) recently compared the military experience to that of physician providers. While one may draw similarities between the front line of healthcare delivery and that experienced by soldiers, Turner argues that a fundamental tenet of military leadership - that leaders eat last – provides effective support for the health of the workforce. In increasingly large healthcare organizations managed by administrators who may be distant from the front line and reliant on metrics of productivity, the necessary sense of empathy and support from leadership can seem lacking.

The info is here.

Wednesday, November 6, 2019

Insurance companies aren’t doctors. So why do we keep letting them practice medicine?

William E. Bennett Jr.
The Washington Post
Originally posted October 22, 2019

Here are two excerpts:

Here’s the thing: After a few minutes of pleasant chat with a doctor or pharmacist working for the insurance company, they almost always approve coverage and give me an approval number. There’s almost never a back-and-forth discussion; it’s just me saying a few key words to make sure the denial is reversed.

Because it ends up with the desired outcome, you might think this is reasonable. It’s not. On most occasions the “peer” reviewer is unqualified to make an assessment about the specific services.

They usually have minimal or incorrect information about the patient.

Not one has examined or spoken with the patient, as I have.

None of them have a long-term relationship with the patient and family, as I have.

The insurance company will say this system makes sure patients get the right medications. It doesn’t. It exists so that many patients will fail to get the medications they need.

(cut)

This is a system that saves insurance companies money by reflexively denying medical care that has been determined necessary by a physician.

And it should come as no surprise that denials have a disproportionate effect on vulnerable patient populations, such as sexual-minority youths and cancer patients.

We can do better. If physicians order too many expensive tests or drugs, there are better ways to improve their performance and practice, such as quality-improvement initiatives through electronic medical records.

When an insurance company reflexively denies care and then makes it difficult to appeal that denial, it is making health-care decisions for patients.

The info is here.