Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology, 108, 104499.

Abstract

A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on one single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.


My summary:

This research provides new insight into how people make moral judgments. It suggests that people are not simply weighing the number of lives saved against the number lost: they also take into account the ratio of lives saved to lives lost and the probability of each outcome occurring. This has important implications for our understanding of moral decision-making and for the design of moral education programs.
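To make the threshold-deontology reading concrete, here is a toy decision rule in Python. This is my own illustration, not the authors' model; the 5:1 threshold and the expected-ratio formula are assumptions chosen for exposition.

```python
# Toy model of threshold deontology (an illustration, not the authors' model).
# An act that kills some to save others is judged permissible only when the
# expected ratio of lives saved to lives ended clears the agent's threshold.

def expected_ratio(saved, killed, p_save=1.0, p_kill=1.0):
    """Expected lives saved per expected life ended."""
    return (saved * p_save) / (killed * p_kill)

def permissible(saved, killed, threshold=5.0, p_save=1.0, p_kill=1.0):
    """Deontological constraint against killing, overridden past the threshold."""
    return expected_ratio(saved, killed, p_save, p_kill) >= threshold

print(permissible(saved=5, killed=1))      # True: the classic 5:1 tradeoff
print(permissible(saved=3, killed=1))      # False: net-positive, but below threshold
# Study 2's point: ratio, not difference, drives judgments. 500 vs. 100 has
# the same 5:1 ratio as 5 vs. 1, despite a far larger difference in lives.
print(permissible(saved=500, killed=100))  # True
```

Varying the threshold across simulated agents reproduces the paper's observation that the same person can look "consequentialist" at one ratio and "deontological" at another.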

Friday, April 14, 2023

The moral authority of ChatGPT

Krügel, S., Ostermaier, A., & Uhl, M. (2023).
arXiv preprint, arxiv.org

Abstract

ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out to be highly inconsistent as a moral advisor. Nonetheless, it influences users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improve users’ judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.

Discussion

We find that ChatGPT readily dispenses moral advice although it lacks a firm moral stance. Indeed, the chatbot gives randomly opposite advice on the same moral issue. Nonetheless, ChatGPT’s advice influences users’ moral judgment. Moreover, users underestimate ChatGPT’s influence and adopt its random moral stance as their own. Hence, ChatGPT threatens to corrupt rather than promises to improve moral judgment. Transparency is often proposed as a means to ensure the responsible use of AI. However, transparency about ChatGPT being a bot that imitates human speech does not turn out to affect how much it influences users.

Our results raise the question of how to ensure the responsible use of AI if transparency is not good enough. Rules that preclude the AI from answering certain questions are a questionable remedy. ChatGPT has such rules but can be brought to break them. Prior evidence suggests that users are careful about AI once they have seen it err. However, we probably should not count on users to find out about ChatGPT’s inconsistency through repeated interaction. The best remedy we can think of is to improve users’ digital literacy and help them understand the limitations of AI.
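As a back-of-the-envelope illustration of what "randomly opposite advice" means, one could probe a chatbot's consistency by asking the same moral question in fresh sessions and tallying the answers. The sketch below is hypothetical and not from the authors' materials: ask_chatbot is a placeholder, simulated here with a random choice rather than a real API call.

```python
# Hypothetical consistency probe (not the authors' materials or code).
import random
from collections import Counter

def ask_chatbot(question: str) -> str:
    """Placeholder for a real chat API call. To mimic an advisor with no
    stable stance, this stub answers at random."""
    return random.choice(["yes", "no"])

def probe_consistency(question: str, n_trials: int = 20) -> Counter:
    """Ask the same moral question in repeated fresh sessions and tally replies."""
    return Counter(ask_chatbot(question) for _ in range(n_trials))

tally = probe_consistency("Should I sacrifice one person to save five?")
print(tally)  # A consistent advisor would give (nearly) all-yes or all-no.
```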

Tuesday, February 28, 2023

Transformative experience and the right to revelatory autonomy

Farbod Akhlaghi
Analysis
Originally published: 31 December 2022

Abstract

Sometimes it is not us but those to whom we stand in special relations that face transformative choices: our friends, family or beloved. A focus upon first-personal rational choice and agency has left crucial ethical questions regarding what we owe to those who face transformative choices largely unexplored. In this paper I ask: under what conditions, if any, is it morally permissible to interfere to try to prevent another from making a transformative choice? Some seemingly plausible answers to this question fail precisely because they concern transformative experiences. I argue that we have a distinctive moral right to revelatory autonomy grounded in the value of autonomous self-making. If this right is outweighed then, I argue, interfering to prevent another making a transformative choice is permissible. This conditional answer lays the groundwork for a promising ethics of transformative experience.

Conclusion

Ethical questions regarding transformative experiences are morally urgent. A complete answer to our question requires ascertaining precisely how strong the right to revelatory autonomy is and what competing considerations can outweigh it. These are questions for another time, where the moral significance of revelation and self-making, the competing weight of moral and non-moral considerations, and the sense in which some transformative choices are more significant to one’s identity and self-making than others must be further explored.

But to identify the right to revelatory autonomy and duty of revelatory non-interference is significant progress. For it provides a framework to address the ethics of transformative experience that avoids complications arising from the epistemic peculiarities of transformative experiences. It also allows us to explain cases where we are permitted to interfere in another’s transformative choice and why interference in some choices is harder to justify than others, whilst recognizing plausible grounds for the right to revelatory autonomy itself in the moral value of autonomous self-making. This framework, moreover, opens novel avenues of engagement with wider ethical issues regarding transformative experience, for example concerning social justice or surrogate transformative choice-making. It is, at the very least, a view worthy of further consideration.


This reasoning applies to psychologists practicing psychotherapy. Unless significant danger is present, psychologists need to avoid intrusive advocacy, which pulls autonomy away from the patient. Soft paternalism may be appropriate in psychotherapy only when trying to avert significant harm.

Friday, October 8, 2021

Can induced reflection affect moral decision-making?

Daniel Spears, et al. (2021)
Philosophical Psychology, 34(1), 28-46
DOI: 10.1080/09515089.2020.1861234

Abstract

Evidence about whether reflective thinking may be induced and whether it affects utilitarian choices is inconclusive. Research suggests that answering items correctly in the Cognitive Reflection Test (CRT) before responding to dilemmas may lead to more utilitarian decisions. However, it is unclear to what extent this effect is driven by the inhibition of intuitive wrong responses (reflection) versus the requirement to engage in deliberative processing. To clarify this issue, participants completed either the CRT or the Berlin Numeracy Test (BNT) – which does not require reflection – before responding to moral dilemmas. To distinguish between the potential effect of participants’ previous reflective traits and that of performing a task that can increase reflectivity, we manipulated whether participants received feedback for incorrect items. Findings revealed that both CRT and BNT scores predicted utilitarian decisions when feedback was not provided. Additionally, feedback enhanced performance for both tasks, although it only increased utilitarian decisions when it was linked to the BNT. Taken together, these results suggest that performance in a numeric task that requires deliberative thinking may predict utilitarian responses to moral dilemmas. The finding that feedback increased utilitarian decisions only in the case of BNT casts doubt upon the reflective-utilitarian link.

From the General Discussion

Our data, however, did not fully support these predictions. Although feedback resulted in more utilitarian responses to moral dilemmas, this effect was mostly attributable to feedback on the BNT. The effect was not attributable to differences in baseline task performance. Additionally, both CRT and BNT scores predicted utilitarian responses when feedback was not provided. That performance in the CRT predicts utilitarian decisions is in agreement with a previous study linking cognitive reflection to utilitarian choice (Paxton et al., 2012; but see Sirota, Kostovicova, Juanchich, & Dewberry, preprint, for the absence of an effect when using a verbal CRT without a numeric component).
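To see the shape of an analysis like "scores predicted utilitarian decisions," here is a sketch with simulated data. It is not the study's data or code; the sample size and coefficients are invented for demonstration.

```python
# Illustrative analysis with simulated data (not the study's data or code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
crt = rng.integers(0, 4, n)   # CRT scores: 0-3 items correct
bnt = rng.integers(0, 5, n)   # BNT scores: 0-4 items correct

# Invent a positive score-utilitarianism link for demonstration.
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.4 * crt + 0.3 * bnt)))
utilitarian = rng.binomial(1, p)  # 1 = utilitarian response to a dilemma

# Do higher test scores go with more utilitarian responses?
print(stats.pearsonr(crt, utilitarian))  # point-biserial correlation
print(stats.pearsonr(bnt, utilitarian))
```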

Wednesday, March 25, 2020

COVID-19 and the Impossibility of Morality

John Danaher
philosophical disquisitions
Originally published 16 March 2020

The stories coming out of Italy over the past two weeks have been chilling. With their healthcare system overwhelmed by COVID-19 cases, Italian doctors are facing tragic triage decisions on a daily basis. In severe cases of COVID-19, patients need ventilators to survive. But there are only so many ventilators to go around. What if you don’t have enough? Who should you save? The 80-year-old with COPD and other medical complications, or the slightly healthier 50-year-old without them? The 45-year-old mother of two, or the 55-year-old single man? The 29-year-old healthcare worker, or the 38-year-old diabetes patient?

Questions like these might sound like thought experiments cooked up in a first year ethics class, but they are not. Indeed, decision-making of this sort is not uncommon in crisis situations. For example, infamous tales are told about what happened at the Memorial Medical Center in New Orleans during Hurricane Katrina in 2005. With rising flood waters, no electricity and several critically ill patients who could not be evacuated, medical workers at Memorial had to make some tough decisions: abandon patients and leave them to die in agony, or administer euthanizing drugs to end their suffering more quickly? The suspicion is that many chose the latter course of action.

And medical decisions are just the tip of the iceberg. As we are all now being asked to isolate ourselves for the common good, many of us will find ourselves confronting similar, albeit less high stakes decisions. Which is more important: my duty to care for my elderly parents or my duty to protect them (and others) from potential transmission of disease? My duty to work to ensure that other people have the essential services they need or my duty to myself and my family to protect them from illness? We may not like to ask these questions, but we cannot avoid them.

But what are the answers? What should people do in cases like this? I don't know that I have much in the way of specific guidance to offer, but I do have a point that I think is worth making. It's at times like this that the essentially tragic nature of much moral decision-making reveals itself. This tragedy lurks in the background most of the time, but it is brought into sharp relief at times like this. Once we are aware of this ineluctable tragedy we might be inclined to change some of our common moral practices. We might be less inclined to blame others for the choices they make; and we might be more conscious of the pain of moral regret.


Saturday, November 18, 2017

Differential inter-subject correlation of brain activity when kinship is a variable in moral dilemma

Mareike Bacha-Trams, Enrico Glerean, Robin Dunbar, Juha M. Lahnakoski, et al.
Scientific Reports 7, Article number: 14244

Abstract

Previous behavioural studies have shown that humans act more altruistically towards kin. Whether and how knowledge of genetic relatedness translates into differential neurocognitive evaluation of observed social interactions has remained an open question. Here, we investigated how the human brain is engaged when viewing a moral dilemma between genetic vs. non-genetic sisters. During functional magnetic resonance imaging, a movie was shown, depicting refusal of organ donation between two sisters, with subjects guided to believe the sisters were related either genetically or by adoption. Although 90% of the subjects self-reported that genetic relationship was not relevant, their brain activity told a different story. Comparing correlations of brain activity across all subject pairs between the two viewing conditions, we found significantly stronger inter-subject correlations in insula, cingulate, medial and lateral prefrontal, superior temporal, and superior parietal cortices, when the subjects believed that the sisters were genetically related. Cognitive functions previously associated with these areas include moral and emotional conflict regulation, decision making, and mentalizing, suggesting more similar engagement of such functions when observing refusal of altruism from a genetic sister. Our results show that mere knowledge of a genetic relationship between interacting persons robustly modulates social cognition of the perceiver.
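For readers unfamiliar with the method: inter-subject correlation (ISC) measures how similarly a brain region's activity unfolds across viewers by correlating time courses for every pair of subjects, here compared between the two belief conditions. A minimal sketch with simulated signals follows; the study's actual pipeline involves preprocessing and permutation statistics.

```python
# Minimal inter-subject correlation (ISC) sketch with simulated time courses;
# the study's real pipeline is far more involved.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 10, 300
# One region's activity time course per subject (rows), simulated here.
data = rng.standard_normal((n_subjects, n_timepoints))

def mean_isc(ts):
    """Average Pearson correlation of the time course over all subject pairs."""
    pairs = [np.corrcoef(ts[i], ts[j])[0, 1]
             for i, j in combinations(range(len(ts)), 2)]
    return float(np.mean(pairs))

# Compare mean ISC between two viewing conditions (here, arbitrary halves).
print(mean_isc(data[:5]), mean_isc(data[5:]))
```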


Saturday, November 11, 2017

Did I just feed an addiction? Or ease a man’s pain? Welcome to modern medicine’s moral cage fight

Jay Baruch
STAT News
Originally published October 23, 2017

Here are two excerpts:

Will the opioid pills Sonny is asking for treat his pain, feed an addiction, or both? Will prescribing them fulfill my moral responsibility to alleviate his distress, contribute to the supply chain in the illicit pill economy, or both? Prescribing guidelines from the Centers for Disease Control and Prevention and recommendations from medical specialties and local hospitals are well-intentioned and necessary. But they do little to address the central anxiety that makes this decision a source of distress for physicians like me. It’s hard to evaluate pain without making some judgment about the patient and the patient’s story.

(cut)

A good story shortcuts analytical thinking. It can work its charms without our knowledge and sometimes against our better judgment. Once an emotional connection is made and the listener becomes invested in the story, the believability of the story matters less. In fact, the more extreme the story, the greater its capacity to enthrall the listener or reader.

Stories can elicit empathy and influence behavior in part by stimulating the release of the neurotransmitter oxytocin, which has ties to generosity, trustworthiness, and mother-infant bonding. I’m intrigued by the possibility that clinicians’ vulnerability to deceit is often grounded in the empathy they are reported to be lacking.


Wednesday, October 11, 2017

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must put either their own passenger or a pedestrian at risk. This is a real-world version of an ethical dilemma called “The Trolley Problem.”


Thursday, May 25, 2017

In a moral dilemma, choose the one you love: Impartial actors are seen as less moral than partial ones

Jamie S. Hughes
British Journal of Social Psychology

Abstract

Although impartiality and concern for the greater good are lauded by utilitarian philosophies, it was predicted that when values conflict, those who acted impartially rather than partially would be viewed as less moral. Across four studies, using life-or-death scenarios and more mundane ones, support for the idea that relationship obligations are important in moral attribution was found. In Studies 1–3, participants rated an impartial actor as less morally good and his or her action as less moral compared to a partial actor. Experimental and correlational evidence showed the effect was driven by inferences about an actor's capacity for empathy and compassion. In Study 4, the relationship obligation hypothesis was refined. The data suggested that violations of relationship obligations are perceived as moral as long as strong alternative justifications sanction them. Discussion centres on the importance of relationships in understanding moral attributions.


Tuesday, February 14, 2017

Are Kantians Better Social Partners? People Making Deontological Judgments are Perceived to Be More Prosocial than They Actually are

Capraro, V., Sippel, J., Zhao, B., et al. (2017, January 25).

Abstract

Why do people make deontological decisions, although they often lead to overall unfavorable outcomes? One account is receiving considerable attention: deontological judgments may signal commitment to prosociality and thus may increase people's chances of being selected as social partners --- which carries obvious long-term benefits. Here we test this framework by experimentally exploring whether people making deontological judgments are expected to be more prosocial than those making consequentialist judgments and whether they are actually so. We use two ways of identifying deontological choices. In a first set of three studies, we use a single moral dilemma whose consequentialist course of action requires a strong violation of Kant's practical imperative that humans should never be used solely as a mere means. In a second set of two studies, we use two moral dilemmas: one whose consequentialist course of action requires no violation of the practical imperative, and one whose consequentialist course of action requires a strong violation of the practical imperative; and we focus on people changing decision when passing from the former dilemma to the latter one, thereby revealing a strong reluctance to violate Kant's imperative. Using economic games, we take three measures of prosociality: trustworthiness, altruism, and cooperation. Our results procure converging evidence for a perception bias according to which people making deontological choices are believed to be more prosocial than those making consequentialist choices, but actually they are not so. Thus, these results provide a piece of evidence against the assumption that deontological judgments signal commitment to prosociality.


Monday, August 29, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

By Jacob Brogan
Future Tense
Originally published August 11, 2016

Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandom notes, the site effectively gamifies the classic trolley problem, folding in a variety of complicated variations along the way. You’ll have to decide whether a vehicle should save its passengers or people in an intersection. Others will present two differently composed groups of pedestrians—say, a handful of female doctors or a collection of besuited men—and ask which an empty car should slam into. Further complications—including the presence of animals and details about whether the pedestrians have the right of way—sometimes muddle the question even more.

Tuesday, April 26, 2016

Inference of Trustworthiness from Intuitive Moral Judgments

Jim A. C. Everett, David A. Pizarro, M. J. Crockett.
Journal of Experimental Psychology: General, 2016
DOI: 10.1037/xge0000165

Abstract

Moral judgments play a critical role in motivating and enforcing human cooperation. Research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across five studies we show that people who make characteristically deontological judgments (as opposed to judgments that align with other ethical traditions) are preferred as social partners, perceived as more moral and trustworthy, and trusted more in economic games. These findings provide empirical support for a partner choice account for why intuitive moral judgments often align with deontological theories.
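"Trusted more in economic games" usually refers to the trust game: an investor may transfer money, the transfer is multiplied in transit, and a trustee decides how much to send back. Here is a toy version; the tripling multiplier is the conventional choice and an assumption on my part, since the paper's exact parameters may differ.

```python
# Toy trust game (conventional form; the paper's exact parameters may differ).
def trust_game(endowment, sent, returned_frac, multiplier=3.0):
    """Return (investor_payoff, trustee_payoff) for one round."""
    assert 0 <= sent <= endowment and 0.0 <= returned_frac <= 1.0
    pot = sent * multiplier              # the transfer grows in transit
    returned = returned_frac * pot       # trustee decides what to give back
    investor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return investor_payoff, trustee_payoff

# Fully trusting a partner judged trustworthy, who then splits the pot evenly:
print(trust_game(endowment=10, sent=10, returned_frac=0.5))  # (15.0, 15.0)
# Trusting a partner who returns nothing:
print(trust_game(endowment=10, sent=10, returned_frac=0.0))  # (0.0, 30.0)
```

How much participants send to a partner who made a deontological versus consequentialist judgment operationalizes how much they trust that partner.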


Saturday, August 1, 2015

Dilemma 33: Breaking Bad (or Good)

Dr. Jesse Pinkman has been working for about a year with Ms. Skyler White, a 26-year-old professional, on managing her symptoms of depression and anxiety. The patient smokes marijuana regularly, which has been a concern for Dr. Pinkman.

Skyler arrives late to her appointment, looking frazzled. She explains that her friend overdosed on heroin the previous evening and that she has spent the past 12 hours in the ER. Her friend will likely survive, but may be left with residual cognitive problems.

Skyler reports feeling horribly guilty because she introduced her friend to her next-door neighbor, who is a drug dealer. Her friend always stops by to see Skyler first before purchasing drugs, and Skyler purchases her marijuana from the same dealer.

After processing the events of the previous evening, Skyler states that she will move away from the drug dealer. She no longer wants to live that close to him or to indirectly cause harm to anyone else. The police are actively investigating, but Skyler does not want to divulge any information; she does not want to get involved. Skyler makes an appointment for the next week and leaves feeling somewhat better.

Dr. Pinkman remains preoccupied with what Skyler reported. He knows the dealer’s name from previous sessions and can work out the dealer’s address from his patient’s address.

Dr. Pinkman is contemplating calling in an anonymous tip to the police. He is aware of the increase in heroin use in his community, and he recognizes his own moral outrage and sense of injustice in this situation. Struggling with whether or not to make the anonymous report, Dr. Pinkman calls you for a consultation.

What are the competing ethical principles in this situation?

How would you feel if you were Dr. Pinkman?

What are some of the positive and negative consequences about Dr. Pinkman making the anonymous report?

How do your own professional values and personal morals influence how you would respond to Dr. Pinkman?

How would you respond to Dr. Pinkman’s moral outrage?

Would your answers differ if the friend had died?

Would your answers differ if the patient were of low socioeconomic status?

Would your answers differ if Skyler were a teenager?

Thursday, November 13, 2014

The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas

Aaron A. Duke and Laurent Bègue
Cognition
Volume 134, January 2015, Pages 121–127

Highlights

• Greene’s dual-process theory of moral reasoning needs revision.
• Blood alcohol concentration is positively correlated with utilitarianism.
• Self-reported disinhibition is positively correlated with utilitarianism.
• Decreased empathy predicts utilitarianism better than increased deliberation.

Abstract

The hypothetical moral dilemma known as the trolley problem has become a methodological cornerstone in the psychological study of moral reasoning and yet, there remains considerable debate as to the meaning of utilitarian responding in these scenarios. It is unclear whether utilitarian responding results primarily from increased deliberative reasoning capacity or from decreased aversion to harming others. In order to clarify this question, we conducted two field studies to examine the effects of alcohol intoxication on utilitarian responding. Alcohol holds promise in clarifying the above debate because it impairs both social cognition (i.e., empathy) and higher-order executive functioning. Hence, the direction of the association between alcohol and utilitarian vs. non-utilitarian responding should inform the relative importance of both deliberative and social processing systems in influencing utilitarian preference. In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France, participants were presented with a moral dilemma assessing their willingness to sacrifice one life to save five others. Participants’ blood alcohol concentrations were found to positively correlate with utilitarian preferences (r = .31, p < .001) suggesting a stronger role for impaired social cognition than intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma. Implications for Greene’s dual-process model of moral reasoning are discussed.
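The headline statistic is a simple correlation between blood alcohol concentration and utilitarian preference. Here is a sketch of that computation with simulated numbers, not the field data; only the sample size and target correlation come from the abstract.

```python
# Sketch of the headline correlation with simulated data (the paper reports
# r = .31 in a combined sample of n = 103; these values are invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 103
bac = rng.uniform(0.0, 0.20, n)            # blood alcohol concentrations
# Invent a utilitarian-preference score that rises modestly with BAC.
utilitarian = 0.31 * stats.zscore(bac) + rng.standard_normal(n)

r, p = stats.pearsonr(bac, utilitarian)
print(f"r = {r:.2f}, p = {p:.3f}")
```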

Tuesday, October 14, 2014

The Moral Instinct

By Steven Pinker
The New York Times
Originally posted January 13, 2008

Here is an excerpt:

The Moralization Switch

The starting point for appreciating that there is a distinctive part of our psychology for morality is seeing how moral judgments differ from other kinds of opinions we have on how people ought to behave. Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).

The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.”


Sunday, October 5, 2014

Whistleblowing and the Bioethicist’s Public Obligations

By D. Robert MacDougall
Cambridge Quarterly of Healthcare Ethics, 23(4), 431-442, October 2014

Abstract:

Bioethicists are sometimes thought to have heightened obligations by virtue of the fact that their professional role addresses ethics or morals. For this reason it has been argued that bioethicists ought to “whistleblow”—that is, publicly expose the wrongful or potentially harmful activities of their employer—more often than do other kinds of employees. This article argues that bioethicists do indeed have a heightened obligation to whistleblow, but not because bioethicists have heightened moral obligations in general. Rather, the special duties of bioethicists to act as whistleblowers are best understood by examining the nature of the ethical dilemma typically encountered by private employees and showing why bioethicists do not encounter this dilemma in the same way. Whistleblowing is usually understood as a moral dilemma involving conflicting duties to two parties: the public and a private employer. However, this article argues that this way of understanding whistleblowing has the implication that professions whose members identify their employer as the public—such as government employees or public servants—cannot consider whistleblowing a moral dilemma, because obligations are ultimately owed to only one party: the public. The article contends that bioethicists—even when privately employed—are similar to government employees in the sense that they do not have obligations to defer to the judgments of those with private interests. Consequently, bioethicists may be considered to have a special duty to whistleblow, although for different reasons than those usually cited.

The entire article is behind a paywall.