Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Dilemmas.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. Until a few years ago, however, the literature was silent on the moral distress of medical students and physicians.

The article is here.

Tuesday, September 19, 2017

The strategic moral self: Self-presentation shapes moral dilemma judgments

Sarah C. Rom and Paul Conway
Journal of Experimental Social Psychology
Volume 74, January 2018, Pages 24–37

Abstract

Research has focused on the cognitive and affective processes underpinning dilemma judgments where causing harm maximizes outcomes. Yet, recent work indicates that lay perceivers infer the processes behind others' judgments, raising two new questions: whether decision-makers accurately anticipate the inferences perceivers draw from their judgments (i.e., meta-insight), and whether decision-makers strategically modify judgments to present themselves favorably. Across seven studies, a) people correctly anticipated how their dilemma judgments would influence perceivers' ratings of their warmth and competence, though self-ratings differed (Studies 1–3), b) people strategically shifted public (but not private) dilemma judgments to present themselves as warm or competent depending on which traits the situation favored (Studies 4–6), and c) self-presentation strategies augmented perceptions of the weaker trait implied by their judgment (Study 7). These results suggest that moral dilemma judgments arise out of more than just basic cognitive and affective processes; complex social considerations causally contribute to dilemma decision-making.

The article is here.

Monday, August 28, 2017

Sometimes giving a person a choice is an act of terrible cruelty

Lisa Tessman
aeon.com
Originally posted August 9, 2017

It is not always good to have the opportunity to make a choice. When we must decide to take one action rather than another, we also, ordinarily, become at least partly responsible for what we choose to do. Usually this is appropriate; it’s what makes us the kinds of creatures who can be expected to abide by moral norms. 

Sometimes, making a choice works well. For instance, imagine that while leaving the supermarket parking lot you accidentally back into another car, visibly denting it. No one else is around, nor do you think there are any surveillance cameras. You face a choice: you could drive away, fairly confident that no one will ever find out that you damaged someone’s property, or you could leave a note on the dented car’s windshield, explaining what happened and giving contact information, so that you can compensate the car’s owner.

Obviously, the right thing to do is to leave a note. If you don’t do this, you’ve committed a wrongdoing that you could have avoided just by making a different choice. Even though you might not like having to take responsibility – and paying up – it’s good to be in the position of being able to do the right thing.

Yet sometimes, having a choice means deciding to commit one bad act or another. Imagine being a doctor or nurse caught in the following fictionalised version of real events at a hospital in New Orleans in the aftermath of Hurricane Katrina in 2005. Due to a tremendous level of flooding after the hurricane, the hospital must be evacuated. The medical staff have been ordered to get everyone out by the end of the day, but not all patients can be removed. As time runs out, it becomes clear that you have a choice, but it’s a choice between two horrifying options: euthanise the remaining patients without consent (because many of them are in a condition that renders them unable to give it) or abandon them to suffer a slow, painful and terrifying death alone. Even if you’re anguished at the thought of making either choice, you might be confident that one action – let’s say administering a lethal dose of drugs – is better than the other. Nevertheless, you might have the sense that no matter which action you perform, you’ll be violating a moral requirement.

Wednesday, June 7, 2017

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Monday, May 29, 2017

Moral Hindsight

Nadine Fleischhut, Björn Meder, & Gerd Gigerenzer
Experimental Psychology (2017), 64, pp. 110-123.

Abstract

How are judgments in moral dilemmas affected by uncertainty, as opposed to certainty? We tested the predictions of a consequentialist and deontological account using a hindsight paradigm. The key result is a hindsight effect in moral judgment. Participants in foresight, for whom the occurrence of negative side effects was uncertain, judged actions to be morally more permissible than participants in hindsight, who knew that negative side effects occurred. Conversely, when hindsight participants knew that no negative side effects occurred, they judged actions to be more permissible than participants in foresight. The second finding was a classical hindsight effect in probability estimates and a systematic relation between moral judgments and probability estimates. Importantly, while the hindsight effect in probability estimates was always present, a corresponding hindsight effect in moral judgments was only observed among “consequentialist” participants who indicated a cost-benefit trade-off as most important for their moral evaluation.

The article is here.

Monday, November 14, 2016

Walter Sinnott-Armstrong discusses artificial intelligence and morality

By Joyce Er
Duke Chronicle
Originally published October 25, 2016

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman professor of practical ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions Sinnott-Armstrong tackled were how to program artificial intelligence so that it would not eliminate the human race, as well as the legal and moral issues raised by self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other seen as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

The article is here.

Saturday, November 12, 2016

Moral Dilemmas and Guilt

Patricia S. Greenspan
Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition
Vol. 43, No. 1 (Jan., 1983), pp. 117-125

In 'Moral dilemmas and ethical consistency', Ruth Marcus argues that moral dilemmas are 'real': there are cases where an agent ought to perform each of two incompatible actions. Thus, a doctor with two patients equally in need of his attention ought to save each, even though he cannot save both. By claiming that his dilemma is real, I take Marcus to be denying (rightly) that it is merely epistemic - a matter of uncertainty as to which patient to save. Rather, she wants to say, the moral code yields two opposing recommendations, both telling him what he ought to do. The code is not inconsistent, however, as long as its rules are all obeyable in some possible world; and it is not deficient as a guide to action, as long as it contains a second-order principle directing an agent to avoid situations of conflict. Where a dilemma does arise, though, the agent is guilty no matter what he does.

This last point seems implausible for the doctor's case; but here I shall consider a case which does fit Marcus's comments on guilt - if not all her views on the nature of moral dilemma. I think that she errs, first of all, in counting as a dilemma any case where there are some considerations favoring each of two incompatible actions, even if it is clear that one of them is right. For instance, in the case of withholding weapons from someone who has gone mad, it would be unreasonable for the agent to feel guilty about breaking his promise, since he has done exactly as he should. But secondly, even in Marcus's 'strong' cases, I do not think that dilemmas must be taken as yielding opposing all-things-considered ought-judgments, viewed as recommendations for action, rather than stopping with judgments of obligation, or reports of commitments. The latter do not imply 'can' (in the sense of physical possibility); and where they are jointly unsatisfiable, and supported by reasons of equal weight, I think we should say that the moral code yields no particular recommendations, rather than two which conflict.

The article is here.

Friday, October 28, 2016

How Large Is the Role of Emotion in Judgments of Moral Dilemmas?

Zachary Horne and Derek Powell
PLoS ONE
Originally published: July 6, 2016

Abstract

Moral dilemmas often pose dramatic and gut-wrenching emotional choices. It is now widely accepted that emotions are not simply experienced alongside people’s judgments about moral dilemmas, but that our affective processes play a central role in determining those judgments. However, much of the evidence purporting to demonstrate the connection between people’s emotional responses and their judgments about moral dilemmas has recently been called into question. In the present studies, we reexamined the role of emotion in people’s judgments about moral dilemmas using a validated self-report measure of emotion. We measured participants’ specific emotional responses to moral dilemmas and, although we found that moral dilemmas evoked strong emotional responses, we found that these responses were only weakly correlated with participants’ moral judgments. We argue that the purportedly strong connection between emotion and judgments of moral dilemmas may have been overestimated.
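
To make the analysis concrete: the key quantity is simply the correlation between participants' self-reported emotional responses and their permissibility judgments. A minimal sketch in Python, with all numbers invented for illustration (this is not the paper's data or code):

```python
import numpy as np

# Hypothetical ratings for one dilemma: self-reported emotional
# arousal (1-7) and moral-permissibility judgments (1-7).
emotion = np.array([6, 5, 7, 4, 6, 5, 3, 6])
judgment = np.array([3, 4, 2, 4, 5, 3, 4, 3])

# Pearson correlation between emotion and judgment; a coefficient
# near zero would reflect the weak link the authors report.
r = np.corrcoef(emotion, judgment)[0, 1]
print(f"r = {r:.2f}")
```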

The article is here.

Friday, September 30, 2016

Gender Differences in Responses to Moral Dilemmas: A Process Dissociation Analysis

Rebecca Friesdorf, Paul Conway, and Bertram Gawronski
Pers Soc Psychol Bull, first published on April 3, 2015
doi:10.1177/0146167215575731

Abstract

The principle of deontology states that the morality of an action depends on its consistency with moral norms; the principle of utilitarianism implies that the morality of an action depends on its consequences. Previous research suggests that deontological judgments are shaped by affective processes, whereas utilitarian judgments are guided by cognitive processes. The current research used process dissociation (PD) to independently assess deontological and utilitarian inclinations in women and men. A meta-analytic re-analysis of 40 studies with 6,100 participants indicated that men showed a stronger preference for utilitarian over deontological judgments than women when the two principles implied conflicting decisions (d = 0.52). PD further revealed that women exhibited stronger deontological inclinations than men (d = 0.57), while men exhibited only slightly stronger utilitarian inclinations than women (d = 0.10). The findings suggest that gender differences in moral dilemma judgments are due to differences in affective responses to harm rather than cognitive evaluations of outcomes.
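
Process dissociation works by comparing "incongruent" dilemmas, where causing harm maximizes outcomes, with "congruent" dilemmas, where it does not. Below is a minimal sketch of the standard two-parameter PD logic used in this literature (Conway & Gawronski, 2013), with invented acceptance rates for illustration:

```python
def pd_parameters(p_accept_incongruent, p_accept_congruent):
    """Solve the PD processing-tree equations for U and D.

    Assumed model:
      P(accept harm | incongruent) = U + (1 - U) * (1 - D)
      P(accept harm | congruent)   = (1 - U) * (1 - D)
    where U = utilitarian inclination, D = deontological inclination.
    """
    u = p_accept_incongruent - p_accept_congruent
    if u >= 1.0:
        raise ValueError("acceptance rates imply U >= 1; D is undefined")
    d = 1.0 - p_accept_congruent / (1.0 - u)
    return u, d

# Example: harm accepted in 70% of incongruent dilemmas but only
# 15% of congruent ones (numbers invented for illustration).
u, d = pd_parameters(0.70, 0.15)
print(f"U = {u:.2f}, D = {d:.2f}")  # U = 0.55, D = 0.67
```

Because U and D are estimated separately, a gender difference can be large on one parameter (here, D) while being small on the other, which is exactly the pattern the abstract reports.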

The article is here.

Thursday, October 15, 2015

How stress influences our morality

By Lucius Caviola & Nadira S. Faber
The Inquisitive Mind
Issue 23, 2014

Here is an excerpt:

Moral judgments seem to be affected by stress only when the situation elicits an emotional reaction strong enough to be impacted by stress reactions, as in trolley-like personal moral dilemmas. For example, Starcke, Polzer, Wolf, and Brand (2011) used everyday moral dilemmas that were less extreme than the trolley dilemma, such as asking participants whether they would leave a message for the owner of a car that they had accidentally scratched. They did observe an association between people’s cortisol levels and egoistic judgments in those dilemmas considered to be most emotional. However, the researchers failed to find a significant difference in judgments between stressed and non-stressed participants, presumably because the moral vignettes used in this study did not elicit emotions strong enough to cause a difference, in contrast to trolley-like personal moral dilemmas.

Nonetheless, many of us are confronted with highly emotional moral situations in real life in which our judgments could be influenced by stress. For example, people might be more prone to help a child beggar on the street if they feel stressed after an uncomfortable meeting at work. Even more worryingly, doctors who face life-and-death decisions might be influenced by the daily stress they experience.

The entire article is here.

Tuesday, June 16, 2015

Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas

I. Patil, C. Cogoni, N. Zangrando, L. Chittaro, and G. Silani
Social Neuroscience, 2014
Vol. 9, No. 1, 94-107

Abstract

Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally separated sessions, with many of them behaving in a utilitarian manner in virtual reality dilemmas despite their nonutilitarian judgments for the same dilemmas in textual descriptions. This change in decisions was reflected in the autonomic arousal of participants, with dilemmas in virtual reality being perceived as more emotionally arousing than those in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to the contextual saliency of the presentation of these dilemmas.

The entire article is here.

Friday, May 8, 2015

TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing

Jeurissen D, Sack AT, Roebroeck A, Russ BE and Pascual-Leone A (2014) TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing.
Front. Neurosci. 8:18. doi: 10.3389/fnins.2014.00018

Decision-making involves a complex interplay of emotional responses and reasoning processes. In this study, we use TMS to explore the neurobiological substrates of moral decisions in humans. To examine the effects of TMS on the outcome of a moral decision, we compare the decision outcomes of moral-personal and moral-impersonal dilemmas to each other and examine the differential effects of applying TMS over the right DLPFC or right TPJ. In this comparison, we find that TMS-induced disruption of the DLPFC during the decision process affects the outcome of moral-personal judgments, while TMS-induced disruption of the TPJ affects only moral-impersonal conditions. In other words, we find a double dissociation between DLPFC and TPJ in the outcome of a moral decision. Furthermore, we find that TMS-induced disruption of the DLPFC during non-moral, moral-impersonal, and moral-personal decisions leads to lower ratings of regret about the decision. Our results are in line with dual-process theory and suggest a role for both the emotional response and the cognitive reasoning process in moral judgment. Both the emotional and cognitive processes were shown to be involved in the decision outcome.

The entire article is here.

Wednesday, March 4, 2015

‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good

By Guy Kahane, Jim A.C. Everett, Brian Earp, Miguel Farias, and Julian Savulescu
Cognition
Volume 134, January 2015, Pages 193–209

Abstract

A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.

Highlights

• Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.

The entire article is here.

Sunday, March 1, 2015

Online processing of moral transgressions: ERP evidence for spontaneous evaluation

Hartmut Leuthold, Angelika Kunkel, Ian G. Mackenzie and Ruth Filik
Soc Cogn Affect Neurosci (2015)
doi: 10.1093/scan/nsu151

Abstract

Experimental studies using fictional moral dilemmas indicate that both automatic emotional processes and controlled cognitive processes contribute to moral judgments. However, not much is known about how people process socio-normative violations that are more common in everyday life, nor about the time course of these processes. Thus, we recorded participants’ electrical brain activity while they were reading vignettes that either contained morally acceptable vs unacceptable information or text materials that contained information which was either consistent or inconsistent with their general world knowledge. A first event-related brain potential (ERP) positivity peaking at ∼200 ms after critical word onset (P200) was larger when this word involved a socio-normative or knowledge-based violation. Subsequently, knowledge-inconsistent words triggered a larger centroparietal ERP negativity at ∼320 ms (N400), indicating an influence on meaning construction. In contrast, a larger ERP positivity (late positivity), which also started at ∼320 ms after critical word onset, was elicited by morally unacceptable compared with acceptable words. We take this ERP positivity to reflect an implicit evaluative (good–bad) categorization process that is engaged during the online processing of moral transgressions.

The article is here.

Tuesday, December 23, 2014

Harm to others outweighs harm to self in moral decision making

Molly J. Crockett, Zeb Kurth-Nelson, Jenifer Z. Siegel, Peter Dayan, and Raymond J. Dolan
PNAS, published ahead of print November 17, 2014. doi:10.1073/pnas.1408988111

Abstract

Concern for the suffering of others is central to moral decision making. How humans evaluate others’ suffering, relative to their own suffering, is unknown. We investigated this question by inviting subjects to trade off profits for themselves against pain experienced either by themselves or an anonymous other person. Subjects made choices between different amounts of money and different numbers of painful electric shocks. We independently varied the recipient of the shocks (self vs. other) and whether the choice involved paying to decrease pain or profiting by increasing pain. We built computational models to quantify the relative values subjects ascribed to pain for themselves and others in this setting. In two studies we show that most people valued others’ pain more than their own pain. This was evident in a willingness to pay more to reduce others’ pain than their own and a requirement for more compensation to increase others’ pain relative to their own. This “hyperaltruistic” valuation of others’ pain was linked to slower responding when making decisions that affected others, consistent with an engagement of deliberative processes in moral decision making. Subclinical psychopathic traits correlated negatively with aversion to pain for both self and others, in line with reports of aversive processing deficits in psychopathy. Our results provide evidence for a circumstance in which people care more for others than themselves. Determining the precise boundaries of this surprisingly prosocial disposition has implications for understanding human moral decision making and its disturbance in antisocial behavior.

Significance

Concern for the welfare of others is a key component of moral decision making and is disturbed in antisocial and criminal behavior. However, little is known about how people evaluate the costs of others’ suffering. Past studies have examined people’s judgments in hypothetical scenarios, but there is evidence that hypothetical judgments cannot accurately predict actual behavior.  Here we addressed this issue by measuring how much money people will sacrifice to reduce the number of painful electric shocks delivered to either themselves or an anonymous stranger. Surprisingly, most people sacrifice more money to reduce a stranger’s pain than their own pain. This finding may help us better understand how people resolve moral dilemmas that commonly arise in medical, legal, and political decision making.
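
The "computational models" mentioned above trade off money against shocks with a single harm-aversion weight. The sketch below illustrates that general idea with a softmax choice rule; the parameterization is an illustrative assumption, not the paper's exact specification:

```python
import numpy as np

def p_choose_more_shocks(dm, ds, kappa, beta=1.0):
    """Probability of picking the option with more money and more shocks.

    dm:    extra money gained by accepting the shocks
    ds:    extra shocks delivered
    kappa: harm aversion, the weight on shocks relative to money
    beta:  softmax inverse temperature (choice consistency)
    """
    value = (1.0 - kappa) * dm - kappa * ds  # relative subjective value
    return 1.0 / (1.0 + np.exp(-beta * value))

# 'Hyperaltruism' corresponds to a larger harm-aversion weight when the
# shocks go to a stranger than to oneself (values invented here):
for label, kappa in [("self", 0.40), ("other", 0.60)]:
    p = p_choose_more_shocks(dm=5.0, ds=10.0, kappa=kappa)
    print(f"{label}: P(accept shocks for money) = {p:.3f}")
```

Fitting the harm-aversion weight separately for self and other trials, and comparing the two estimates, is the kind of analysis that supports the "hyperaltruistic" conclusion.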

The entire article is here.

Tuesday, December 9, 2014

What we say and what we do: The relationship between real and hypothetical moral choices

By Oriel FeldmanHall, Dean Mobbs, Davy Evans, Lucy Hiscox, Lauren Navrady, & Tim Dalgleish
Cognition. Jun 2012; 123(3): 434–441.
doi: 10.1016/j.cognition.2012.02.001

Abstract

Moral ideals are strongly ingrained within society and individuals alike, but actual moral choices are profoundly influenced by tangible rewards and consequences. Across two studies we show that real moral decisions can dramatically contradict moral choices made in hypothetical scenarios (Study 1). However, by systematically enhancing the contextual information available to subjects when addressing a hypothetical moral problem—thereby reducing the opportunity for mental simulation—we were able to incrementally bring subjects’ responses in line with their moral behaviour in real situations (Study 2). These results imply that previous work relying mainly on decontextualized hypothetical scenarios may not accurately reflect moral decisions in everyday life. The findings also shed light on contextual factors that can alter how moral decisions are made, such as the salience of a personal gain.

Highlights

• We show people are unable to appropriately judge outcomes of moral behaviour.
• Moral beliefs have a weaker impact in the presence of significant self-gain.
• People make highly self-serving choices in real moral situations.
• Real moral choices contradict responses to simple hypothetical moral probes.
• Enhancing context can cause hypothetical decisions to mirror real moral decisions.

The entire article is here.
