Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Decision-making.

Saturday, November 11, 2023

Discordant benevolence: How and why people help others in the face of conflicting values.

Cowan, S. K., Bruce, T. C., et al. (2022).
Science Advances, 8(7).

Abstract

What happens when a request for help from friends or family members invokes conflicting values? In answering this question, we integrate and extend two literatures: support provision within social networks and moral decision-making. We examine the willingness of Americans who deem abortion immoral to help a close friend or family member seeking one. Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, we find that a substantial minority of Americans morally opposed to abortion would enact what we call discordant benevolence: providing help when doing so conflicts with personal values. People negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion. This endeavor reveals both how personal values affect social support processes and how the nature of interaction shapes outcomes of moral decision-making.

Here is my summary:

Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, the authors find that a substantial minority of Americans morally opposed to abortion would enact discordant benevolence. They also find that people negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion.

Commiseration involves understanding and sharing the other person's perspective, even if one does not agree with it. Exemption involves excusing oneself from helping, perhaps by claiming ignorance or lack of resources. Discretion involves helping in a way that minimizes the conflict with one's own values, such as by providing emotional support or practical assistance but not financial assistance.

The authors argue that discordant benevolence is a complex phenomenon that reflects the interplay of personal values, social relationships, and moral decision-making. They conclude that discordant benevolence is a significant form of social support, even in cases where it is motivated by conflicting values.

In other words, the research suggests that people are often willing to help others in need even when doing so conflicts with their own personal values, because they also value their social relationships and the people close to them. They manage that conflict by discriminating among types of help or by exercising commiseration, exemption, or discretion.

Thursday, January 27, 2022

Many heads are more utilitarian than one

Keshmirian, A., Deroy, O., & Bahrami, B.
Cognition
Volume 220, March 2022, 104965

Abstract

Moral judgments have a very prominent social nature, and in everyday life, they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions. To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character's action or inaction violated a moral principle to benefit the greatest number of people. Participants decided if these utilitarian decisions were morally acceptable or not. In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members compared to both first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. In Experiment 2, we tested this hypothesis more directly: measuring participants' state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collectives were more utilitarian than those of individuals and that state anxiety level was reduced during and after social interaction. The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.

Highlights

• Collective consensual judgments made via group interactions were more utilitarian than individual judgments.

• Group discussion did not change the individual judgments, indicating a normative conformity effect.

• Individuals consented to a group judgment that they did not necessarily buy into personally.

• Collectives were less stressed than individuals after responding to moral dilemmas.

• Interactions reduced aversive emotions (e.g., stress) associated with violation of moral norms.

From the Discussion

Our analysis revealed that groups, in comparison to individuals, are more utilitarian in their moral judgments. Thus, our findings are inconsistent with Virtue-Signaling (VS), which proposed the opposite effect. Crucially, the collective utilitarian boost was short-lived: it was only seen at the collective level and not when participants rated the same questions individually again. Previous research shows that moral change at the individual level, as the result of social deliberation, is rather long-lived and not transient (e.g., see Ueshima et al., 2021). Thus, this collective utilitarian boost could not have resulted from deliberation and reasoning or due to conscious application of utilitarian principles with authentic reasons to maximize the total good. If this was the case, the effect would have persisted in the second individual judgment as well. That was not what we observed. Consequently, our findings are inconsistent with the Social Deliberation (SD) hypotheses.

Sunday, February 28, 2021

How peer influence shapes value computation in moral decision-making

Yu, H., Siegel, J., Clithero, J., & Crockett, M. 
(2021, January 16).

Abstract

Moral behavior is susceptible to peer influence. How does information from peers influence moral preferences? We used drift-diffusion modeling to show that peer influence changes the value of moral behavior by prioritizing the choice attributes that align with peers’ goals. Study 1 (N = 100; preregistered) showed that participants accurately inferred the goals of prosocial and antisocial peers when observing their moral decisions. In Study 2 (N = 68), participants made moral decisions before and after observing the decisions of a prosocial or antisocial peer. Peer observation caused participants’ own preferences to resemble those of their peers. This peer influence effect on value computation manifested as an increased weight on choice attributes promoting the peers’ goals that occurred independently from peer influence on initial choice bias. Participants’ self-reported awareness of influence tracked more closely with computational measures of prosocial than antisocial influence. Our findings have implications for bolstering and blocking the effects of prosocial and antisocial influence on moral behavior.
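The drift-diffusion framing is worth unpacking. Below is a minimal simulation sketch (my own, not the authors' code) of the key idea: the drift rate is a weighted sum of the choice attributes, and peer influence shows up as a change in the attribute weights, separately from any starting-point bias. All parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_trial(d_money, d_harm, w_money, w_harm,
                   start_bias=0.0, boundary=1.0, noise=1.0, dt=0.001,
                   rng=None):
    """One drift-diffusion trial for a two-attribute moral choice.

    Drift rate = weighted sum of the attribute differences between the
    options; peer influence is modeled as a change in the attribute
    weights, independent of the starting-point bias.
    Returns (chose_option_A, reaction_time_in_seconds).
    """
    rng = rng or np.random.default_rng()
    drift = w_money * d_money - w_harm * d_harm
    x, t = start_bias * boundary, 0.0
    while abs(x) < boundary:  # accumulate noisy evidence to a boundary
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

# Illustrative (assumed) weights: option A pays more money but delivers
# more shocks. Raising w_harm, as after observing a prosocial peer,
# makes the harmful option less attractive.
rng = np.random.default_rng(0)
for label, w_harm in [("baseline", 1.0), ("after prosocial peer", 2.0)]:
    chose_a = [simulate_trial(0.5, 0.5, 1.0, w_harm, rng=rng)[0]
               for _ in range(500)]
    print(f"{label}: P(choose harmful, lucrative option) = "
          f"{np.mean(chose_a):.2f}")
```

With the harm weight raised, the simulated agent picks the harmful-but-lucrative option less often, which is the qualitative signature the authors report for prosocial peer influence.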

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. 
(2020, November 13). 

Abstract

People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.
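To see how boundary separation produces "caution" in a diffusion decision model, here is a small sketch of my own, with illustrative parameters: widening the boundaries trades speed for accuracy, which is the slowing pattern described above.

```python
import numpy as np

def diffusion_trial(drift, boundary, noise=1.0, dt=0.001, rng=None):
    """One diffusion trial; a correct response hits the upper boundary."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

rng = np.random.default_rng(1)
for a in (0.8, 1.6):  # default vs. widened boundary separation ("caution")
    trials = [diffusion_trial(drift=0.6, boundary=a, rng=rng)
              for _ in range(2000)]
    acc = np.mean([correct for correct, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"boundary={a}: accuracy={acc:.2f}, mean RT={rt:.2f}s")
```

Doubling the boundary separation makes the simulated decisions both slower and more accurate: more evidence is demanded before committing, which is exactly what "caution" means in this framework.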

Sunday, November 22, 2020

The logic of universalization guides moral judgment

Levine, S., et al.
PNAS, 117(42), 26158-26169.
First published October 2, 2020.

Abstract

To explain why an action is wrong, we sometimes say, “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that, alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalization holds an important place in our moral minds.

Significance

Humans have several different ways to decide whether an action is wrong: We might ask whether it causes harm or whether it breaks a rule. Moral psychology attempts to understand the mechanisms that underlie moral judgments. Inspired by theories of “universalization” in moral philosophy, we describe a mechanism that is complementary to existing approaches, demonstrate it in both adults and children, and formalize a precise account of its cognitive mechanisms. Specifically, we show that, when making judgments in novel circumstances, people adopt moral rules that would lead to better consequences if (hypothetically) universalized. Universalization may play a key role in allowing people to construct new moral rules when confronting social dilemmas such as voting and environmental stewardship.
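As a toy illustration of the universalization computation (a sketch under my own assumptions, not the authors' model code), an action can be scored as wrong to the extent that collective welfare would fall if everyone with an interest in the action actually performed it:

```python
def universalization_wrongness(num_interested, welfare):
    """Toy version of the universalization computation described above.

    An action is judged wrong to the extent that collective welfare
    would drop if everyone with an interest in the action did it.
    `welfare(k)` is a placeholder utility function giving collective
    welfare when k people act; it is not the authors' implementation.
    """
    return max(0.0, welfare(0) - welfare(num_interested))

# Threshold example in the spirit of the paper's vignettes: a shared
# resource tolerates up to 3 users, then collapses.
def welfare(k, capacity=3):
    return 10.0 if k <= capacity else 0.0

print(universalization_wrongness(2, welfare))  # 0.0  -> judged permissible
print(universalization_wrongness(8, welfare))  # 10.0 -> judged wrong
```

The same single act comes out permissible or wrong depending only on how many people are interested in doing it, which is what distinguishes universalization from purely outcome-based or rule-based judgment.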

Sunday, March 22, 2020

Our moral instincts don’t match this crisis

Yascha Mounk
The Atlantic
Originally posted March 19, 2020

Here is an excerpt:

There are at least three straightforward explanations.

The first has to do with simple ignorance. For those of us who have spent the past weeks obsessing about every last headline regarding the evolution of the crisis, it can be easy to forget that many of our fellow citizens simply don’t follow the news with the same regularity—or that they tune into radio shows and television networks that have, shamefully, been downplaying the extent of the public-health emergency. People crowding into restaurants or hanging out in big groups, then, may simply fail to realize the severity of the pandemic. Their sin is honest ignorance.

The second explanation has to do with selfishness. Going out for trivial reasons imposes a real risk on those who will likely die if they contract the disease. Though the coronavirus does kill some young people, preliminary data from China and Italy suggest that they are, on average, less strongly affected by it. For those who are far more likely to survive, it is—from a purely selfish perspective—less obviously irrational to chance such social encounters.

The third explanation has to do with the human tendency to make sacrifices for the suffering that is right in front of our eyes, but not the suffering that is distant or difficult to see.

The philosopher Peter Singer presented a simple thought experiment in a famous paper. If you went for a walk in a park, and saw a little girl drowning in a pond, you would likely feel that you should help her, even if you might ruin your fancy shirt. Most people recognize a moral obligation to help another at relatively little cost to themselves.

Then Singer imagined a different scenario. What if a girl was in mortal danger halfway across the world, and you could save her by donating the same amount of money it would take to buy that fancy shirt? The moral obligation to help, he argued, would be the same: The life of the distant girl is just as important, and the cost to you just as small. And yet, most people would not feel the same obligation to intervene.

The same might apply in the time of COVID-19. Those refusing to stay home may not know the victims of their actions, even if they are geographically proximate, and might never find out about the terrible consequences of what they did. Distance makes them unjustifiably callous.

The info is here.

Tuesday, November 12, 2019

Effect of Psilocybin on Empathy and Moral Decision-Making

Thomas Pokorny, Katrin H. Preller, et al.
International Journal of Neuropsychopharmacology, 
Volume 20, Issue 9, September 2017, Pages 747–757
https://doi.org/10.1093/ijnp/pyx047

Abstract

Background
Impaired empathic abilities lead to severe negative social consequences and influence the development and treatment of several psychiatric disorders. Furthermore, empathy has been shown to play a crucial role in moral and prosocial behavior. Although the serotonin system has been implicated in modulating empathy and moral behavior, the relative contribution of the various serotonin receptor subtypes is still unknown.

Methods
We investigated the acute effect of psilocybin (0.215 mg/kg p.o.) in healthy human subjects on different facets of empathy and hypothetical moral decision-making using the multifaceted empathy test (n=32) and the moral dilemma task (n=24).

Results
Psilocybin significantly increased emotional, but not cognitive empathy compared with placebo, and the increase in implicit emotional empathy was significantly associated with psilocybin-induced changed meaning of percepts. In contrast, moral decision-making remained unaffected by psilocybin.

Conclusions
These findings provide first evidence that psilocybin has distinct effects on social cognition by enhancing emotional empathy but not moral behavior. Furthermore, together with previous findings, psilocybin appears to promote emotional empathy presumably via activation of serotonin 2A/1A receptors, suggesting that targeting serotonin 2A/1A receptors has implications for potential treatment of dysfunctional social cognition.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., et al. (2015).
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure. One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle. Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled the guilt-aversion (GA) and inequity-aversion (IA) groups in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.
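For readers unfamiliar with inter-subject representational similarity analysis (IS-RSA), the core move is to correlate two pairwise-dissimilarity structures across subjects: one behavioral, one neural. Here is a bare-bones sketch with random placeholder data; the variable names and shapes are my assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_subjects = 30
model_params = rng.normal(size=(n_subjects, 2))    # e.g., fitted guilt/inequity weights
roi_patterns = rng.normal(size=(n_subjects, 200))  # one activity pattern per subject

# Pairwise dissimilarity between subjects, once in behavior and once in brain.
behav_dissim = pdist(model_params, metric="euclidean")
neural_dissim = pdist(roi_patterns, metric="correlation")

# If subjects with similar moral strategies have similar neural patterns,
# the two dissimilarity structures should be positively correlated.
rho, p = spearmanr(behav_dissim, neural_dissim)
print(f"IS-RSA: Spearman rho = {rho:.2f}, p = {p:.3f}")
```

With random data the correlation hovers near zero; the paper's claim is that, in a given region, this correlation is reliably positive for subjects sharing a moral strategy.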

The research is here.

Monday, December 3, 2018

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Thursday, May 3, 2018

We can train AI to identify good and evil, and then use it to teach us morality

Ambarish Mitra
Quartz.com
Originally published April 5, 2018

Here is an excerpt:

To be fair, because this AI Hercules will be relying on human inputs, it will also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity. However, this line of thinking tends to treat AI as an end goal. We can’t rely on AI to solve our problems, but we can use it to help us solve them.

If we could use AI to improve morality, we could program that improved moral structure output into all AI systems—a moral AI machine that effectively builds upon itself over and over again and improves and proliferates our morality capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.

While a theoretically perfect AI morality machine is just that, theoretical, there is hope for using AI to improve our moral decision-making and our overall approach to important, worldly issues.

The information is here.

Monday, March 19, 2018

‘The New Paradigm,’ Conscience and the Death of Catholic Morality

E. Christian Brugger
National Catholic Register
Originally published February 23, 2018

Vatican Secretary of State Cardinal Pietro Parolin, in a recent interview with Vatican News, contends the controversial reasoning expressed in the apostolic exhortation Amoris Laetitia (The Joy of Love) represents a “paradigm shift” in the Church’s reasoning, a “new approach,” arising from a “new spirit,” which the Church needs to carry out “the process of applying the directives of Amoris Laetitia.”

His reference to a “new paradigm” is murky. But its meaning is not. Among other things, he is referring to a new account of conscience that exalts the subjectivity of the process of decision-making to a degree that relativizes the objectivity of the moral law. To understand this account, we might first look at a favored maxim of Pope Francis: “Reality is greater than ideas.”

It admits no single-dimensional interpretation, which is no doubt why it’s attractive to the “Pope of Paradoxes.” But in one area, the arena of doctrine and praxis, a clear meaning has emerged. Dogma and doctrine constitute ideas, while praxis (i.e., the concrete lived experience of people) is reality: “Ideas — conceptual elaborations — are at the service of … praxis” (Evangelii Gaudium, 232).

In relation to the controversy stirred by Amoris Laetitia, “ideas” is interpreted to mean Church doctrine on thorny moral issues such as, but not only, Communion for the divorced and civilly remarried, and “reality” is interpreted to mean the concrete circumstances and decision-making of ordinary Catholics.

The article is here.

Tuesday, March 6, 2018

Toward a Psychology of Moral Expansiveness

Daniel Crimston, Matthew J. Hornsey, Paul G. Bain, Brock Bastian
Current Directions in Psychological Science 
Vol 27, Issue 1, pp. 14 - 19

Abstract

Theorists have long noted that people’s moral circles have expanded over the course of history, with modern people extending moral concern to entities—both human and nonhuman—that our ancestors would never have considered including within their moral boundaries. In recent decades, researchers have sought a comprehensive understanding of the psychology of moral expansiveness. We first review the history of conceptual and methodological approaches in understanding our moral boundaries, with a particular focus on the recently developed Moral Expansiveness Scale. We then explore individual differences in moral expansiveness, attributes of entities that predict their inclusion in moral circles, and cognitive and motivational factors that help explain what we include within our moral boundaries and why they may shrink or expand. Throughout, we highlight the consequences of these psychological effects for real-world ethical decision making.

The article is here.

Monday, February 19, 2018

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

Here is an excerpt:

Rather than being a straightforward matter of doing the right thing, medical students’ understandings of morally correct behavior differ from one individual to another. This is partly because moral judgments frequently concern decisions about behaviors that might entail some form of harm to another, and different individuals hold different perspectives about moral trade-offs (i.e., how to decide between two courses of action when the consequences of both have morally undesirable effects). It is partly because the majority of human behavior arises within a person-situation interaction. Indeed, moral “flexibility” suggests that though we are motivated to do the right thing, any moral principle can bring forth a variety of context-dependent moral judgments and decisions. Moral rules and principles are abstract ideas—rather than facts—and these ideas need to be operationalized and applied to specific situations. Each situation will have different affordances highlighting one facet or another of any given moral value. Thus, when faced with morally dubious situations—such as being asked to participate in lapses of patient consent by senior clinicians during workplace learning events—medical students’ subsequent actions (compliance or resistance) differ.

The article is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Quillette
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.
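Mechanically, scoring a two-dimensional instrument like the Oxford Utilitarianism Scale just means averaging each subscale separately. A minimal sketch follows; the item labels and responses are placeholders, not the published scale items.

```python
# 1 = strongly disagree ... 7 = strongly agree (placeholder responses)
responses = {
    "ib_1": 6, "ib_2": 5, "ib_3": 4,   # impartial-beneficence items
    "ih_1": 2, "ih_2": 3, "ih_3": 1,   # instrumental-harm items
}

def subscale_mean(responses, prefix):
    """Average all items whose key starts with the subscale prefix."""
    items = [v for k, v in responses.items() if k.startswith(prefix)]
    return sum(items) / len(items)

ib = subscale_mean(responses, "ib_")
ih = subscale_mean(responses, "ih_")
print(f"impartial beneficence = {ib:.2f}, instrumental harm = {ih:.2f}")
```

The point of keeping the subscales separate is that a respondent can score high on impartial beneficence but low on instrumental harm, or vice versa, which is precisely the dissociation the Oxford group emphasizes.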

The article is here.

Monday, January 29, 2018

Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others

Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017

Abstract

The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.
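Process dissociation is the computational workhorse here. Below is a sketch of the standard two-parameter equations (Conway & Gawronski, 2013); I am assuming the textbook formulation rather than this paper's exact pipeline.

```python
def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """Standard process-dissociation estimates for moral dilemma responses.

    Congruent dilemmas: causing harm does NOT maximize outcomes, so both
    tendencies favor rejecting it. Incongruent dilemmas: causing harm
    DOES maximize outcomes, so the two tendencies conflict.
    """
    U = p_reject_congruent - p_reject_incongruent   # utilitarian tendency
    D = p_reject_incongruent / (1.0 - U)            # deontological tendency
    return U, D

# Example: a participant rejects harm on 90% of congruent and 60% of
# incongruent dilemmas.
U, D = process_dissociation(0.9, 0.6)
print(f"U = {U:.2f}, D = {D:.2f}")   # U = 0.30, D = 0.86
```

Because U and D are estimated independently, a brain measure (here, pIFC activity) can predict one tendency without predicting the other, which is exactly the result reported.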

The article is here.

Monday, January 15, 2018

Lesion network localization of criminal behavior

R. Ryan Darby, Andreas Horn, Fiery Cushman, and Michael D. Fox
Proceedings of the National Academy of Sciences

Abstract

Following brain lesions, previously normal patients sometimes exhibit criminal behavior. Although rare, these cases can lend unique insight into the neurobiological substrate of criminality. Here we present a systematic mapping of lesions with known temporal association to criminal behavior, identifying 17 lesion cases. The lesion sites were spatially heterogeneous, including the medial prefrontal cortex, orbitofrontal cortex, and different locations within the bilateral temporal lobes. No single brain region was damaged in all cases. Because lesion-induced symptoms can come from sites connected to the lesion location and not just the lesion location itself, we also identified brain regions functionally connected to each lesion location. This technique, termed lesion network mapping, has recently identified regions involved in symptom generation across a variety of lesion-induced disorders. All lesions were functionally connected to the same network of brain regions. This criminality-associated connectivity pattern was unique compared with lesions causing four other neuropsychiatric syndromes. This network includes regions involved in morality, value-based decision making, and theory of mind, but not regions involved in cognitive control or empathy. Finally, we replicated our results in a separate cohort of 23 cases in which a temporal relationship between brain lesions and criminal behavior was implied but not definitive. Our results suggest that lesions in criminals occur in different brain locations but localize to a unique resting state network, providing insight into the neurobiology of criminal behavior.

Significance

Cases like that of Charles Whitman, who murdered 16 people after growth of a brain tumor, have sparked debate about why some brain lesions, but not others, might lead to criminal behavior. Here we systematically characterize such lesions and compare them with lesions that cause other symptoms. We find that lesions in multiple different brain areas are associated with criminal behavior. However, these lesions all fall within a unique functionally connected brain network involved in moral decision making. Furthermore, connectivity to competing brain networks predicts the abnormal moral decisions observed in these patients. These results provide insight into why some brain lesions, but not others, might predispose to criminal behavior, with potential neuroscience, medical, and legal implications.
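For the curious, lesion network mapping boils down to: look up each lesion's whole-brain functional connectivity in a normative resting-state connectome, threshold and binarize the map, and overlap the maps across cases. Here is a toy sketch with random stand-in data; the connectome lookup is a placeholder function, not a real atlas API.

```python
import numpy as np

def lesion_network_overlap(lesion_masks, connectivity_map, threshold=0.2):
    """Toy sketch of lesion network mapping.

    `connectivity_map(mask)` stands in for a lookup of voxelwise
    functional connectivity with the lesion site in a normative
    connectome. Each map is thresholded and binarized, then summed so
    that voxels connected to every lesion site stand out.
    """
    counts = None
    for mask in lesion_masks:
        connected = np.abs(connectivity_map(mask)) >= threshold
        counts = connected.astype(int) if counts is None else counts + connected
    return counts

# Demonstration with random data standing in for a real connectome.
rng = np.random.default_rng(3)
masks = [rng.integers(0, 2, size=1000) for _ in range(17)]   # 17 cases
counts = lesion_network_overlap(masks, lambda m: rng.uniform(-1, 1, 1000))
print("voxels connected to all 17 lesion sites:", int(np.sum(counts == 17)))
```

The paper's striking finding is that, although the lesion locations themselves barely overlap, the connectivity-derived maps converge on a single network.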

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P. Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.

Abstract

Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for a myriad of reasons. For me, we ethics educators have come a long way in 14 years.

Thursday, September 7, 2017

Harm to self outweighs benefit to others in moral decision making

Lukas J. Volz, B. Locke Welborn, Matthias S. Gobel, Michael S. Gazzaniga, and Scott T. Grafton
PNAS, 2017; published ahead of print July 10, 2017.

Abstract

How we make decisions that have direct consequences for ourselves and others forms the moral foundation of our society. Whereas economic theory contends that humans aim at maximizing their own gains, recent seminal psychological work suggests that our behavior is instead hyperaltruistic: We are more willing to sacrifice gains to spare others from harm than to spare ourselves from harm. To investigate how such egoistic and hyperaltruistic tendencies influence moral decision making, we investigated trade-off decisions combining monetary rewards and painful electric shocks, administered to the participants themselves or an anonymous other. Whereas we replicated the notion of hyperaltruism (i.e., the willingness to forego reward to spare others from harm), we observed strongly egoistic tendencies in participants’ unwillingness to harm themselves for others’ benefit. The moral principle guiding intersubject trade-off decision making observed in our study is best described as egoistically biased altruism, with important implications for our understanding of economic and social interactions in our society.

Significance

Principles guiding decisions that affect both ourselves and others are of prominent importance for human societies. Previous accounts in economics and psychological science have often described decision making as either categorically egoistic or altruistic. Instead, the present work shows that genuine altruism is embedded in context-specific egoistic bias. Participants were willing to both forgo monetary reward to spare the other from painful electric shocks and also to suffer painful electric shocks to secure monetary reward for the other. However, across all trials and conditions, participants accrued more reward and less harm for the self than for the other person. These results characterize human decision makers as egoistically biased altruists, with important implications for psychology, economics, and public policy.
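One way to make "egoistically biased altruism" concrete is a linear trade-off value function whose weights depend on who receives the money and who receives the shocks. The model form and weights below are my illustrative assumptions, not the authors' fitted model.

```python
# Linear trade-off value: who gets the money and who gets the shocks
# determines which (assumed, illustrative) weight applies.
def trade_off_value(d_money, d_shocks, money_to, shocks_to, w):
    return w[("money", money_to)] * d_money - w[("harm", shocks_to)] * d_shocks

weights = {
    ("money", "self"): 1.0, ("money", "other"): 0.6,  # own reward weighted more
    ("harm", "self"): 2.0,  ("harm", "other"): 1.8,   # own harm weighted more
}

# Altruistic side: forgo $3 of your own to spare the other 2 shocks.
print(trade_off_value(-3, -2, "self", "other", weights))   # +0.6 -> accept
# Egoistic side: take 2 shocks yourself to earn the other $3.
print(trade_off_value(3, 2, "other", "self", weights))     # -2.2 -> reject
```

With these weights the agent still pays money to spare the other pain (the altruism the study replicates) but weights its own harm and reward more heavily than the other's, an exaggerated version of the egoistic bias the authors report.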

The article is here.

Saturday, August 5, 2017

Empathy makes us immoral

Olivia Goldhill
Quartz
Originally published July 9, 2017

Empathy, in general, has an excellent reputation. But it leads us to make terrible decisions, according to Paul Bloom, psychology professor at Yale and author of Against Empathy: The Case for Rational Compassion. In fact, he argues, we would be far more moral if we had no empathy at all.

Though it sounds counterintuitive, Bloom makes a convincing case. First, he makes a point of defining empathy as putting yourself in the shoes of other people—“feeling their pain, seeing the world through their eyes.” When we rely on empathy to make moral decisions, he says, we end up prioritizing the person whose suffering we can easily relate to over that of any number of others who seem more distant. Indeed, studies have shown that empathy does encourage irrational moral decisions that favor one individual over the masses.

“When we rely on empathy, we think that a little girl stuck down a well is more important than all of climate change, is more important than tens of thousands of people dying in a far away country,” says Bloom. “Empathy zooms us in on the attractive, on the young, on people of the same race. It zooms us in on the one rather than the many. And so it distorts our priorities.”

The article is here.