Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Decision-making.

Monday, July 21, 2025

Emotion and deliberative reasoning in moral judgment.

Cummins, D. D., & Cummins, R. C. (2012).
Frontiers in Psychology, 3, 328.

Abstract

According to an influential dual-process model, a moral judgment is the outcome of a rapid, affect-laden process and a slower, deliberative process. If these outputs conflict, decision time is increased in order to resolve the conflict. Violations of deontological principles proscribing the use of personal force to inflict intentional harm are presumed to elicit negative affect which biases judgments early in the decision-making process. This model was tested in three experiments. Moral dilemmas were classified using (a) decision time and consensus as measures of system conflict and (b) the aforementioned deontological criteria. In Experiment 1, decision time was either unlimited or reduced. The dilemmas asked whether it was appropriate to take a morally questionable action to produce a “greater good” outcome. Limiting decision time reduced the proportion of utilitarian (“yes”) decisions, but contrary to the model’s predictions, (a) vignettes that involved more deontological violations logged faster decision times, and (b) violation of deontological principles was not predictive of decisional conflict profiles. Experiment 2 ruled out the possibility that time pressure simply makes people more likely to say “no.” Participants made a first decision under time constraints and a second decision under no time constraints. One group was asked whether it was appropriate to take the morally questionable action while a second group was asked whether it was appropriate to refuse to take the action. The results replicated those of Experiment 1 regardless of whether “yes” or “no” constituted a utilitarian decision. In Experiment 3, participants rated the pleasantness of positive visual stimuli prior to making a decision. Contrary to the model’s predictions, the number of deontological decisions increased in the positive affect rating group compared to a group that engaged in a cognitive task or a control group that engaged in neither task. These results are consistent with the view that early moral judgments are influenced by affect. But they are inconsistent with the view that (a) violations of deontological principles are predictive of differences in early, affect-based judgment or that (b) engaging in tasks that are inconsistent with the negative emotional responses elicited by such violations diminishes their impact.

Here are some thoughts:

This research investigates the role of emotion and cognitive processes in moral decision-making, testing a dual-process model that posits moral judgments arise from a conflict between rapid, affect-driven (System 1) and slower, deliberative (System 2) processes. Across three experiments, participants were presented with moral dilemmas involving utilitarian outcomes (sacrificing few to save many) and deontological violations (using personal force to intentionally harm), with decision times manipulated to assess how these factors influence judgment. The findings challenge the assumption that deontological decisions are always driven by fast emotional responses: while limiting decision time generally reduced utilitarian judgments, exposure to pleasant emotional stimuli unexpectedly increased deontological responses, suggesting that emotional context, not just negative affect from deontological violations, plays a significant role. Additionally, decisional conflict—marked by low consensus and long decision times—was not fully predicted by deontological criteria, indicating other factors influence moral judgment. Overall, the study supports a dual-process framework but highlights the complexity of emotion's role, showing that both utilitarian and deontological judgments can be influenced by affective states and intuitive heuristics rather than purely deliberative reasoning.
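To make the dual-process logic concrete, here is a deliberately minimal simulation sketch (Python; the latencies and parameter values are invented, and this is not the authors' model or task). It illustrates the core prediction being tested: if a slow deliberative process must finish before the deadline to override a fast affective response, then shortening the deadline should lower the proportion of utilitarian decisions.

```python
import random

def simulate_judgment(deadline, s2_mean=3.0, s2_sd=0.8, rng=random):
    """One trial of a toy dual-process race (illustrative only).

    System 1 supplies a fast, affect-driven deontological "no" by default;
    System 2 finishes deliberating at a normally distributed time and, if it
    beats the deadline, overrides with a utilitarian "yes". All parameter
    values are invented.
    """
    s2_finish = max(0.1, rng.gauss(s2_mean, s2_sd))
    return "utilitarian" if s2_finish <= deadline else "deontological"

def utilitarian_rate(deadline, n=10_000):
    rng = random.Random(0)
    return sum(simulate_judgment(deadline, rng=rng) == "utilitarian"
               for _ in range(n)) / n

# Shorter deadlines leave less room for deliberation, so the proportion of
# utilitarian ("yes") decisions drops: the basic pattern the model predicts.
print(utilitarian_rate(deadline=8.0))  # effectively unlimited time
print(utilitarian_rate(deadline=2.0))  # time pressure
```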

Thursday, June 5, 2025

How peer influence shapes value computation in moral decision-making

Yu, H., Siegel, J., et al. (2021).
Cognition, 211, 104641.

Abstract

Moral behavior is susceptible to peer influence. How does information from peers influence moral preferences? We used drift-diffusion modeling to show that peer influence changes the value of moral behavior by prioritizing the choice attributes that align with peers' goals. Study 1 (N = 100; preregistered) showed that participants accurately inferred the goals of prosocial and antisocial peers when observing their moral decisions. In Study 2 (N = 68), participants made moral decisions before and after observing the decisions of a prosocial or antisocial peer. Peer observation caused participants' own preferences to resemble those of their peers. This peer influence effect on value computation manifested as an increased weight on choice attributes promoting the peers' goals that occurred independently from peer influence on initial choice bias. Participants' self-reported awareness of influence tracked more closely with computational measures of prosocial than antisocial influence. Our findings have implications for bolstering and blocking the effects of prosocial and antisocial influence on moral behavior.

Here are some thoughts:

Peer influence plays a significant role in shaping how people make moral decisions. Rather than simply copying others, individuals tend to adjust the way they value different aspects of a moral choice to align with the goals and preferences of their peers. This means that observing others’ moral behavior, whether prosocial or antisocial, can shift the importance people place on certain outcomes, such as helping others or personal gain, during their own decision-making process. Computational models, like the drift-diffusion model, show that these changes occur at the level of value computation, not just as a surface-level bias. Interestingly, people are generally more aware of being influenced by positive (prosocial) peers than by negative (antisocial) ones. Overall, the findings highlight that social context can subtly and powerfully shape moral values and behavior.
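As a rough illustration of what "peer influence changes value computation" means in a drift-diffusion framework, the sketch below simulates choices in which the drift rate is a weighted sum of choice attributes, and observing a prosocial peer is modeled as an increase in the weight on the other person's outcome, kept separate from any starting-point bias. The parameter values are invented; this is not the authors' fitted model.

```python
import random

def ddm_choice(w_self, w_other, self_gain, other_gain,
               start_bias=0.0, threshold=1.0, dt=0.01, noise=0.3, rng=random):
    """One choice from a toy drift-diffusion process.

    The drift rate is a weighted sum of choice attributes; peer influence is
    modeled here as a change in the attribute weights, separate from the
    starting-point bias. Returns True if the "generous" boundary is reached.
    All numbers are invented, not fitted values from the paper.
    """
    drift = w_self * self_gain + w_other * other_gain
    evidence = start_bias
    while abs(evidence) < threshold:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
    return evidence > 0

def p_generous(w_other, n=1000):
    rng = random.Random(1)
    generous = sum(ddm_choice(w_self=1.0, w_other=w_other,
                              self_gain=-0.2, other_gain=1.0, rng=rng)
                   for _ in range(n))
    return generous / n

# Observing a prosocial peer ~ a larger weight on the other person's outcome,
# which shifts simulated choices toward the generous option.
print(p_generous(w_other=0.2))  # before peer observation
print(p_generous(w_other=0.8))  # after observing a prosocial peer
```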

Sunday, April 20, 2025

Confidence in Moral Decision-Making

Schooler, L., et al. (2024).
Collabra Psychology, 10(1).

Abstract

Moral decision-making typically involves trade-offs between moral values and self-interest. While previous research on the psychological mechanisms underlying moral decision-making has primarily focused on what people choose, less is known about how an individual consciously evaluates the choices they make. This sense of having made the right decision is known as subjective confidence. We investigated how subjective confidence is constructed across two moral contexts. In Study 1 (240 U.S. participants from Amazon Mechanical Turk, 81 female), participants made hypothetical decisions between choices with monetary profits for themselves and physical harm for either themselves or another person. In Study 2 (369 U.S. participants from Prolific, 176 female), participants made incentive-compatible decisions between choices with monetary profits for themselves and monetary harm for either themselves or another person. In both studies, each choice was followed by a subjective confidence rating. We used a computational model to obtain a trial-by-trial measure of participant-specific subjective value in decision-making and related this to subjective confidence ratings. Across all types of decisions, confidence was positively associated with the absolute difference in subjective value between the two options. Specific to the moral decision-making context, choices that are typically seen as more blameworthy – i.e., causing more harm to an innocent person to benefit oneself – suppressed the effects of increasing profit on confidence, while amplifying the dampening effect of harm on confidence. These results illustrate some potential cognitive mechanisms underlying subjective confidence in moral decision-making and highlight both shared and distinct cognitive features relative to non-moral value-based decision-making.

Here are some thoughts:

The article explores how individuals form a sense of confidence in their moral choices, particularly in situations involving trade-offs between personal gain and causing harm. Rather than focusing solely on what people choose, the research delves into how confident people feel about the decisions they make—what is known as subjective confidence. Importantly, this confidence is not only influenced by the perceived value of the options but also by the moral implications of the choice itself. When people make decisions that benefit themselves at the expense of others, particularly when the action is considered morally blameworthy, their sense of confidence tends to decrease. Conversely, decisions that are morally neutral or praiseworthy are associated with greater subjective certainty. In this way, the moral weight of a decision appears to shape how individuals internally evaluate the quality of their choices.

For mental health professionals, these findings carry significant implications. Understanding how confidence is constructed in the context of moral decision-making can deepen insight into clients’ struggles with guilt, shame, indecision, and moral injury. Often, clients question not just what they did, but whether they made the "right" decision—morally and personally. This research highlights that moral self-evaluation is complex and sensitive to both the outcomes and the perceived ethical nature of one’s actions. It also suggests that people are more confident in decisions that affect themselves than those that impact others, which may help explain patterns of self-doubt or moral rumination in therapy. Additionally, for clinicians themselves—who frequently navigate ethically ambiguous situations—recognizing how subjective confidence is shaped by moral context can support reflective practice, supervision, and ethical decision-making. Ultimately, this research adds depth to our understanding of how people process and live with the choices they make, and how these internal evaluations may guide future behavior and psychological well-being.
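A very small sketch of the central computational relationship reported in the paper, that confidence rises with the absolute difference in subjective value between the chosen and unchosen options, is given below. The additive value function, weights, and scaling are invented placeholders, not the authors' fitted model, and the additional blameworthiness effects are only noted in a comment.

```python
def subjective_value(profit, harm, harm_weight=1.5):
    """Toy additive subjective value for a 'profit for me vs. harm' option.
    The functional form and weight are invented placeholders."""
    return profit - harm_weight * harm

def confidence_proxy(v_chosen, v_unchosen, scale=1.0):
    """Confidence grows with the absolute subjective-value difference between
    the options, the central relationship reported in the paper. The paper's
    further finding, that blameworthy choices reshape how profit and harm
    feed into confidence, is not modeled here."""
    return scale * abs(v_chosen - v_unchosen)

# Two hypothetical trials: a large value gap yields a higher confidence proxy.
easy = confidence_proxy(subjective_value(10.0, 1.0), subjective_value(0.0, 0.0))
hard = confidence_proxy(subjective_value(10.0, 6.0), subjective_value(0.0, 0.0))
print(easy, hard)  # 8.5 vs. 1.0
```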

Sunday, March 16, 2025

Computational Approaches to Morality

Bello, P., & Malle, B. F. (2023).
In R. Sun (Ed.), Cambridge Handbook 
of Computational Cognitive Sciences
(pp. 1037-1063). Cambridge University Press.

Introduction

Morality regulates individual behavior so that it complies with community interests (Curry et al., 2019; Haidt, 2001; Hechter & Opp, 2001). Humans achieve this regulation by motivating and deterring certain behaviors through the imposition of norms – instructions of how one should or should not act in a particular context (Fehr & Fischbacher, 2004; Sripada & Stich, 2006) – and, if a norm is violated, by levying sanctions (Alexander, 1987; Bicchieri, 2006). This chapter examines the mental and behavioral processes that facilitate human living in moral communities and how these processes might be represented computationally and ultimately engineered in embodied agents.

Computational work on morality arises from two major sources. One is empirical moral science, which accumulates knowledge about a variety of phenomena of human morality, such as moral decision making, judgment, and emotions. Resulting computational work tries to model and explain these human phenomena. The second source is philosophical ethics, which has for millennia discussed moral principles by which humans should live. Resulting computational work is often labeled machine ethics, which is the attempt to create artificial agents with moral capacities reflecting one or more of the ethical theories. A brief discussion of these two sources will ground the subsequent discussion of computational morality.


Here are some thoughts:

This chapter examines computational approaches to morality, driven by two goals: modeling human moral cognition and creating artificial moral agents ("machine ethics"). It maps key moral phenomena – behavior, judgments, emotions, sanctions, and communication – arguing these are shaped by social norms rather than innate brain circuits. Norms are community instructions specifying acceptable/unacceptable behavior. The chapter explores philosophical ethics: deontology (duty-based ethics, exemplified by Kant, Rawls, Ross) and consequentialism (outcome-based ethics, particularly utilitarianism). It addresses computational challenges like scaling, conflicting preferences, and framing moral problems. Finally, it surveys rule-based approaches, case-based reasoning, reinforcement learning, and cognitive science perspectives in modeling moral decision-making.
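As one concrete example of the rule-based family of approaches the chapter surveys, here is a minimal sketch in which norms are represented as context-conditioned prohibitions and obligations and an action is checked against them. The representation and examples are invented for illustration and are not taken from the chapter.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A norm as a context-conditioned instruction about an action."""
    context: str   # e.g., "library"
    action: str    # e.g., "talk loudly"
    deontic: str   # "forbidden" or "obligatory"

def evaluate(action, context, norms):
    """Minimal rule-based moral evaluation: check the action against the
    norms that apply in this context. Representation and examples invented."""
    for n in norms:
        if n.context != context:
            continue
        if n.deontic == "forbidden" and n.action == action:
            return "violation"
        if n.deontic == "obligatory" and n.action != action:
            return "possible omission"  # an obligatory action was not taken
    return "permissible"

norms = [Norm("library", "talk loudly", "forbidden"),
         Norm("emergency", "call for help", "obligatory")]
print(evaluate("talk loudly", "library", norms))     # violation
print(evaluate("read quietly", "emergency", norms))  # possible omission
```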

Saturday, August 3, 2024

Moral agents as relational systems: The Contract-Based Model of Moral Cognition for AI

Vidal, L. M., Marchesi, S., Wykowska, A., & Pretus, C.
(2024, July 3)

Abstract

As artificial systems are becoming more prevalent in our daily lives, we should ensure that they make decisions that are aligned with human values. Utilitarian algorithms, which aim to maximize benefits and minimize harm, fall short when it comes to human autonomy and fairness, since they are insensitive to other-centered human preferences or to how the burdens and benefits are distributed, as long as the majority benefits. We propose a Contract-Based model of moral cognition that regards artificial systems as relational systems that are subject to a social contract. To articulate this social contract, we draw from contractualism, an impartial ethical framework that evaluates the appropriateness of behaviors based on whether they can be justified to others. In its current form, the Contract-Based model characterizes artificial systems as moral agents bound to obligations towards humans. Specifically, this model allows artificial systems to make moral evaluations by estimating the relevance each affected individual assigns to the norms transgressed by an action. It can also learn from human feedback, which is used to generate new norms and update the relevance of different norms in different social groups and types of relationships. The model’s ability to justify its choices to humans, together with the central role of human feedback in moral evaluation and learning, makes this model suitable for supporting human autonomy and fairness in human-to-robot interactions. As human relationships with artificial agents evolve, the Contract-Based model could also incorporate new terms in the social contract between humans and machines, including terms that confer artificial agents a status as moral patients.


Here are some thoughts:

The article proposes a Contract-Based model of moral cognition for artificial intelligence (AI) systems, drawing from the ethical framework of contractualism, which evaluates actions based on their justifiability to others. This model views AI systems as relational entities bound by a social contract with humans, allowing them to make moral evaluations by estimating the relevance of norms to affected individuals and learning from human feedback to generate and update these norms. The model is designed to support human autonomy and fairness in human-robot interactions and can also function as a moral enhancer to assist humans in moral decision-making in human-human interactions. However, the use of moral enhancers raises ethical concerns about autonomy, responsibility, and potential unintended consequences. Additionally, the article suggests that as human relationships with AI evolve, the model could incorporate new terms in the social contract, potentially recognizing AI systems as moral patients. This forward-thinking approach anticipates the complex ethical questions that may arise as AI becomes more integrated into daily life.
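The sketch below illustrates, in simplified form, the two mechanisms described above: scoring an action by the relevance that affected individuals assign to the norms it transgresses, and updating those relevance estimates from human feedback. The data structures, prior values, and learning rule are my own assumptions, not the authors' specification.

```python
from collections import defaultdict

class ContractBasedEvaluator:
    """Toy sketch of the two mechanisms described above: score an action by
    how much affected individuals care about the norms it transgresses, and
    update those relevance estimates from human feedback. The data structures,
    prior of 0.5, and learning rule are invented assumptions."""

    def __init__(self):
        # relevance[group][norm] is an estimate in [0, 1]
        self.relevance = defaultdict(lambda: defaultdict(lambda: 0.5))

    def wrongness(self, transgressed_norms, affected):
        """affected: list of (group, weight) pairs."""
        return sum(w * self.relevance[group][norm]
                   for group, w in affected
                   for norm in transgressed_norms)

    def learn(self, group, norm, feedback, lr=0.1):
        """Nudge a norm's estimated relevance toward human feedback in [0, 1]."""
        r = self.relevance[group][norm]
        self.relevance[group][norm] = r + lr * (feedback - r)

ev = ContractBasedEvaluator()
print(ev.wrongness(["break promise"], [("friends", 1.0)]))  # 0.5 (prior only)
ev.learn("friends", "break promise", feedback=0.9)          # feedback: it matters a lot
print(ev.wrongness(["break promise"], [("friends", 1.0)]))  # 0.54
```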

Saturday, November 11, 2023

Discordant benevolence: How and why people help others in the face of conflicting values.

Cowan, S. K., Bruce, T. C., et al. (2022).
Science Advances, 8(7).

Abstract

What happens when a request for help from friends or family members invokes conflicting values? In answering this question, we integrate and extend two literatures: support provision within social networks and moral decision-making. We examine the willingness of Americans who deem abortion immoral to help a close friend or family member seeking one. Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, we find that a substantial minority of Americans morally opposed to abortion would enact what we call discordant benevolence: providing help when doing so conflicts with personal values. People negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion. This endeavor reveals both how personal values affect social support processes and how the nature of interaction shapes outcomes of moral decision-making.

Here is my summary:

Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, the authors find that a substantial minority of Americans morally opposed to abortion would enact discordant benevolence. They also find that people negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion.

Commiseration involves understanding and sharing the other person's perspective, even if one does not agree with it. Exemption involves excusing oneself from helping, perhaps by claiming ignorance or lack of resources. Discretion involves helping in a way that minimizes the conflict with one's own values, such as by providing emotional support or practical assistance but not financial assistance.

The authors argue that discordant benevolence is a complex phenomenon that reflects the interplay of personal values, social relationships, and moral decision-making. They conclude that discordant benevolence is a significant form of social support, even in cases where it is motivated by conflicting values.

In other words, the research suggests that people are willing to help others in need, even if it means violating their own personal values. This is because people also value social relationships and helping others. They may do this by discriminating among types of help or by exercising commiseration, exemption, or discretion.

Thursday, January 27, 2022

Many heads are more utilitarian than one

Keshmirian, A., Deroy, O., & Bahrami, B.
Cognition
Volume 220, March 2022, 104965

Abstract

Moral judgments have a very prominent social nature, and in everyday life, they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions. To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character's action or inaction violated a moral principle to benefit the greatest number of people. Participants decided if these utilitarian decisions were morally acceptable or not. In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members compared to both first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. In Experiment 2, we tested this hypothesis more directly: measuring participants' state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collective judgments were more utilitarian than those of individuals and that state anxiety level was reduced during and after social interaction. The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.

Highlights

• Collective consensual judgments made via group interactions were more utilitarian than individual judgments.

• Group discussion did not change the individual judgments, indicating a normative conformity effect.

• Individuals consented to a group judgment that they did not necessarily buy into personally.

• Collectives were less stressed than individuals after responding to moral dilemmas.

• Interactions reduced aversive emotions (e.g., stress) associated with violation of moral norms.

From the Discussion

Our analysis revealed that groups, in comparison to individuals, are more utilitarian in their moral judgments. Thus, our findings are inconsistent with Virtue-Signaling (VS), which proposed the opposite effect. Crucially, the collective utilitarian boost was short-lived: it was only seen at the collective level and not when participants rated the same questions individually again. Previous research shows that moral change at the individual level, as the result of social deliberation, is rather long-lived and not transient (e.g., see Ueshima et al., 2021). Thus, this collective utilitarian boost could not have resulted from deliberation and reasoning or due to conscious application of utilitarian principles with authentic reasons to maximize the total good. If this was the case, the effect would have persisted in the second individual judgment as well. That was not what we observed. Consequently, our findings are inconsistent with the Social Deliberation (SD) hypotheses.

Sunday, February 28, 2021

How peer influence shapes value computation in moral decision-making

Yu, H., Siegel, J., Clithero, J., & Crockett, M. 
(2021, January 16).

Abstract

Moral behavior is susceptible to peer influence. How does information from peers influence moral preferences? We used drift-diffusion modeling to show that peer influence changes the value of moral behavior by prioritizing the choice attributes that align with peers’ goals. Study 1 (N = 100; preregistered) showed that participants accurately inferred the goals of prosocial and antisocial peers when observing their moral decisions. In Study 2 (N = 68), participants made moral decisions before and after observing the decisions of a prosocial or antisocial peer. Peer observation caused participants’ own preferences to resemble those of their peers. This peer influence effect on value computation manifested as an increased weight on choice attributes promoting the peers’ goals that occurred independently from peer influence on initial choice bias. Participants’ self-reported awareness of influence tracked more closely with computational measures of prosocial than antisocial influence. Our findings have implications for bolstering and blocking the effects of prosocial and antisocial influence on moral behavior.

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. 
(2020, November 13). 

Abstract

People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.
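To see how the diffusion-model parameters map onto the reported effects, here is a toy simulation (invented parameter values, not the fitted model from the paper): widening the boundary separation, the change attributed to expecting contextual updates, lengthens simulated response times, whereas valence effects would instead appear as changes in drift rate and starting point.

```python
import random

def ddm_trial(drift, boundary, start=0.0, dt=0.01, noise=0.3, rng=random):
    """Simulate one diffusion trial; return (choice, response_time).
    Parameter values are invented, not the fitted values from the paper."""
    evidence, rt = start, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        rt += dt
    return ("accept" if evidence > 0 else "reject"), rt

def mean_rt(boundary, drift=0.4, n=1000):
    rng = random.Random(2)
    return sum(ddm_trial(drift, boundary, rng=rng)[1] for _ in range(n)) / n

# Expecting contextual updates ~ wider boundary separation ~ slower, more
# cautious judgments (the boundary effect reported above); valence effects
# would instead show up as changes in drift rate and starting point.
print(mean_rt(boundary=0.8))  # no contextual information expected
print(mean_rt(boundary=1.2))  # contextual information expected: longer RTs
```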

Sunday, November 22, 2020

The logic of universalization guides moral judgment

Levine, S., et al.
PNAS, October 20, 2020, 117(42), 26158-26169.
First published October 2, 2020.

Abstract

To explain why an action is wrong, we sometimes say, “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that, alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalizing holds an important place in our moral minds.

Significance

Humans have several different ways to decide whether an action is wrong: We might ask whether it causes harm or whether it breaks a rule. Moral psychology attempts to understand the mechanisms that underlie moral judgments. Inspired by theories of “universalization” in moral philosophy, we describe a mechanism that is complementary to existing approaches, demonstrate it in both adults and children, and formalize a precise account of its cognitive mechanisms. Specifically, we show that, when making judgments in novel circumstances, people adopt moral rules that would lead to better consequences if (hypothetically) universalized. Universalization may play a key role in allowing people to construct new moral rules when confronting social dilemmas such as voting and environmental stewardship.
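A toy version of the universalization computation can make the idea concrete. In threshold problems like the ones studied here, one additional actor is harmless, but the action is judged wrong when the outcome would be bad if everyone who is interested in acting did so. The utility numbers and threshold below are invented; this is only a schematic of the logic, not the paper's computational model.

```python
def outcome_utility(num_actors, capacity=5):
    """Collective outcome if `num_actors` people take the action: harmless
    below a threshold, bad for everyone above it. Numbers are invented."""
    return 10.0 if num_actors <= capacity else -50.0

def universalization_judgment(num_interested, capacity=5):
    """Judge the action by asking 'what if everyone who wants to did it?'
    rather than by the marginal harm of a single actor."""
    if_universalized = outcome_utility(num_interested, capacity)
    if_nobody = outcome_utility(0, capacity)
    return "wrong" if if_universalized < if_nobody else "acceptable"

# One extra actor is harmless in both cases, but universalization separates
# them by how many people are interested in acting relative to the threshold.
print(universalization_judgment(num_interested=3))   # acceptable
print(universalization_judgment(num_interested=20))  # wrong
```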

Sunday, March 22, 2020

Our moral instincts don’t match this crisis

Yascha Mounk
The Atlantic
Originally posted March 19, 2020

Here is an excerpt:

There are at least three straightforward explanations.

The first has to do with simple ignorance. For those of us who have spent the past weeks obsessing about every last headline regarding the evolution of the crisis, it can be easy to forget that many of our fellow citizens simply don’t follow the news with the same regularity—or that they tune into radio shows and television networks that have, shamefully, been downplaying the extent of the public-health emergency. People crowding into restaurants or hanging out in big groups, then, may simply fail to realize the severity of the pandemic. Their sin is honest ignorance.

The second explanation has to do with selfishness. Going out for trivial reasons imposes a real risk on those who will likely die if they contract the disease. Though the coronavirus does kill some young people, preliminary data from China and Italy suggest that they are, on average, less strongly affected by it. For those who are far more likely to survive, it is—from a purely selfish perspective—less obviously irrational to chance such social encounters.

The third explanation has to do with the human tendency to make sacrifices for the suffering that is right in front of our eyes, but not the suffering that is distant or difficult to see.

The philosopher Peter Singer presented a simple thought experiment in a famous paper. If you went for a walk in a park, and saw a little girl drowning in a pond, you would likely feel that you should help her, even if you might ruin your fancy shirt. Most people recognize a moral obligation to help another at relatively little cost to themselves.

Then Singer imagined a different scenario. What if a girl was in mortal danger halfway across the world, and you could save her by donating the same amount of money it would take to buy that fancy shirt? The moral obligation to help, he argued, would be the same: The life of the distant girl is just as important, and the cost to you just as small. And yet, most people would not feel the same obligation to intervene.

The same might apply in the time of COVID-19. Those refusing to stay home may not know the victims of their actions, even if they are geographically proximate, and might never find out about the terrible consequences of what they did. Distance makes them unjustifiably callous.

The info is here.

Tuesday, November 12, 2019

Effect of Psilocybin on Empathy and Moral Decision-Making

Thomas Pokorny, Katrin H. Preller, et al.
International Journal of Neuropsychopharmacology, 
Volume 20, Issue 9, September 2017, Pages 747–757
https://doi.org/10.1093/ijnp/pyx047

Abstract

Background
Impaired empathic abilities lead to severe negative social consequences and influence the development and treatment of several psychiatric disorders. Furthermore, empathy has been shown to play a crucial role in moral and prosocial behavior. Although the serotonin system has been implicated in modulating empathy and moral behavior, the relative contribution of the various serotonin receptor subtypes is still unknown.

Methods
We investigated the acute effect of psilocybin (0.215 mg/kg p.o.) in healthy human subjects on different facets of empathy and hypothetical moral decision-making using the multifaceted empathy test (n=32) and the moral dilemma task (n=24).

Results
Psilocybin significantly increased emotional, but not cognitive empathy compared with placebo, and the increase in implicit emotional empathy was significantly associated with psilocybin-induced changed meaning of percepts. In contrast, moral decision-making remained unaffected by psilocybin.

Conclusions
These findings provide first evidence that psilocybin has distinct effects on social cognition by enhancing emotional empathy but not moral behavior. Furthermore, together with previous findings, psilocybin appears to promote emotional empathy presumably via activation of serotonin 2A/1A receptors, suggesting that targeting serotonin 2A/1A receptors has implications for potential treatment of dysfunctional social cognition.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., et al. (2015).
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.
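A simplified way to see how guilt aversion and inequity aversion can prescribe different choices from the same trial structure is sketched below, using a one-parameter utility for each strategy in a trust-game-like setting. The functional forms and weight are invented simplifications, not the paper's integrative computational model; the "moral opportunist" would simply follow whichever rule leaves more money on a given trial while still obeying a moral rule.

```python
def trustee_utility(returned, multiplied, expectation, strategy, theta=1.5):
    """Utility for a trustee returning `returned` of `multiplied` coins.
    'guilt' penalizes falling short of what the investor expects back;
    'inequity' penalizes unequal final payoffs. One-parameter simplifications
    with an invented weight, not the paper's integrative model."""
    keep = multiplied - returned
    if strategy == "guilt":
        return keep - theta * max(0, expectation - returned)
    if strategy == "inequity":
        return keep - theta * abs(keep - returned)
    raise ValueError(strategy)

def best_return(multiplied, expectation, strategy):
    return max(range(multiplied + 1),
               key=lambda r: trustee_utility(r, multiplied, expectation, strategy))

# When the investor's expectation and an equal split diverge, the two rules
# prescribe different returns; a "moral opportunist" could follow whichever
# rule leaves more money on that particular trial while still obeying a rule.
print(best_return(multiplied=20, expectation=14, strategy="guilt"))     # returns 14
print(best_return(multiplied=20, expectation=14, strategy="inequity"))  # returns 10
```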

The research is here.

Monday, December 3, 2018

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Thursday, May 3, 2018

We can train AI to identify good and evil, and then use it to teach us morality

Ambarish Mitra
Quartz.com
Originally published April 5, 2018

Here is an excerpt:

To be fair, because this AI Hercules will be relying on human inputs, it will also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity. However, this line of thinking tends to treat AI as an end goal. We can’t rely on AI to solve our problems, but we can use it to help us solve them.

If we could use AI to improve morality, we could program that improved moral structure output into all AI systems—a moral AI machine that effectively builds upon itself over and over again and improves and proliferates our morality capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.

While a theoretically perfect AI morality machine is just that, theoretical, there is hope for using AI to improve our moral decision-making and our overall approach to important, worldly issues.

The information is here.

Monday, March 19, 2018

‘The New Paradigm,’ Conscience and the Death of Catholic Morality

E. Christian Brugger
National Catholic Register
Originally published February 23, 2018

Vatican Secretary of State Cardinal Pietro Parolin, in a recent interview with Vatican News, contends the controversial reasoning expressed in the apostolic exhortation Amoris Laetitia (The Joy of Love) represents a “paradigm shift” in the Church’s reasoning, a “new approach,” arising from a “new spirit,” which the Church needs to carry out “the process of applying the directives of Amoris Laetitia.”

His reference to a “new paradigm” is murky. But its meaning is not. Among other things, he is referring to a new account of conscience that exalts the subjectivity of the process of decision-making to a degree that relativizes the objectivity of the moral law. To understand this account, we might first look at a favored maxim of Pope Francis: “Reality is greater than ideas.”

It admits no single-dimensional interpretation, which is no doubt why it’s attractive to the “Pope of Paradoxes.” But in one area, the arena of doctrine and praxis, a clear meaning has emerged. Dogma and doctrine constitute ideas, while praxis (i.e., the concrete lived experience of people) is reality: “Ideas — conceptual elaborations — are at the service of … praxis” (Evangelii Gaudium, 232).

In relation to the controversy stirred by Amoris Laetitia, “ideas” is interpreted to mean Church doctrine on thorny moral issues such as, but not only, Communion for the divorced and civilly remarried, and “reality” is interpreted to mean the concrete circumstances and decision-making of ordinary Catholics.

The article is here.

Tuesday, March 6, 2018

Toward a Psychology of Moral Expansiveness

Daniel Crimston, Matthew J. Hornsey, Paul G. Bain, Brock Bastian
Current Directions in Psychological Science 
Vol. 27, Issue 1, pp. 14-19

Abstract

Theorists have long noted that people’s moral circles have expanded over the course of history, with modern people extending moral concern to entities—both human and nonhuman—that our ancestors would never have considered including within their moral boundaries. In recent decades, researchers have sought a comprehensive understanding of the psychology of moral expansiveness. We first review the history of conceptual and methodological approaches in understanding our moral boundaries, with a particular focus on the recently developed Moral Expansiveness Scale. We then explore individual differences in moral expansiveness, attributes of entities that predict their inclusion in moral circles, and cognitive and motivational factors that help explain what we include within our moral boundaries and why they may shrink or expand. Throughout, we highlight the consequences of these psychological effects for real-world ethical decision making.

The article is here.

Monday, February 19, 2018

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

Here is an excerpt:

Rather than being a straightforward matter of doing the right thing, medical students’ understandings of morally correct behavior differ from one individual to another. This is partly because moral judgments frequently concern decisions about behaviors that might entail some form of harm to another, and different individuals hold different perspectives about moral trade-offs (i.e., how to decide between two courses of action when the consequences of both have morally undesirable effects). It is partly because the majority of human behavior arises within a person-situation interaction. Indeed, moral “flexibility” suggests that though we are motivated to do the right thing, any moral principle can bring forth a variety of context-dependent moral judgments and decisions. Moral rules and principles are abstract ideas—rather than facts—and these ideas need to be operationalized and applied to specific situations. Each situation will have different affordances highlighting one facet or another of any given moral value. Thus, when faced with morally dubious situations—such as being asked to participate in lapses of patient consent by senior clinicians during workplace learning events—medical students’ subsequent actions (compliance or resistance) differ.

The article is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Quillette
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.

The article is here.