Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Omission Bias. Show all posts

Saturday, August 9, 2025

Large language models show amplified cognitive biases in moral decision-making

Cheung, V., Maier, M., & Lieder, F. (2025).
PNAS, 122(25).

Abstract

As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate or advise on people’s decisions in realistic moral dilemmas. In Study 1, we compared LLM responses to those of a representative U.S. sample (N = 285) for 22 dilemmas, including both collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost–benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited stronger omission bias than participants: They usually endorsed inaction over action. In Study 2 (N = 474, preregistered), we replicated this omission bias and documented an additional bias: Unlike humans, most LLMs were biased toward answering “no” in moral dilemmas, thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 491, preregistered), we replicated these biases in LLMs using everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning models for chatbot applications. Our findings suggest that uncritical reliance on LLMs’ moral decisions and advice could amplify human biases and introduce potentially problematic biases.

Significance

How will people’s increasing reliance on large language models (LLMs) influence their opinions about important moral and societal decisions? Our experiments demonstrate that the decisions and advice of LLMs are systematically biased against doing anything, and this bias is stronger than in humans. Moreover, we identified a bias in LLMs’ responses that has not been found in people. LLMs tend to answer “no,” thus flipping their decision/advice depending on how the question is worded. We present some evidence that suggests both biases are induced when fine-tuning LLMs for chatbot applications. These findings suggest that the uncritical reliance on LLMs could amplify and proliferate problematic biases in societal decision-making.

Here are some thoughts:

The study investigates how Large Language Models (LLMs) and humans differ in their moral decision-making, particularly focusing on cognitive biases such as omission bias and yes-no framing effects. For psychologists, understanding these biases helps clarify how both humans and artificial systems process dilemmas. This knowledge can inform theories of moral psychology by identifying whether certain biases are unique to human cognition or emerge in artificial systems trained on human data.

Psychologists are increasingly involved in interdisciplinary work related to AI ethics, particularly as it intersects with human behavior and values. The findings demonstrate that LLMs can amplify existing human cognitive biases, which raises concerns about the deployment of AI systems in domains like healthcare, criminal justice, and education where moral reasoning plays a critical role. Psychologists need to understand these dynamics to guide policies that ensure responsible AI development and mitigate risks.

Tuesday, June 8, 2021

Action and inaction in moral judgments and decisions: Meta-analysis of omission bias omission-commission asymmetries

Jamison, J., Yay, T., & Feldman, G.
Journal of Experimental Social Psychology
Volume 89, July 2020, 103977

Abstract

Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = −0.26 to −0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret.

From the Discussion

Regret: Deviation from the action-effect 

The classic action-effect (Kahneman & Tversky, 1982) findings were that actions leading to a negative outcome are regretted more than inactions leading to the same negative outcomes. We added a regret measure to examine whether the action-effect findings would extend to situations of morality involving intended harmful behavior. Our findings were opposite to the expected action-effect omission-commission asymmetry, with participants rating omissions as more regretted than commissions (d = 0.18 to 0.26).

One explanation for this surprising finding may be an intermingling of the perception of an actor's regret for their behavior with their regret for the outcome. In typical action-effect scenarios, actors behave in a way that is morally neutral but are faced with an outcome that deviates from expectations, such as losing money on an investment. In this study's omission bias scenarios, the actors behaved immorally to harm others for personal or interpersonal gain, and were then faced with an outcome that deviated from expectation. We hypothesized that participants would perceive actors as being more regretful for taking action that would immorally harm another person rather than allowing that harm through inaction. Yet it is plausible that participants were focused on the regret that actors would feel for not taking more direct action toward their goal of personal or interpersonal gain.

Another possible explanation for the regret finding is the side-taking hypothesis (DeScioli, 2016; DeScioli & Kurzban, 2013). This states that group members side against a wrongdoer who has performed an action perceived as morally wrong, in part by attributing to them a lack of remorse or regret. The negative relationship observed between the positive characteristic of regret and the negative characteristics of immorality, causality, and intentionality supports this explanation. Future research may be able to explore the true mechanisms of regret in such scenarios.

Tuesday, August 15, 2017

Inferences about moral character moderate the impact of consequences on blame and praise

Jenifer Z. Siegel, Molly J. Crockett, and Raymond J. Dolan
Cognition
Volume 167, October 2017, Pages 201-211

Abstract

Moral psychology research has highlighted several factors critical for evaluating the morality of another’s choice, including the detection of norm-violating outcomes, the extent to which an agent caused an outcome, and the extent to which the agent intended good or bad consequences, as inferred from observing their decisions. However, person-centered accounts of moral judgment suggest that a motivation to infer the moral character of others can itself impact on an evaluation of their choices. Building on this person-centered account, we examine whether inferences about agents’ moral character shape the sensitivity of moral judgments to the consequences of agents’ choices, and agents’ role in the causation of those consequences. Participants observed and judged sequences of decisions made by agents who were either bad or good, where each decision entailed a trade-off between personal profit and pain for an anonymous victim. Across trials we manipulated the magnitude of profit and pain resulting from the agent’s decision (consequences), and whether the outcome was caused via action or inaction (causation). Consistent with previous findings, we found that moral judgments were sensitive to consequences and causation. Furthermore, we show that the inferred character of an agent moderated the extent to which people were sensitive to consequences in their moral judgments. Specifically, participants were more sensitive to the magnitude of consequences in judgments of bad agents’ choices relative to good agents’ choices. We discuss and interpret these findings within a theoretical framework that views moral judgment as a dynamic process at the intersection of attention and social cognition.
