Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 28, 2025

Moral reasoning in a digital age: blaming artificial intelligence for incorrect high-risk decisions

Leichtmann, B., et al. (2024). Current Psychology.

Abstract

The increasing involvement of Artificial Intelligence (AI) in moral decision situations raises the possibility of users attributing blame to AI-based systems for negative outcomes. In two experimental studies with a total of N=911 participants, we explored the attribution of blame and underlying moral reasoning. Participants had to classify mushrooms in pictures as edible or poisonous with support of an AI-based app. Afterwards, participants read a fictitious scenario in which a misclassification due to an erroneous AI recommendation led to the poisoning of a person. In the first study, increased system transparency through explainable AI techniques reduced blaming of AI. A follow-up study showed that attribution of blame to each actor in the scenario depends on their perceived obligation and capacity to prevent such an event. Thus, blaming AI is indirectly associated with mind attribution and blaming oneself is associated with the capability to recognize a wrong classification. We discuss implications for future research on moral cognition in the context of human–AI interaction.

Here are some thoughts:

This research explores how people assign blame when AI systems make mistakes that lead to harmful outcomes.

In two experiments with a total of 911 participants, the study examined blame attribution and the underlying moral reasoning when AI supports decision-making. Participants used an AI-based app to classify pictured mushrooms as edible or poisonous. They then read a fictitious scenario in which an erroneous AI recommendation led to a misclassification that poisoned a person.

The study's key findings include:
  • In the first study, increasing system transparency through explainable AI techniques reduced the blame attributed to the AI.
  • The second study showed that blame attribution depends on each actor's perceived obligation and capacity (the AI's, the user's, and so on) to prevent the harmful event.
  • Blaming the AI is linked to mind attribution, i.e., the degree to which the AI is perceived as having a mind of its own, while self-blame is associated with the individual's perceived capability to recognize the AI's errors.

This research is important for psychologists for several reasons:
  • It provides insights into how people perceive AI as a moral agent and how they incorporate AI into their moral decision-making processes.
  • The findings highlight the complexities of blame attribution in human-AI interaction, which is crucial for understanding responsibility, accountability, and trust in AI systems.
  • Understanding the factors that influence blame attribution, such as perceived agency, mind attribution, and the availability of explanations, can inform the design of AI systems that foster trust and appropriate accountability.

The research also has implications for legal and ethical considerations surrounding AI, particularly in cases where AI systems are involved in accidents or errors that cause harm.