Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, August 6, 2024

Artificial Morality: Differences in Responses to Moral Choices by Human and Artificial Agents

Armbruster, D., Mandl, S., Zeiler, A., & Strobel, A. (2024, June 4).

Abstract

A consensus on moral "rights" and "wrongs" is essential for ensuring societal functioning. Moral decision-making has been investigated for decades, focusing on human agents. More recently, research has begun to examine how humans evaluate artificial moral agents. With the increasing presence of artificial intelligence (AI) in society, this question becomes ever more relevant. We investigated responses from a third-party perspective to moral judgments of human and artificial agents in high-stakes and low-stakes dilemmas. High-stakes dilemmas describe life-or-death scenarios, while low-stakes dilemmas have non-lethal but nevertheless substantial negative consequences. In two online studies, participants responded to the actions and inactions of human and artificial agents in four high-stakes scenarios (N1 = 491) and four low-stakes dilemmas (N2 = 490). In line with previous research, agents received generally more blame in high-stakes scenarios, and actions resulted overall in more blame than inactions. While there was no effect of scenario type on trust, agents were more trusted when they did not act. Although humans, on average, were blamed more than artificial agents, they were nevertheless also more trusted. The most important predictor of blame and trust was whether participants agreed with the moral choice of an agent and considered the chosen course of action morally appropriate, regardless of the nature of the agent. Religiosity emerged as a further predictor of blame for both human and artificial agents, while trait psychopathy was associated with more blame of and less trust in human agents. Additionally, negative attitudes towards robots predicted blame and trust in artificial agents.


Here are some thoughts:

This study on moral judgments of human and artificial agents in high- and low-stakes dilemmas offers valuable insights for ethics education and ethical decision-making. The research reveals that while there were no overall differences in the perceived appropriateness of actions between human and artificial agents, gender differences emerged in high-stakes scenarios. Women were less likely to endorse harmful actions for the greater good when performed by human agents, but more likely to approve of such actions when performed by artificial agents. This gender disparity in moral judgments highlights the need to be aware of potential biases in ethical reasoning.

The study also found that blame and trust were affected by dilemma type and decision type, with actions generally resulting in higher blame and reduced trust compared to inactions. This aligns with previous research on omission bias and emphasizes the complexity of moral decision-making. Additionally, the research identified several individual differences that influenced moral judgments, blame attribution, and trust: religiosity predicted blame of both human and artificial agents, trait psychopathy was associated with more blame of and less trust in human agents, and negative attitudes towards robots predicted blame of and trust in artificial agents. These findings underscore the importance of considering individual differences in ethical decision-making processes and when interpreting clients' moral reasoning.

Furthermore, the study touched on the role of Need for Cognition (NFC) in moral judgments, suggesting that cognitive abilities and motivation may contribute to differences in how people process moral problems. This is particularly relevant for clinical psychologists when assessing clients' decision-making processes and designing interventions to improve ethical reasoning. The research also highlighted cultural differences in attitudes towards robots and AI, awareness of which is crucial for clinical psychologists working in diverse settings or with multicultural populations. As AI becomes more prevalent in healthcare, including mental health, understanding how people perceive and trust artificial agents in moral decision-making is essential for clinical psychologists considering the implementation of AI-assisted tools in their practice.