Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, April 5, 2023

Moral Judgments of Human vs. AI Agents in Moral Dilemmas

Zhang, Y., Wu, J., Yu, F., & Xu, L. (2023). Moral judgments of human vs. AI agents in moral dilemmas. Behavioral Sciences, 13(2), 181.

Abstract

Artificial intelligence has quickly integrated into human society, and its moral decision-making has begun to seep into our lives. Research on moral judgments of artificial intelligence behavior is therefore becoming increasingly important. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Across three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments: participants rated AI agents’ behavior as more immoral and deserving of more blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments: participants rated action (a utilitarian act) as less moral and permissible, and more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by people engaging different processing systems when making moral judgments in each type of dilemma.

From the Discussion

Overall, these findings revealed that, in the trolley dilemma, people are more sensitive to the difference between humans and AI agents than to the difference between action and inaction. Conversely, in the footbridge dilemma, people are more sensitive to action versus inaction. This may be because people’s moral judgments are driven by different response processes in the two dilemmas: controlled cognitive processes are typically engaged by dilemmas such as the trolley dilemma, whereas automatic emotional responses are typically evoked by dilemmas such as the footbridge dilemma. Thus, in the trolley dilemma, controlled cognitive processes may draw people’s attention to the agent type and lead to the judgment that it is inappropriate for AI agents to make moral decisions. In the footbridge dilemma, the action of pushing someone off a footbridge may evoke a stronger negative emotion than the action of operating a switch in the trolley dilemma. Driven by these automatic negative emotional responses, people would focus more on whether the agent performed the harmful act, and judge that act less acceptable and more morally wrong.

However, it should be noted that our work has some limitations and suggests several avenues for future research. First, the current study only examined how people make moral judgments about humans and AI agents; it did not investigate the underlying psychological mechanism, so all interpretations of the results are speculative. Future research could explore why people are reluctant to let AI agents make moral decisions in the trolley dilemma, why people apply the same moral norms to humans and AI agents in the footbridge dilemma, and why people show different patterns of moral judgment across the two dilemmas. Previous research provides some pointers. For example, the interpretability and consistency of behaviors can increase people’s acceptance of AI, and increased anthropomorphism of an autonomous agent can mitigate blame for the agent’s involvement in an undesirable outcome. Individual differences, including personality, developmental experiences, and cultural background, may also influence people’s attitudes toward AI agents. Second, to exclude the potential influence of individual differences between Experiments 1 and 2, we conducted Experiment 3 with a within-subjects design in which participants read both scenarios; however, the processing system activated by the first scenario may have influenced participants’ judgments about the subsequent scenario. For example, participants who read the footbridge dilemma first may attend to whether the character acted because of the strong negative emotion that dilemma evokes, and this emotion may then drive them to focus on the character’s action in the subsequent trolley dilemma as well, just as they did in the footbridge dilemma. Future research could consider other methodological approaches to exclude the effects of individual differences and order effects.