Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Tuesday, April 22, 2025

Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI

Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022).
Journal of Business Ethics, 178(4), 1027–1041.

Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Here are some thoughts:

If you watched the TV series Westworld on HBO, then this research makes a great deal more sense.

This study examines how individuals behave morally toward AI agents and self-service machines, comparing consumers' moral concerns and behaviors when they interact with technology versus humans in a retail setting. Across the studies, moral intention, such as the intention to report a checkout error, is less likely to arise with AI checkout and self-checkout machines than with a human cashier, and it decreases further as people perceive the machine to be less humanlike. The authors attribute this decline in morality to reduced guilt toward these technologies: the non-human nature of the interaction evokes less guilt, which ultimately diminishes moral behavior. These findings offer insight into how technological advancements influence consumer behavior and give businesses and retailers guidance on moral intentions across different types of shopping interactions.

These findings carry several important implications for psychologists. They underscore the nuanced ways in which technology shapes moral and ethical decision-making. The research suggests that the perceived "humanness" of an entity, whether human or AI, significantly influences whether moral behavior is elicited, with implications for understanding social cognition, anthropomorphism, and how individuals form relationships with non-human entities. The findings also reinforce the role of guilt in moral behavior, offering insight into the emotional and cognitive processes that underlie ethical conduct. Finally, they can inform interventions or strategies aimed at promoting ethical behavior in technology-mediated interactions, a concern that is increasingly relevant as AI and automation become more prevalent.