Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J.B., Anderson, M.H. 
J Bus Ethics 178, 961–976 (2022).


Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.

The article is paywalled; the link is above.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. There is no single, monolithic standard of morality that can be applied to AI; instead, a plurality of moral foundations must be considered when evaluating these systems.
  • It also highlights how difficult that assessment is: the impact of AI systems on human well-being is hard to measure, and there is no objective test for whether a system is ethical. The authors suggest that a pluralistic approach to ethical evaluation, one that takes a variety of moral perspectives into account, is the most promising way forward.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area of research, and I hope it receives further attention in the future.