Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications are profound, raising ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper can only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.