Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, how the presence of AI agents can reduce or enhance trust between groups of humans, and how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper can only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare us for a world shared with, and complicated by, machines.

Sunday, November 15, 2015

Morality takes two: Dyadic morality and mind perception.

Gray, K., & Wegner, D. M. (2012).
In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil. Herzliya series on personality and social psychology (pp. 109-127). Washington, DC: American Psychological Association.

Abstract

We propose that all moral acts are (at least implicitly) dyadic, involving two different people, one as a moral agent and one as a moral patient. The idea that people cleave the moral world into agents and patients is as old as Aristotle (Freeland, 1985), but out of this simple claim—that morality takes two—grows a theory of morality with a host of implications for psychology and the real world. Dyadic morality can help explain, for instance, why victims escape blame, why people believe in God, why people harm saints, why some advocate torture, and why those who do good become more physically powerful. In this chapter, we explore the idea of dyadic morality, its extensions and implications. In particular, we examine the following four tenets of dyadic morality:

1. Morality involves a moral agent helping or harming a moral patient.
2. Morality and mind perception are linked: Agency is tied to moral agents; experience is tied to moral patients.
3. Morality requires a complete dyad: An isolated moral agent creates a moral patient; an isolated moral patient creates a moral agent.
4. Morality requires two different people as agent and patient, which means that people are perceived as either agents or patients, both in moral acts and more generally, a phenomenon called moral typecasting.

We first explore the link between mind and morality, then examine dyadic help and harm, then explain how moral dyads complete themselves, and finally consider moral typecasting. Why start first with mind perception? Perceptions of mind are tightly bound to moral judgments, and as we show, the structure of mind perception is split into two complementary parts that correspond to the two parts of morality. Perceptions of mind underlie the most fundamental of moral decisions: who deserves moral rights and who deserves moral responsibility.

A copy of the chapter is here.

Thursday, February 12, 2015

Dimensions of Moral Emotions

By Kurt Gray and Daniel M. Wegner
Emotion Review, Vol. 3, No. 3 (July 2011), 258-260

Abstract

Anger, disgust, elevation, sympathy, relief. If the subjective experience of each of these emotions is the same whether elicited by moral or nonmoral events, then what makes moral emotions unique? We suggest that the configuration of moral emotions is special—a configuration given by the underlying structure of morality. Research suggests that people divide the moral world along the two dimensions of valence (help/harm) and moral type (agent/patient). The intersection of these two dimensions gives four moral exemplars—heroes, villains, victims and beneficiaries—each of which elicits unique emotions. For example, victims (harm/patient) elicit sympathy and sadness. Dividing moral emotions into these four quadrants provides predictions about which emotions reinforce, oppose and complement each other.

The entire article is here.