Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023). Patterns, 4(8), 100818. https://doi.org/10.1016/j.patter.2023.100818

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, as targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for their owners and users, since it is unclear how those systems ought to be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear, and that AI systems should be designed to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts: systems designed so that it is evident to users that they are not sentient beings. The second is to create systems that clearly do deserve moral consideration as sentient beings: systems whose design makes it evident that they warrant the kind of moral concern we extend to humans or other sentient animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems whose sentience is plausibly debatable.
  • Morally confusing AI systems place their owners and users in ethical dilemmas, because it is unclear how such systems ought to be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.