Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, August 20, 2024

What would qualify an artificial intelligence for moral standing?

Ladak, A.
AI and Ethics, 4, 213–228 (2024).

Abstract

What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

Here are some thoughts:

This article examines what criteria an artificial intelligence (AI) must satisfy to have moral standing. While sentience is often treated as a requirement, the author argues that some non-sentient AIs might qualify too: those that are conscious and have non-valenced preferences and goals, and those that are non-conscious but have sufficiently cognitively complex preferences and goals. If so, the issue of AI moral standing may be broader and more urgent than previously thought. The author also suggests that policies designed to be inclusive of sentient AIs should broaden their scope to cover all AIs with morally relevant interests, and that even those skeptical of AI sentience or consciousness should take the issue seriously. Given the uncertainties that remain, this is an important topic for future research.