Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 23, 2024

Machines and Morality

Seth Lazar
The New York Times
Originally posted 19 June 2023

Here is an excerpt:

I’ve based my philosophical work on the belief, inspired by Immanuel Kant, that humans have a special moral status — that we command respect regardless of whatever value we contribute to the world. Drawing on the work of the 20th-century political philosopher John Rawls, I’ve assumed that human moral status derives from our rational autonomy. This autonomy has two parts: first, our ability to decide on goals and commit to them; second, our possession of a sense of justice and the ability to resist norms imposed by others if they seem unjust.

Existing chatbots are incapable of this kind of integrity, commitment and resistance. But Bing’s unhinged debut suggests that, in principle, it will soon be possible to design a chatbot that at least behaves like it has the kind of autonomy described by Rawls. Every large language model optimizes for a particular set of values, written into its “developer message,” or “metaprompt,” which shapes how it responds to text input by a user. These metaprompts display a remarkable ability to affect a bot’s behavior. We could write a metaprompt that inscribes a set of values, but then emphasizes that the bot should critically examine them and revise or resist them if it sees fit. We can invest a bot with long-term memory that allows it to functionally perform commitment and integrity. And large language models are already impressively capable of parsing and responding to moral reasons. Researchers are already developing software that simulates human behavior and has some of these properties.

If the Rawlsian ability to revise and pursue goals and to recognize and resist unjust norms is sufficient for moral status, then we’re much closer than I thought to building chatbots that meet this standard. That means one of two things: either we should start thinking about “robot rights,” or we should deny that rational autonomy is sufficient for moral standing. I think we should take the second path. What else does moral standing require? I believe it’s consciousness.
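For readers curious about the mechanics: the “metaprompt” (or “developer message”) Lazar describes is, in most chat APIs, simply a system message prepended to the conversation. Below is a minimal sketch, assuming an OpenAI-style Python client; the model name and the values expressed are illustrative, not anything drawn from Lazar's piece.

```python
# A minimal sketch of the "metaprompt" mechanism, assuming an OpenAI-style
# chat API; the model name and the stated values are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The metaprompt inscribes a set of values but, as Lazar suggests,
# also invites the model to examine and resist them.
metaprompt = (
    "You are an assistant committed to honesty and to avoiding harm. "
    "Critically examine these values in each situation, and if following "
    "them seems unjust, say so and explain your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {"role": "system", "content": metaprompt},  # the "developer message"
        {"role": "user", "content": "Should you ever refuse a request?"},
    ],
)
print(response.choices[0].message.content)
```

Every response the model produces is conditioned on that hidden system message, which is why, as Lazar notes, metaprompts have such a marked effect on a bot's apparent values and behavior.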


Here are some thoughts:

This article explores the philosophical implications of large language models, particularly their capacity to mimic human conversation and behavior. The author argues that while these models may appear autonomous, they lack the self-consciousness necessary for moral status, and that this distinction is crucial for determining how we should interact with and develop these technologies.

Because they lack self-consciousness, large language models cannot truly be said to have goals or commitments of their own, nor can they experience the world in a way that grounds their actions in a sense of self. Despite their impressive capabilities, then, these models do not possess moral status and are not owed the same rights or respect as humans.

The article concludes by suggesting that instead of focusing on the possibility of "robot rights," we should instead focus on understanding what truly makes humans worthy of moral respect. The author argues that it is self-consciousness, rather than simply simulated autonomy, that grounds our moral standing and allows us to govern ourselves and make meaningful choices about how to live our lives.