Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautil.us
Originally posted 9 MAY 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.