Shannon Vallor
Noema Magazine
Originally posted 23 May 2024
Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.
Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, or the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.
Here are some thoughts:
This essay critiques the prevalent notion of superhuman AI, arguing that this rhetoric diminishes the unique qualities of human intelligence. The author challenges the idea that surpassing humans in task completion equates to superior intelligence, emphasizing the irreplaceable aspects of human consciousness, emotion, and creativity. The essay contrasts the narrow definition of intelligence used by some AI researchers with a broader understanding that encompasses human experience and values. Ultimately, the author proposes a future where AI complements rather than replaces human capabilities, fostering a more humane and sustainable society.