Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Anthropomorphism.

Saturday, March 1, 2025

The Dangerous Illusion of AI Consciousness

Shannon Vallor
Closer to the Truth
Originally published 7 Aug 24

OpenAI recently announced GPT-4o: the latest, multimodal version of the generative AI GPT model class that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations upon command, expressed in both voice and text.

This next step in the commercial rollout of AI chatbot technology might seem like a nothingburger. After all, we don’t seem to be getting any nearer to AGI, or to the apocalyptic Terminator scenarios that the AI hype/doom cycle was warning of just one year ago. But it’s not benign at all—it might be the most dangerous moment in generative AI’s development.

What’s the problem? It’s far more than the ick factor of seeing yet another AI assistant marketed as a hyper-feminized, irrepressibly perky and compliant persona, one that will readily bend ‘her’ (its) emotional state to the will of the two men running the demo (plus another advertised bonus feature—you can interrupt ‘her’ all day long with no complaints!).

The bigger problem is the grand illusion of artificial consciousness that is now more likely to gain a stronger hold on many human users of AI, thanks to the multimodal, real-time conversational capacity of a GPT-4o-enabled chatbot and others like it, such as Google DeepMind’s Gemini Live. And consciousness is not the sort of thing it is good to have grand illusions about.


Here are some thoughts:

OpenAI's recent release of GPT-4o represents a significant milestone in generative AI technology. While the model does not demonstrate a dramatic intellectual leap over its predecessor, it introduces more natural and fluid real-time interactions, including sophisticated voice communication, image interpretation, and emotional intonation adjustments. These capabilities, however, extend far beyond mere technological improvement and raise profound questions about human-AI interaction.

The most critical concern surrounding GPT-4o is not its technical specifications, but the potential for creating a compelling illusion of consciousness. By enabling multimodal, dynamically social interactions, the AI risks deepening users' tendencies to anthropomorphize technology. This is particularly dangerous because humans have an innate, often involuntary propensity to attribute mental states to non-sentient objects, a tendency that sophisticated AI design can dramatically amplify.

The risks are multifaceted and potentially far-reaching. Users—particularly vulnerable populations like teenagers, emotionally stressed partners, or isolated elderly individuals—might develop inappropriate emotional attachments to these AI systems. These artificially intelligent companions, engineered to be perpetually patient, understanding, and responsive, could compete with and potentially supplant genuine human relationships. The AI's ability to customize its personality, remember conversation history, and provide seemingly empathetic responses creates a seductive alternative to the complexity of human interaction.

Critically, despite their impressive capabilities, these AI models are not conscious. They remain sophisticated statistical engines designed to extract and generate predictive patterns from human data. No serious researchers, including those at OpenAI and Google, claim these systems possess genuine sentience or self-awareness. They are fundamentally advanced language processing tools paired with sensory inputs, not sentient beings.

The potential societal implications are profound. As these AI assistants become more prevalent, they risk fundamentally altering our understanding of companionship, emotional support, and interpersonal communication. The danger lies not in some apocalyptic scenario of AI dominance, but in the more insidious potential for technological systems to gradually erode the depth and authenticity of human emotional connections.

Navigating this new technological landscape will require careful reflection, robust ethical frameworks, and a commitment to understanding the essential differences between artificial intelligence and human consciousness. While GPT-4o represents a remarkable technological achievement, its deployment demands rigorous scrutiny and a nuanced approach that prioritizes human agency and genuine interpersonal relationships.

Saturday, September 28, 2024

Humanizing Chatbots Is Hard To Resist — But Why?

Madeline G. Reinecke
Practical Ethics
Originally posted 30 Aug 24

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more he became convinced: The model had become sentient and was being deprived of its rights as a Google employee. 

Though Google swiftly denied Lemoine’s claims, I’ve since wondered whether this anthropomorphic phenomenon — seeing a “mind in the machine” — might be a common occurrence in LLM users. In fact, in this post, I’ll argue that it’s bound to be common, and perhaps even irresistible, due to basic facts about human psychology. 

Emerging work suggests that a non-trivial number of people do attribute humanlike characteristics to LLM-based chatbots. This is to say they “anthropomorphize” them. In one study, 67% of participants attributed some degree of phenomenal consciousness to ChatGPT: saying, basically, there is “something it is like” to be ChatGPT. In a separate survey, researchers showed participants actual ChatGPT transcripts, explaining that they were generated by an LLM. Actually seeing the natural language “skills” of ChatGPT further increased participants’ tendency to anthropomorphize the model. These effects were especially pronounced for frequent LLM users.

Why does anthropomorphism of these technologies come so easily? Is it irresistible, as I’ve suggested, given features of human psychology?


Here are some thoughts:

The article explores the phenomenon of anthropomorphism in Large Language Models (LLMs), where users attribute human-like characteristics to AI systems. This tendency is rooted in human psychology, particularly in our inclination to over-detect agency and our association of communication with agency. Studies have shown that a significant number of people, especially frequent users, attribute human-like characteristics to LLMs, raising concerns about trust, misinformation, and the potential for users to internalize inaccurate information.

The article highlights two key cognitive mechanisms underlying anthropomorphism. Firstly, humans have a tendency to over-detect agency, which may have evolved as an adaptive mechanism to detect potential threats. This is exemplified in a classic psychology study where participants attributed human-like actions to shapes moving on a screen. Secondly, language is seen as a sign of agency, even in preverbal infants, which may explain why LLMs' command of natural language serves as a psychological signal of agency.

The author argues that AI developers have a key responsibility to design systems that mitigate anthropomorphism. This can be achieved through design choices such as using disclaimers or avoiding the use of first-personal pronouns. However, the author also acknowledges that these measures may not be sufficient to override the deep tendencies of the human mind. Therefore, a priority for future research should be to investigate whether good technology design can help us resist the pitfalls of LLM-oriented anthropomorphism.

Ultimately, anthropomorphism is a double-edged sword, making AI systems more relatable and engaging while also risking misinformation and mistrust. By understanding the cognitive mechanisms underlying anthropomorphism, we can develop strategies to mitigate its negative consequences. Future research directions should include investigating effective interventions, exploring the boundaries of anthropomorphism, and developing responsible AI design guidelines that account for anthropomorphism.

Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350

Abstract

AI research is growing rapidly raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.


Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may form unrealistic expectations about how it will respond to us.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Saturday, December 19, 2020

Robots at work: People prefer—and forgive—service robots with perceived feelings

Yam, K. C., Bingman, Y. E., et al.
Journal of Applied Psychology.
Advance online publication.

Abstract

Organizations are increasingly relying on service robots to improve efficiency, but these robots often make mistakes, which can aggravate customers and negatively affect organizations. How can organizations mitigate the frontline impact of these robotic blunders? Drawing from theories of anthropomorphism and mind perception, we propose that people evaluate service robots more positively when they are anthropomorphized and seem more humanlike—capable of both agency (the ability to think) and experience (the ability to feel). We further propose that in the face of robot service failures, increased perceptions of experience should attenuate the negative effects of service failures, whereas increased perceptions of agency should amplify the negative effects of service failures on customer satisfaction. In a field study conducted in the world’s first robot-staffed hotel (Study 1), we find that anthropomorphism generally leads to higher customer satisfaction and that perceived experience, but not agency, mediates this effect. Perceived experience (but not agency) also interacts with robot service failures to predict customer satisfaction such that high levels of perceived experience attenuate the negative impacts of service failures on customer satisfaction. We replicate these results in a lab experiment with a service robot (Study 2). Theoretical and practical implications are discussed.

From Practical Contributions

Second, our findings also suggest that organizations should focus on encouraging perceptions of service robots’ experience rather than agency. For example, when assigning names to robots or programming robots’ voices, a female name and voice could potentially lead to enhanced perceptions of experience more so than a male name and voice (Gray et al., 2007). Likewise, service robots’ programmed scripts should include content that conveys the capacity of experience, such as displaying emotions. Although the emerging service robotic technologies are not perfect and failures are inevitable, encouraging anthropomorphism and, more specifically, perceptions of experience can likely offset the negative effects of robot service failures.