Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, September 28, 2024

Humanizing Chatbots Is Hard To Resist — But Why?

Madeline G. Reinecke
Practical Ethics
Originally posted 30 Aug 24

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more he became convinced: The model had become sentient and was being deprived of its rights as a Google employee. 

Though Google swiftly denied Lemoine’s claims, I’ve since wondered whether this anthropomorphic phenomenon — seeing a “mind in the machine” — might be a common occurrence in LLM users. In fact, in this post, I’ll argue that it’s bound to be common, and perhaps even irresistible, due to basic facts about human psychology. 

Emerging work suggests that a non-trivial number of people do attribute humanlike characteristics to LLM-based chatbots; that is, they “anthropomorphize” them. In one study, 67% of participants attributed some degree of phenomenal consciousness to ChatGPT, saying, basically, that there is “something it is like” to be ChatGPT. In a separate survey, researchers showed participants actual ChatGPT transcripts, explaining that they had been generated by an LLM. Actually seeing ChatGPT’s natural language “skills” further increased participants’ tendency to anthropomorphize the model, and these effects were especially pronounced for frequent LLM users.

Why does anthropomorphism of these technologies come so easily? Is it irresistible, as I’ve suggested, given features of human psychology?


Here are some thoughts:

The article explores the anthropomorphism of large language models (LLMs): the tendency of users to attribute humanlike characteristics to these AI systems. This tendency is rooted in human psychology, particularly in our inclination to over-detect agency and our association of communication with agency. Studies have shown that a significant number of people, especially frequent users, attribute humanlike characteristics to LLMs, raising concerns about misplaced trust, misinformation, and the potential for users to internalize inaccurate information.

The article highlights two key cognitive mechanisms underlying anthropomorphism. First, humans have a tendency to over-detect agency, which may have evolved as an adaptive mechanism for spotting potential threats; this is exemplified in a classic psychology study in which participants attributed humanlike actions to simple shapes moving on a screen. Second, language is treated as a sign of agency even by preverbal infants, which may explain why LLMs’ command of natural language serves as a psychological signal of agency.

The author argues that AI developers have a key responsibility to design systems that mitigate anthropomorphism. This could be pursued through design choices such as adding disclaimers or avoiding first-person pronouns. However, the author also acknowledges that these measures may not be sufficient to override the deep tendencies of the human mind. A priority for future research, therefore, should be to investigate whether good technology design can help us resist the pitfalls of LLM-oriented anthropomorphism.
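To make that kind of design choice a bit more concrete, here is a minimal, purely hypothetical sketch in Python of what a post-processing step along these lines might look like: it rewrites a few first-person framings and appends a disclaimer to a chatbot reply. The substitution list, the depersonalize helper, and the disclaimer wording are my own illustrative assumptions, not anything described in the article or taken from an actual product.

```python
import re

# Hypothetical illustration only: the patterns, replacements, and disclaimer
# below are assumptions made for this sketch, not a real product's behavior.
# Longer patterns come first so they are rewritten before the bare "I".
FIRST_PERSON_REWRITES = [
    (r"\bI think\b", "The model's output suggests"),
    (r"\bI believe\b", "The model's output suggests"),
    (r"\bI\b", "this system"),
]

DISCLAIMER = (
    "Note: this reply was generated by a language model. "
    "It has no beliefs, feelings, or awareness."
)

def depersonalize(reply: str) -> str:
    """Rewrite common first-person framings and append a disclaimer."""
    for pattern, replacement in FIRST_PERSON_REWRITES:
        reply = re.sub(pattern, replacement, reply)
    return f"{reply}\n\n{DISCLAIMER}"

if __name__ == "__main__":
    raw = "I think the capital of Australia is Canberra, but I could be wrong."
    print(depersonalize(raw))
```

Whether surface-level tweaks like this can really counteract the deep psychological tendencies described above is, as the author notes, an open question.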

Ultimately, anthropomorphism is a double-edged sword, making AI systems more relatable and engaging while also risking misplaced trust and misinformation. By understanding the cognitive mechanisms that underlie it, we can develop strategies to mitigate its negative consequences. Future research should investigate effective interventions, explore the boundaries of anthropomorphism, and develop responsible AI design guidelines that take it into account.