Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, June 15, 2024

Folk psychological attributions of consciousness to large language models

Colombatto, C., & Fleming, S. M. (2024).
Neuroscience of Consciousness, 2024(1)


Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations ('phenomenal consciousness'). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality, but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions, with potential implications for the legal and ethical status of AI.


In summary, our investigation of folk psychological attributions of consciousness revealed that most people are willing to attribute some form of phenomenality to LLMs: only a third of our sample thought that ChatGPT definitely did not have subjective experience, while two-thirds of our sample thought that ChatGPT had varying degrees of phenomenal consciousness. The relatively high rates of consciousness attributions in this sample are somewhat surprising, given that experts in neuroscience and consciousness science currently estimate that LLMs are highly unlikely to be conscious (Butlin et al. 2023, LeDoux et al. 2023). These findings thus highlight a discrepancy between folk intuitions and expert opinions on artificial consciousness, with significant implications for the ethical, legal, and moral status of AI.


The paper examines how people attribute consciousness and mental states to large language models (LLMs) such as ChatGPT, based on folk psychological intuitions. Folk psychology refers to how laypeople reason about minds and mental states from observable behavior.

Attributions of Consciousness

In a survey of US residents (n = 300), participants rated the extent to which they attributed various mental states to ChatGPT. A majority attributed some possibility of phenomenal consciousness, including the capacity to experience emotions, hold beliefs, and understand language. Attributions were lower, however, for more advanced capacities such as self-awareness and intentionality.

Factors Influencing Attributions

Attributions increased the more the LLM's responses appeared coherent, thoughtful, and human-like. They decreased when participants were reminded that the LLM is an artificial system without subjective experience. Individuals' prior beliefs about machine consciousness and how frequently they use AI also shaped attributions.


The findings reveal a tendency for people to anthropomorphize sophisticated language models, over-attributing mental capacities to them on the basis of surface behavior. This has implications for managing public expectations around AI capabilities and for the risk of deception. It also highlights the need for transparency about the cognitive architectures actually underlying LLMs, to help mitigate such misunderstandings.

In summary, the research demonstrates how folk psychology leads people to project human-like mental states onto LLMs in ways that may not accurately reflect the systems' true capabilities or nature.