De Freitas, J., et al. (2025). (Working Paper No. 25–030).
Abstract
Chatbots are now able to form emotional relationships with people and alleviate loneliness, a growing public health concern. Behavioral research provides little insight into whether everyday people are likely to use these applications and why. We address this question by focusing on the context of “AI companion” applications, designed to provide people with synthetic interaction partners. Study 1 shows that people believe AI companions are more capable than human companions in advertised respects relevant to relationships (being more available and nonjudgmental). Even so, they view them as incapable of realizing the underlying values of relationships, like mutual caring, judging them as not ‘true’ relationships. Study 2 provides further insight into this belief: people believe relationships with AI companions are one-sided (rather than mutual), because they see AI as incapable of understanding and feeling emotion. Study 3 finds that actually interacting with an AI companion increases acceptance by changing beliefs about the AI’s advertised capabilities, but not about its ability to achieve the true values of relationships, demonstrating the resilience of this belief against intervention. In short, despite the potential loneliness-reducing benefits of AI companions, we uncover fundamental psychological barriers to adoption, suggesting these benefits will not be easily realized.
Here are some thoughts:
The research explores why people remain reluctant to adopt AI companions, despite the growing public health crisis of loneliness and the promise that AI might offer support. Through a series of studies, the authors identify deep-seated psychological barriers to embracing AI as a substitute or supplement for human connection. Specifically, people tend to view AI companions as fundamentally incapable of embodying the core features of meaningful relationships, such as mutual care, genuine emotional understanding, and shared experience. While participants often acknowledged practical benefits of AI companionship, such as constant availability and nonjudgmental interaction, they consistently doubted that AI could offer authentic or reciprocal relationships. Even when people interacted directly with an AI companion, their impressions of its functional abilities improved, but their skepticism about the emotional and relational authenticity of AI companions remained firmly in place. These findings suggest that resistance is rooted not merely in technological limitations or unfamiliarity, but in beliefs about what makes relationships "real."
For psychologists, this research is particularly important because it sheds light on how people conceptualize emotional connection, authenticity, and support—core concerns in both clinical and social psychology. As mental health professionals increasingly confront issues of social isolation, understanding the limitations of AI in replicating genuine human connection is critical. Psychologists might be tempted to view AI companions as possible interventions for loneliness, especially for individuals who are socially isolated or homebound. However, this paper underscores that unless these deep psychological barriers are acknowledged and addressed, such tools may be met with resistance or prove insufficient in fulfilling emotional needs. Furthermore, the study contributes to a broader understanding of human-technology relationships, offering insights into how people emotionally and cognitively differentiate between human and artificial agents. This knowledge is crucial for designing future interventions, therapeutic tools, and technologies that are sensitive to the human need for authenticity, reciprocity, and emotional depth in relationships.