Earp, B. D., et al. (2025).
arXiv
Abstract
How we should design and interact with so-called “social” artificial intelligence (AI) depends, in part, on the socio-relational role the AI serves to emulate or occupy. In human society, different types of social relationship exist (e.g., teacher-student, parent-child, neighbors, siblings, and so on) and are associated with distinct sets of prescribed (or proscribed) cooperative functions, including hierarchy, care, transaction, and mating. These relationship-specific patterns of prescription and proscription (i.e., “relational norms”) shape our judgments of what is appropriate or inappropriate for each partner within that relationship. Thus, what is considered ethical, trustworthy, or cooperative within one relational context, such as between friends or romantic partners, may not be considered as such within another relational context, such as between strangers, housemates, or work colleagues. Moreover, what is appropriate for one partner within a relationship, such as a boss giving orders to their employee, may not be appropriate for the other relationship partner (i.e., the employee giving orders to their boss) due to the relational norm(s) associated with that dyad in the relevant context (here, hierarchy and transaction in a workplace context). Now that artificially intelligent “agents” and chatbots powered by large language models (LLMs) are increasingly being designed and used to fill certain social roles and relationships that are analogous to those found in human societies (e.g., AI assistant, AI mental health provider, AI tutor, AI “girlfriend” or “boyfriend”), it is imperative to determine whether or how human-human relational norms will, or should, be applied to human-AI relationships.
Here, we systematically examine how AI systems' characteristics that differ from those of humans, such as their likely lack of conscious experience and immunity to fatigue, may affect their ability to fulfill relationship-specific cooperative functions, as well as their ability to (appear to) adhere to corresponding relational norms. We also highlight the "layered" nature of human-AI relationships, wherein a third party (the AI provider) mediates and shapes the interaction. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms—for a wide range of relationship types—will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.
Here are some thoughts:
This article examines how artificial intelligence (AI) systems, particularly those designed to occupy social roles, should interact with humans in ways that are both ethically sound and socially beneficial. Authored by a diverse team of experts from various disciplines, the paper posits that understanding and applying human-human relational norms to human-AI interactions is essential for fostering outcomes that are ethical, trustworthy, and conducive to human well-being. The authors draw upon the Relational Norms model, which identifies four primary cooperative functions in human relationships—care, transaction, hierarchy, and mating—that guide behavior and expectations within different types of relationships, such as parent-child, teacher-student, or romantic partnerships.
As AI systems increasingly occupy social roles traditionally held by humans, such as assistants, tutors, and companions, the paper examines how AI's distinctive characteristics, such as its likely lack of conscious experience and its immunity to fatigue, influence its ability to fulfill these roles and adhere to relational norms. A significant aspect of human-AI relationships highlighted in the document is their "layered" nature, in which a third party—the AI provider—mediates and shapes the interaction. This structure can introduce risks, such as unilateral changes in AI behavior or the monetization of user interactions, which may not align with the user's best interests.
The authors emphasize the importance of transparency in AI design, urging developers to clearly communicate the capabilities, limitations, and data practices of their systems to prevent exploitation and build trust. They also call for adaptive regulatory frameworks that consider the specific relational contexts of AI systems, ensuring user protection and ethical alignment. Users, too, are encouraged to educate themselves about AI and relational norms to engage more effectively and safely with these technologies. The paper concludes by advocating for ongoing interdisciplinary research and collaboration to address the evolving challenges posed by AI in social roles, ensuring that AI systems are developed and governed in ways that respect human values and contribute positively to society.