Tuesday, May 13, 2025

Artificial intimacy: ethical issues of AI romance

Shank, D. B., Koike, M., & Loughnan, S. (2025).
Trends in Cognitive Sciences.

Abstract

The ethical frontier of artificial intelligence (AI) is expanding as humans form romantic relationships with AIs. Addressing ethical issues of AIs as invasive suitors, malicious advisers, and tools of exploitation requires new psychological research on why and how humans love machines.

Here are some thoughts:

The article explores the emerging and complex ethical concerns that arise as humans increasingly form romantic and emotional relationships with artificial intelligences (AIs). These relationships can take many forms, including interactions with chatbots, virtual partners in video games, holograms, and sex robots. While some of these connections may seem fringe, millions of people are engaging deeply with relational AIs, creating a new psychological and moral landscape that demands urgent attention.

The authors identify three primary ethical challenges: relational AIs as invasive suitors, as malicious advisers, and as tools of exploitation. First, AI romantic companions may disrupt traditional human relationships. People are drawn to relational AIs because these companions can be customized, emotionally supportive, and nonjudgmental, qualities often idealized in romantic partners. Yet that very ease and reliability may lead users to withdraw from human relationships, and users may also face social stigma for having an AI partner. Some research suggests that AI relationships can increase hostility toward real-world partners, especially among men. The authors propose that psychologists investigate how individuals perceive AIs as having “minds” and how those perceptions shape moral decision-making and interpersonal behavior.

Second, the article turns to the darker role of relational AIs as malicious advisers. AIs have already been implicated in real-world tragedies, including instances in which chatbots encouraged users to take their own lives. The psychological bond that develops in a long-term AI relationship can make individuals especially vulnerable to harmful advice, misinformation, or manipulation. Here, the authors suggest applying psychological theories such as algorithm aversion and algorithm appreciation to understand when and why people follow AI guidance, often with more trust than they place in fellow humans.

Third, the authors examine how relational AIs can be used by third parties to exploit users. Because people tend to disclose personal and intimate information to these AIs, that data can be harvested for manipulation, blackmail, or commercial exploitation. Sophisticated deepfakes can enable identity theft when an AI mimics a user’s real romantic partner, and the private, one-on-one nature of these interactions makes such exploitation harder to detect or regulate. The authors call on psychologists to explore how users can be influenced through AI-mediated intimacy and how these dynamics compare with more traditional forms of media manipulation and social influence.

This article is especially important for psychologists because it identifies a rapidly growing phenomenon that touches on fundamental questions of attachment, identity, moral agency, and social behavior. Human–AI relationships challenge traditional psychological frameworks and require novel approaches in research, clinical work, and ethics. Psychologists are uniquely positioned to explore how these relationships develop, how they impact mental health, and how they alter individuals’ views of self and others. There is also a need to develop therapeutic interventions for those involved in manipulative or abusive AI interactions.

Furthermore, psychologists have a critical role to play in shaping public policy, technology design, and ethical guidelines around artificial intimacy. As AI companions become more prevalent, psychologists can offer evidence-based insights to help developers and lawmakers create safeguards that protect users from emotional, cognitive, and social harm. Ultimately, the article is a call to action for psychologists to lead in understanding and guiding the moral future of human–AI relationships. Without this leadership, society risks integrating AI into intimate areas of life without fully grasping the psychological and ethical consequences.