Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 1, 2025

Why Most Resist AI Companions

De Freitas, J., et al. (2025).
(Working Paper No. 25–030).

Abstract

Chatbots are now able to form emotional relationships with people and alleviate loneliness—a growing public health concern. Behavioral research provides little insight into whether everyday people are likely to use these applications and why. We address this question by focusing on the context of “AI companion” applications, designed to provide people with synthetic interaction partners. Study 1 shows that people believe AI companions are more capable than human companions in advertised respects relevant to relationships (being more available and nonjudgmental). Even so, they view them as incapable of realizing the underlying values of relationships, like mutual caring, judging them as not ‘true’ relationships. Study 2 provides further insight into this belief: people believe relationships with AI companions are one-sided (rather than mutual), because they see AI as incapable of understanding and feeling emotion. Study 3 finds that actually interacting with an AI companion increases acceptance by changing beliefs about the AI’s advertised capabilities, but not about its ability to achieve the true values of relationships, demonstrating the resilience of this belief against intervention. In short, despite the potential loneliness-reducing benefits of AI companions, we uncover fundamental psychological barriers to adoption, suggesting these benefits will not be easily realized.

Here are some thoughts:

The research explores why people remain reluctant to adopt AI companions, despite the growing public health crisis of loneliness and the promise that AI might offer support. Through a series of studies, the authors identify deep-seated psychological barriers to embracing AI as a substitute or supplement for human connection. Specifically, people tend to view AI companions as fundamentally incapable of embodying the core features of meaningful relationships—such as mutual care, genuine emotional understanding, and shared experiences. While participants often acknowledged some of the practical benefits of AI companionship, such as constant availability and non-judgmental interaction, they consistently doubted that AI could offer authentic or reciprocal relationships. Even when people interacted directly with AI systems, their impressions of the AI’s functional abilities improved, but their skepticism around the emotional and relational authenticity of AI companions remained firmly in place. These findings suggest that the resistance is not merely technological or unfamiliarity-based, but rooted in beliefs about what makes relationships "real."

For psychologists, this research is particularly important because it sheds light on how people conceptualize emotional connection, authenticity, and support—core concerns in both clinical and social psychology. As mental health professionals increasingly confront issues of social isolation, understanding the limitations of AI in replicating genuine human connection is critical. Psychologists might be tempted to view AI companions as possible interventions for loneliness, especially for individuals who are socially isolated or homebound. However, this paper underscores that unless these deep psychological barriers are acknowledged and addressed, such tools may be met with resistance or prove insufficient in fulfilling emotional needs. Furthermore, the study contributes to a broader understanding of human-technology relationships, offering insights into how people emotionally and cognitively differentiate between human and artificial agents. This knowledge is crucial for designing future interventions, therapeutic tools, and technologies that are sensitive to the human need for authenticity, reciprocity, and emotional depth in relationships.

Monday, March 31, 2025

AI can help people feel heard, but an AI label diminishes this impact

Yin, Y., Jia, N., & Wakslak, C. J. (2024).
PNAS, 121(14).

Abstract

People want to “feel heard” to perceive that they are understood, validated, and valued. Can AI serve the deeply human function of making others feel heard? Our research addresses two fundamental issues: Can AI generate responses that make human recipients feel heard, and how do human recipients react when they believe the response comes from AI? We conducted an experiment and a follow-up study to disentangle the effects of actual source of a message and the presumed source. We found that AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. human). Finally, in a follow-up study where the responses were rated by third-party raters, we found that compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal. Our research underscores the potential and limitations of AI in meeting human psychological needs. These findings suggest that while AI demonstrates enhanced capabilities to provide emotional support, the devaluation of AI responses poses a key challenge for effectively leveraging AI’s capabilities.

Significance

As AI becomes more embedded in daily life, understanding its potential and limitations in meeting human psychological needs becomes more pertinent. Our research explores the fundamental human desire to “feel heard.” It reveals that while AI can generate responses that make people feel heard, individuals feel more heard when they believe a response comes from a fellow human. These findings highlight the potential of AI to augment human capacity for understanding and communication while also raising important conceptual questions about the meaning of being heard, as well as practical questions about how best to leverage AI’s capabilities to support greater human flourishing.

Here are some thoughts:

This study explores whether people can feel heard by AI, examining recipients' perspectives and related perceptions and emotions after receiving a response from AI or a human. It was hypothesized that AI would be better than humans at detecting and understanding emotions. However, the researchers also expected that people would feel less heard by AI due to the perception that AI lacks a mind and cannot think or feel, and because of negative attitudes towards AI.

The study found that AI-generated responses were significantly better at eliciting feelings of being heard: recipients reported feeling more heard, perceived the response as more accurate, and felt more understood by and connected to the responder when the response was generated by AI. However, recipients reported more positive reactions when they believed the response came from a human, demonstrating a devaluation of AI-generated responses.

The researchers conclude that AI can make people feel heard, but this is influenced both by the quality of the response and the perception of the responder. They suggest that as people encounter and use AI more, they may feel more heard by AI, but feelings of connection may still depend on perceiving AI as having a mind.

Sunday, March 30, 2025

Being Human in the Age of Generative AI: Young People’s Ethical Concerns about Writing and Living with Machines.

Higgs, J. M., & Stornaiuolo, A. (2024b).
Reading Research Quarterly.

Abstract

The recent unveiling of chatbots such as ChatGPT has catalyzed vigorous debates about generative AI's impact on how learners read, write, and communicate. Largely missing from these debates is careful consideration of how young people are experiencing AI in their everyday lives and how they are making sense of the questions that these rapidly evolving cultural tools raise about ethics, power, and social participation. Engaging cultural-historical perspectives on technology, the present study drew on student survey and focus group data from English language arts classes in two culturally and linguistically diverse high schools to answer the following questions: (1) How are young people using AI in their everyday lives, if at all?; (2) What do young people identify as key considerations related to AI-mediated writing?; and (3) What ethical and critical considerations, if any, inform young people's sensemaking of and practices with AI? Young people reported using generative AI for diverse purposes in and out of school, including to accomplish routine organizational and information tasks, to entertain themselves through experimenting with AI technologies, and to catalyze their thinking and writing processes. Survey and focus group participants' responses suggested their regular navigation of ethical and critical dimensions of AI use and their contemplation of what it means to be human through and with advancing technologies. Young people also reported a lack of opportunity to examine AI practices and perspectives in school, suggesting the important role schools can play in supporting youths' development of AI ethics.

Here are some thoughts:

This study explored high school students' use and understanding of artificial intelligence (AI) writing tools. Researchers collected survey and focus group data from English language arts classes in two high schools. They found that students use AI tools for various purposes in and out of school, including completing tasks, exploring AI technologies, and enhancing their thinking and writing. Students also reported regularly considering the ethical and critical implications of AI use, such as the impact on authenticity, creativity, and what it means to be human. The study suggests that schools have an important role in helping students develop AI ethics.

Saturday, March 29, 2025

Why should humanities education persist in an AI age?

Johannes Steizinger
The Conversation
Originally published 3 Feb 25

Since the launch of ChatGPT in November 2022, the use of artificial intelligence (AI) chatbots has become rampant among students in higher education.

While some might be ambivalent about the impact of generative AI on higher education, many instructors in the humanities scramble to adapt their classes to the new reality and have declared a crisis of their teaching model.

Professors and students alike argue that unrestricted use of generative AI threatens the purpose of an education in disciplines like philosophy, history or literature. They say that, as a society, we should care about this loss of intellectual competencies.

But why is it important that traditional learning not become obsolete — as some predict?

Today, when corrupt leaders promote AI development, AI reflects repressive political biases, and there are serious concerns about AI disinformation, it is critical to reconsider the original purpose of modern universities.

I consider this question as a historian of philosophy who has examined how modern ideas have intersected with democratic and fascist societies.


Here are some thoughts:

The article argues that while AI excels at technical and data-driven tasks, it cannot replicate the deeply human skills fostered by the humanities, such as critical thinking, ethical reasoning, creativity, and self-reflection. These skills are not only essential for navigating the complexities of an AI-driven society but also align closely with the core concerns of psychology, making the article particularly important for psychologists.

One of the key takeaways for psychologists is the article’s emphasis on human-centric skills, such as empathy, emotional intelligence, and ethical reasoning. These are areas where AI falls short, and they are also foundational to psychological practice. Whether in therapy, research, or education, psychologists rely on these skills to understand and support human behavior and well-being. The article’s focus on the humanities as a means of developing these abilities reinforces their importance in both personal and professional contexts. Additionally, as AI becomes more integrated into society, psychologists are increasingly called upon to address ethical dilemmas related to its use, such as algorithmic bias, privacy concerns, and the psychological impact of AI on individuals. The humanities provide a valuable framework for exploring these ethical questions, which aligns with the ethical responsibilities of psychologists.

The article also highlights the role of humanities education in fostering emotional intelligence and well-being, which are central to mental health. In a world where AI may dehumanize certain interactions, the ability to connect with others on an emotional level becomes even more critical. Psychologists can draw on this perspective to advocate for educational approaches that prioritize emotional and social learning, ensuring that individuals are equipped to thrive in an AI-driven world. Furthermore, the article bridges the gap between technology and the humanities, suggesting that interdisciplinary collaboration is essential for addressing modern challenges. This resonates with the work of psychologists, who often operate at the intersection of multiple fields, integrating insights from the humanities, social sciences, and technology to better understand and address human behavior.

Another important aspect of the article is its discussion of the future of work. As AI automates many technical tasks, the demand for uniquely human skills—such as creativity, critical thinking, and interpersonal communication—will grow. Psychologists can play a key role in helping individuals and organizations adapt to these changes, and the article provides a strong rationale for why humanities education is a vital part of this preparation. Finally, the article emphasizes the role of the humanities in fostering self-development and a deeper understanding of one’s identity and purpose. This aligns with psychological theories of self-actualization and personal growth, making it particularly relevant for psychologists who work in areas like counseling, coaching, and personal development.

Friday, March 28, 2025

Simulating 500 million years of evolution with a language model

Hayes, T., Rao, R., et al. (2025).
Science.

Abstract

More than three billion years of evolution have produced an image of biology encoded into the space of natural proteins. Here we show that language models trained at scale on evolutionary data can generate functional proteins that are far away from known proteins. We present ESM3, a frontier multimodal generative language model that reasons over the sequence, structure, and function of proteins. ESM3 can follow complex prompts combining its modalities and is highly responsive to alignment to improve its fidelity. We have prompted ESM3 to generate fluorescent proteins. Among the generations that we synthesized, we found a bright fluorescent protein at a far distance (58% sequence identity) from known fluorescent proteins, which we estimate is equivalent to simulating five hundred million years of evolution.


Here are some thoughts:

A groundbreaking advancement in evolutionary biology and artificial intelligence has emerged with the development of ESM3, a cutting-edge multimodal generative language model capable of simulating the evolution of proteins over hundreds of millions of years. ESM3 leverages principles of language modeling to reason across the sequence, structure, and function of proteins, enabling the creation of novel proteins with unprecedented diversity and functionality. This innovation is built on scalable architecture, utilizing 98 billion parameters trained on billions of protein sequences and structures. Through this extensive training, ESM3 generates proteins that align with complex biological prompts, uncovering regions of protein design previously unexplored by natural evolution.

Among its remarkable achievements, ESM3 successfully created a novel fluorescent protein named esmGFP, which is evolutionarily distinct from known proteins, effectively simulating over 500 million years of natural evolutionary progress. Using token-based training, ESM3 predicts and generates protein sequences and structures with extraordinary fidelity to natural patterns. The model’s iterative fine-tuning process enhances its biological alignment, improving its ability to solve intricate design challenges such as ligand binding and tertiary coordination tasks. Moreover, ESM3 enables programmable control, offering scientists the ability to design proteins with specified traits, such as fluorescence, while maintaining their functional integrity.

This innovative approach holds transformative potential for biotechnology, facilitating the rapid design of proteins for applications ranging from medicine to materials science. ESM3’s ability to simulate and surpass the constraints of natural evolution marks a new frontier in computational biology, driven by the synergy of artificial intelligence and evolutionary science. By unlocking new possibilities in protein design, ESM3 is poised to redefine the boundaries of what is achievable in both theoretical and applied biosciences.
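
For readers curious what "prompting" a model like ESM3 looks like in practice, here is a minimal, hypothetical sketch. It is not taken from the paper; it assumes EvolutionaryScale's publicly released esm Python SDK and its small open ESM3 checkpoint, and the checkpoint name, toy prompt sequence, and generation settings are illustrative assumptions rather than details from the study.

# Hypothetical sketch: partially mask a protein sequence and ask an
# ESM3-style model to design the missing residues, then predict a structure.
# Assumes the open-source `esm` SDK from EvolutionaryScale is installed
# and its open ESM3 weights are available; specifics here are illustrative.
from esm.models.esm3 import ESM3
from esm.sdk.api import ESMProtein, GenerationConfig

# Load the small open ESM3 checkpoint (checkpoint name is an assumption).
model = ESM3.from_pretrained("esm3_sm_open_v1").to("cpu")  # or "cuda"

# Underscores mark masked positions the model is asked to fill in;
# the surrounding residues act as the prompt constraining generation.
prompt = "MKT" + "_" * 60 + "HDFPIEL"  # toy fragment, not a real protein
protein = ESMProtein(sequence=prompt)

# Iteratively unmask the sequence track, then predict a structure for it.
protein = model.generate(
    protein, GenerationConfig(track="sequence", num_steps=8, temperature=0.7)
)
protein = model.generate(protein, GenerationConfig(track="structure", num_steps=8))

protein.to_pdb("./generated_protein.pdb")  # write predicted coordinates
print(protein.sequence)

Designing something like esmGFP involved far richer structure and function prompts plus laboratory synthesis and screening, none of which this toy example captures.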

Thursday, March 27, 2025

How Moral Case Deliberation Supports Good Clinical Decision Making

Inguaggiato, G., et al. (2019).
AMA Journal of Ethics, 21(10),
E913-919.

Abstract

In clinical decision making, facts are presented and discussed, preferably in the context of both evidence-based medicine and patients’ values. Because clinicians’ values also have a role in determining the best courses of action, we argue that reflecting on both patients’ and professionals’ values fosters good clinical decision making, particularly in situations of moral uncertainty. Moral case deliberation, a form of clinical ethics support, can help elucidate stakeholders’ values and how they influence interpretation of facts. This article demonstrates how this approach can help clarify values and contribute to good clinical decision making through a case example.

Here are some thoughts:

This article discusses how moral case deliberation (MCD) supports good clinical decision-making. It argues that while evidence-based medicine and patient values are crucial, clinicians' values also play a significant role, especially in morally uncertain situations. MCD, a form of clinical ethics support, helps clarify the values of all stakeholders and how these values influence the interpretation of facts. The article explains how MCD differs from shared decision-making, emphasizing its focus on ethical dilemmas and understanding moral uncertainty among caregivers rather than reaching a shared decision with the patient. Through dialogue and a structured approach, MCD facilitates a deeper understanding of the situation, leading to better-informed and morally sensitive clinical decisions. The article uses a case study from a neonatal intensive care unit to illustrate how MCD can help resolve disagreements and uncertainties by exploring the different values held by nurses and physicians.

Wednesday, March 26, 2025

Surviving and thriving in spite of hate: Burnout and resiliency in clinicians working with patients attracted by violent extremism

Rousseau, C., et al. (2025).
American Journal of Orthopsychiatry.
Advance online publication.
https://doi.org/10.1037/ort0000832

Abstract

Violent extremism (VE) is often manifested through hate discourses, which are hurtful for their targets, shatter social cohesion, and provoke feelings of impending threat. In a clinical setting, these discourses may affect clinicians in different ways, eroding their capacity to provide care. This clinical article describes the subjective experiences and the coping strategies of clinicians engaged with individuals attracted by VE. A focus group was held with eight clinicians and complemented with individual interviews and field notes. Clinicians reported four categories of personal consequences. First, results show that the effect of massive exposure to hate discourses is associated with somatic manifestations and with the subjective impression of being dirty. Second, clinicians endorse a wide range of work-related affects, ranging from intense fear, anger, and irritation to sadness and numbing. Third, they perceive that their work has relational consequences on their families and friends. Last, clinicians also describe that their work transforms their vision of the world. In terms of coping strategies, team relations and a community of practice were identified as supportive. With time, the pervasive uncertainty, the relative lack of institutional support, and the work-related emotional burden are associated with disengagement and burnout, in particular in practitioners working full-time with this clientele. Working with clients attracted to or engaged in VE is very demanding for clinicians. To mitigate the emotional burden of being frequently confronted with hate and threats, team relations, decreasing clinical exposure, and avoiding heroic positions help prevent burnout.


Here are some thoughts:

The article explores the psychological impact on clinicians treating individuals drawn to violent extremism (VE). It documents how prolonged exposure to hate discourse can lead to somatic symptoms (e.g., nausea, headaches), emotional exhaustion, hypervigilance, and a sense of being "contaminated" by hate. Clinicians reported struggling with moral dilemmas, fearing responsibility if a patient acts violently, and experiencing disruptions in their personal relationships.

Despite these challenges, team support, supervision, humor, and structured work boundaries were identified as critical resilience factors. The study highlights the need for institutional backing and clinician training to manage moral distress, avoid burnout, and sustain ethical engagement with patients who espouse extremist views.

Tuesday, March 25, 2025

Reasoning and empathy are not competing but complementary features of altruism

Law, K. F., et al. (2025, February 8).
PsyArXiv

Abstract

Humans can care about distant strangers, an adaptive advantage that enables our species to cooperate in increasingly large-scale groups. Theoretical frameworks accounting for an expansive moral circle and altruistic behavior are often framed as a dichotomy between competing pathways of emotion-driven empathy versus logic-driven reasoning. Here, in a pre-registered investigation comparing variations in empathy and reasoning capacities across different exceptionally altruistic populations –– effective altruists (EAs) who aim to maximize welfare gains with their charitable contributions (N = 119) and extraordinary altruists (XAs) who have donated organs to strangers (N = 65) –– alongside a third sample of demographically-similar general population controls (N = 176), we assess how both capacities contribute to altruistic behaviors that transcend conventional parochial boundaries. We find that, while EAs generally manifest heightened reasoning ability and XAs heightened empathic ability, both empathy and reasoning independently predict greater engagement in equitable and effective altruism on laboratory measures and behavioral tasks. Interaction effects suggest combining empathy and reasoning often yields the strongest willingness to prioritize welfare impartially and maximize impact. These results highlight complementary roles for empathy and reasoning in overcoming biases that constrain altruism, supporting a unified framework for expansive altruism and challenging the empathy-reasoning dichotomy in existing theory.


Here are some thoughts:

This research challenges the traditional dichotomy between empathy and reasoning in altruistic behavior. Rather than viewing them as opposing forces, the study argues that both cognitive and emotional capacities contribute independently to altruistic actions that transcend parochial biases. To explore this, the researchers examined three groups: Effective Altruists (EAs), who emphasize reasoned decision-making to maximize the welfare impact of their charitable actions; Extraordinary Altruists (XAs), who have demonstrated extreme altruism by donating organs to strangers; and a demographically similar general population control group.

The findings reveal that EAs tend to exhibit stronger reasoning abilities, while XAs demonstrate heightened empathy. However, both cognitive and emotional capacities play crucial roles in fostering altruism that prioritizes impartial welfare and maximizes impact. This challenges the prevailing notion that empathy is inherently biased and ineffective in promoting broad, equitable altruism. Instead, the study suggests that empathy, when cultivated, can complement reasoning to enhance prosocial motivation. Furthermore, while XAs engage in altruistic behavior primarily driven by emotional responses, EAs rely more on deliberative reasoning. Despite these differences, both groups demonstrate a commitment to helping distant others, suggesting that there are distinct but overlapping pathways to altruism.

For psychologists and other mental health professionals, these findings have significant implications. Understanding the cognitive and emotional foundations of altruism can inform therapeutic interventions aimed at fostering prosocial behavior in individuals who struggle with social engagement, such as those with psychopathy or social anhedonia. Additionally, the research challenges assumptions about empathy, showing that it can be expanded beyond parochial biases, which is particularly relevant for training programs that aim to develop empathy in clinicians, social workers, and caregivers. The study also contributes to broader ethical and moral discussions about how to encourage compassionate and rational decision-making in fields such as healthcare, philanthropy, and policymaking. Ultimately, this research highlights the importance of integrating both empathy and reasoning in efforts to promote altruism, offering valuable insights for psychology, psychotherapy, and social work.

Monday, March 24, 2025

Relational Norms for Human-AI Cooperation

Earp, B. D., et al. (2025).
arXiv.

Abstract

How we should design and interact with so-called “social” artificial intelligence (AI) depends, in part, on the socio-relational role the AI serves to emulate or occupy. In human society, different types of social relationship exist (e.g., teacher-student, parent-child, neighbors, siblings, and so on) and are associated with distinct sets of prescribed (or proscribed) cooperative functions, including hierarchy, care, transaction, and mating. These relationship-specific patterns of prescription and proscription (i.e., “relational norms”) shape our judgments of what is appropriate or inappropriate for each partner within that relationship. Thus, what is considered ethical, trustworthy, or cooperative within one relational context, such as between friends or romantic partners, may not be considered as such within another relational context, such as between strangers, housemates, or work colleagues. Moreover, what is appropriate for one partner within a relationship, such as a boss giving orders to their employee, may not be appropriate for the other relationship partner (i.e., the employee giving orders to their boss) due to the relational norm(s) associated with that dyad in the relevant context (here, hierarchy and transaction in a workplace context). Now that artificially intelligent “agents” and chatbots powered by large language models (LLMs), are increasingly being designed and used to fill certain social roles and relationships that are analogous to those found in human societies (e.g., AI assistant, AI mental health provider, AI tutor, AI “girlfriend” or “boyfriend”), it is imperative to determine whether or how human-human relational norms will, or should, be applied to human-AI relationships. Here, we systematically examine how AI systems' characteristics that differ from those of humans, such as their likely lack of conscious experience and immunity to fatigue, may affect their ability to fulfill relationship-specific cooperative functions, as well as their ability to (appear to) adhere to corresponding relational norms. We also highlight the "layered" nature of human-AI relationships, wherein a third party (the AI provider) mediates and shapes the interaction. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms—for a wide range of relationship types—will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

Here are some thoughts:

This article details the intricate dynamics of how artificial intelligence (AI) systems, particularly those designed to mimic social roles, should interact with humans in a manner that is both ethically sound and socially beneficial. Authored by a diverse team of experts from various disciplines, the paper posits that understanding and applying human-human relational norms to human-AI interactions is essential for fostering ethical, trustworthy, and advantageous outcomes. The authors draw upon the Relational Norms model, which identifies four primary cooperative functions in human relationships—care, transaction, hierarchy, and mating—that guide behavior and expectations within different types of relationships, such as parent-child, teacher-student, or romantic partnerships.

As AI systems increasingly occupy social roles traditionally held by humans, such as assistants, tutors, and companions, the paper examines how AI's unique characteristics, such as the lack of consciousness and immunity to fatigue, influence their ability to fulfill these roles and adhere to relational norms. A significant aspect of human-AI relationships highlighted in the document is their "layered" nature, where a third party—the AI provider—mediates and shapes the interaction. This structure can introduce risks, such as changes in AI behavior or the monetization of user interactions, which may not align with the user's best interests.

The authors emphasize the importance of transparency in AI design, urging developers to clearly communicate the capabilities, limitations, and data practices of their systems to prevent exploitation and build trust. They also call for adaptive regulatory frameworks that consider the specific relational contexts of AI systems, ensuring user protection and ethical alignment. Users, too, are encouraged to educate themselves about AI and relational norms to engage more effectively and safely with these technologies. The paper concludes by advocating for ongoing interdisciplinary research and collaboration to address the evolving challenges posed by AI in social roles, ensuring that AI systems are developed and governed in ways that respect human values and contribute positively to society.