Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, March 9, 2025

Digital Mirrors: AI Companions and the Self

Kouros, T., & Papa, V. (2024).
Societies, 14(10), 200.

Abstract

This exploratory study examines the socio-technical dynamics of Artificial Intelligence Companions (AICs), focusing on user interactions with AI platforms like Replika 9.35.1. Through qualitative analysis, including user interviews and digital ethnography, we explored the nuanced roles played by these AIs in social interactions. Findings revealed that users often form emotional attachments to their AICs, viewing them as empathetic and supportive, thus enhancing emotional well-being. This study highlights how AI companions provide a safe space for self-expression and identity exploration, often without fear of judgment, offering a backstage setting in Goffmanian terms. This research contributes to the discourse on AI’s societal integration, emphasizing how, in interactions with AICs, users often craft and experiment with their identities by acting in ways they would avoid in face-to-face or human-human online interactions due to fear of judgment. This reflects front-stage behavior, in which users manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing deeper aspects of the self.

Here are some thoughts:

The article investigates how users interact with Artificial Intelligence Companions (AICs) like Replika, focusing on self-presentation, emotional well-being, and identity exploration. Through qualitative methods such as interviews and digital ethnography, the study reveals that users often form deep emotional bonds with AICs, viewing them as empathetic and supportive companions. These interactions provide a judgment-free space for self-expression, particularly for those experiencing loneliness or social isolation. However, this emotional dependency raises concerns about the long-term implications of substituting human connections with AI. Additionally, AICs serve as a "backstage" space where users feel safe to experiment with different aspects of their identity, presenting idealized versions of themselves or engaging in role-playing. While users appreciate the AI's human-like responses, some remain aware of its artificial nature, leading to mixed feelings about the authenticity of these relationships.

Despite the benefits, the study highlights significant ethical and privacy concerns. Users worry about how their data is used and seek greater transparency from AI developers. The research underscores the need for robust ethical frameworks to ensure AI technologies enhance emotional well-being without compromising personal integrity or societal values. By balancing the advantages of AI companionship with awareness of its limitations, the study contributes to the broader discourse on human-AI interactions, emphasizing the importance of responsible AI integration into daily life.

Saturday, March 8, 2025

The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?

Demichelis, R. (2024, November 19).
arXiv.org.

This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks), not only in terms of techniques but also in our interactions with machines. It also addresses the philosophical tradition of hermeneutics (Don Ihde, Wilhelm Dilthey) to highlight a parallel with this movement and to demystify the idea of human-like AI.


Here are some thoughts:

This paper examines how modern AI systems, like ChatGPT, have evolved from simply executing commands to interpreting ambiguous human language. The paper draws on the tradition of hermeneutics to argue that while AI can mimic interpretation through data processing, it lacks the genuine understanding and imaginative insight characteristic of human cognition. This mechanical approximation of interpretation raises important concerns regarding transparency, bias, and ethical oversight, prompting a reevaluation of how we define knowledge and meaning in the age of AI.

Friday, March 7, 2025

Genomics yields biological and phenotypic insights into bipolar disorder

O’Connell, K. S., et al. (2025).
Nature.

Abstract

Bipolar disorder is a leading contributor to the global burden of disease. Despite high heritability (60–80%), the majority of the underlying genetic determinants remain unknown. We analysed data from participants of European, East Asian, African American and Latino ancestries (n = 158,036 cases with bipolar disorder, 2.8 million controls), combining clinical, community and self-reported samples. We identified 298 genome-wide significant loci in the multi-ancestry meta-analysis, a fourfold increase over previous findings, and identified an ancestry-specific association in the East Asian cohort. Integrating results from fine-mapping and other variant-to-gene mapping approaches identified 36 credible genes in the aetiology of bipolar disorder. Genes prioritized through fine-mapping were enriched for ultra-rare damaging missense and protein-truncating variations in cases with bipolar disorder, highlighting convergence of common and rare variant signals. We report differences in the genetic architecture of bipolar disorder depending on the source of patient ascertainment and on bipolar disorder subtype (type I or type II). Several analyses implicate specific cell types in the pathophysiology of bipolar disorder, including GABAergic interneurons and medium spiny neurons. Together, these analyses provide additional insights into the genetic architecture and biological underpinnings of bipolar disorder.

Here are some thoughts:

The recent genomic study on bipolar disorder (BD) provides groundbreaking insights into its genetic architecture and biological mechanisms. By analyzing data from over 158,000 BD cases across diverse ancestries, researchers identified 298 genome-wide significant loci, marking a fourfold increase from previous findings. The study highlights distinct genetic variations associated with BD subtypes, such as bipolar I and II, and underscores the importance of GABAergic interneurons and medium spiny neurons in BD pathophysiology. Furthermore, ancestry-specific analyses reveal unique genetic contributions in East Asian populations, emphasizing the need for inclusivity in genomic research. These findings not only advance our understanding of BD but also pave the way for targeted therapies and precision medicine, offering hope for improved treatment outcomes. This landmark research underscores the value of integrating diverse genetic data to unravel complex psychiatric disorders.

Thursday, March 6, 2025

Beyond Algorithms: The Irreplaceable Human in Psychological Care

John Gavazzi
The Pennsylvania Psychologist
(2025). Advance online publication.

Abstract

The rapid advancement of artificial intelligence (AI) has raised concerns about its potential to replace professional roles, including psychology. While AI demonstrates exceptional capabilities in diagnostics and pattern recognition, this article argues that psychological care remains fundamentally human. AI can assist with assessments and administrative tasks, but it lacks genuine emotional understanding, empathy, and the ability to form meaningful therapeutic relationships. Drawing on evolutionary perspectives, attachment theory, and neurobiological research, the article highlights the irreplaceable role of human connection in psychotherapy. It introduces the concept of "nostalgia jobs" to explain why professions like psychology, which embody cultural and emotional significance, resist full technological automation. Ultimately, the future of psychological practice lies in a collaborative model, integrating AI as a supportive tool while preserving the essential human core of therapeutic intervention.

Wednesday, March 5, 2025

Conjuring the End: Techno-eschatology and the Power of Prophecy

Elke Schwarz
Opinio Juris
Originally posted 30 Jan 25

Here is an excerpt:

In theology, eschatology is the study of the last things. In Judeo-Christian eschatology, the last things are usually four: death, judgement, heaven and hell. Throughout the centuries and across different cultures, ideas about how the four last things play out, who holds the knowledge about these aspects and what the “after” constitutes are diverse and have changed over time. Traditionally, knowledge about the end was revealed knowledge – an idea that is intrinsic to Christian conceptions of apocalypse. In modernity, this knowledge was produced, no longer revealed. For this, modern probability theory was crucial and with this, techno-eschatology can be situated more clearly. 

Techno-eschatology refers to the entanglement of technological visions and ideas of reality that are bound up with religious ideations about human transcendence, visions of judgement and salvation. In the technological variant, the eschaton comprises both revelation and renewal as it pertains to the individual and to humanity at large in one or more ways (as I show in more detail elsewhere). The crucial point, however, is the interplay between technology and the production of knowledge about reality and in particular, future-oriented reality. Techno-eschatology has a longer lineage which David Noble expertly draws out in his seminal work The Religion of Technology, published in 1999. In this text he clearly identifies the role technology plays in shaping narratives of eschatology and the associated production of knowledge needed for these shifting ideas throughout the centuries and decades. It is a long history, like all histories, filled with nuance and detail, but one constant remains: those who could credibly claim that they hold the key to some secret knowledge about humanity’s inevitable future were those that held the greater political power and exerted a significant sway. This is the same today and those with vested financial interests understand that techno-eschatological narratives hold enormous sway. 

The point is not that eschatology, or indeed techno-eschatology must be coherent to be effective. Quite the contrary. The inherent ambiguity of the current techno-eschatological discourse opens a space for belief-making, drawing a greater number of people into a closed system that offers the illusion of provenance, order and some sense of a hopeful future. Those that claim to have discovered secret knowledge are those that are able to direct these futures. 


Here are some thoughts:

This article presents a unique take on the emergence and possible function of AI technologies. The essay explores the intersection of artificial intelligence (AI) and humanity's fascination with apocalyptic narratives. It argues that the discourse surrounding AI often mirrors religious or prophetic language, framing technological advancements as both savior and destroyer. This "techno-eschatology" reflects deep-seated cultural anxieties about the unknown and the potential for AI to disrupt societal norms, ethics, and even existence itself. The piece suggests that this framing is not merely descriptive but performative, shaping how we perceive and interact with AI. By invoking apocalyptic imagery, we risk amplifying fear and misunderstanding, potentially hindering thoughtful, ethical development of AI technologies. The article calls for a more nuanced, grounded approach to AI discourse, one that moves away from sensationalism and toward constructive dialogue about its real-world implications. This perspective is particularly relevant for professionals navigating the ethical and societal impacts of AI, urging a shift from prophecy to pragmatism.

Tuesday, March 4, 2025

The Multidimensionality of moral identity – toward a broad characterization of the moral self

Tissot, T. T., et al. (2025).
Ethics & Behavior, 1–23.

Abstract

The present study explored the multidimensionality of moral identity. In four studies (N = 1,159), we compiled a comprehensive list of moral traits, analyzed their factorial structure, and established relationships between the factorial dimensions and outcome variables. The resulting dimensions are Connectedness, Truthfulness, Care, and Righteousness. To examine relations to personality traits and pro- and antisocial inclinations we developed a new instrument, the Moral Identity Profile (MIP). Our results show distinctive relationships for the four dimensions, which challenge previous unidimensional conceptualizations of moral identity. We discuss implications, limitations, and how our conceptualization reaffirms the social aspect of morality.

The article is paywalled and there is no pdf available online. :(

Please contact the author for a copy.

Here are some thoughts:

This study challenges traditional views of moral identity, emphasizing its deeply social nature rather than framing it solely through moral dilemmas or purely cognitive moral reasoning skills. Analyzing data from 1,159 participants, researchers identified four key dimensions of moral identity—Connectedness, Truthfulness, Care, and Righteousness—each reflecting how individuals integrate morality into their relationships and communities. This multidimensional perspective shifts away from abstract reasoning and instead highlights the ways in which moral identity is shaped through social interactions, emotional bonds, and shared values. To advance research in this area, the team developed the Moral Identity Profile (MIP), a tool designed to assess how these dimensions manifest in social contexts. By acknowledging the inherently relational aspects of morality, this work offers fresh insights into how moral identity influences interpersonal behavior, fosters social cohesion, and shapes ethical engagement within communities.

Monday, March 3, 2025

Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-life Romance

Wang, W., & Toscano, M. (2024).
Institute for Family Studies

Introduction

When it comes to how artificial intelligence (AI) will affect our lives, the response from industry insiders, as well as the public, ranges from a sense of impending doom to heraldry. We do not yet understand the long-term trajectory of AI and how it will change society. Something, indeed, is happening to us—and we all know it. But what?

Gen Zers and Millennials are the most active users of generative AI. Many of them, it appears, are turning to AI for companionship. “We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers,” Melissa Heikkilä wrote in MIT Technology Review. After analyzing 1 million ChatGPT interaction logs, a group of researchers found that “sexual role-playing” was the second most prevalent use, following only the category of “creative composition.” The Psychologist bot, a popular simulated therapist on Character.AI—where users can design their own “friends”—has received “more than 95 million messages from users since it was created.”

According to a new Institute for Family Studies/YouGov survey of 2,000 adults under age 40, 1% of young Americans claim to already have an AI friend, yet 10% are open to an AI friendship. And among young adults who are not married or cohabiting, 7% are open to the idea of romantic partnership with AI. A much higher share (25%) of young adults believe that AI has the potential to replace real-life romantic relationships.

Furthermore, heavy porn users are the most open to romantic relationships with AI of any group and are also the most open to AI friendships in general. In addition to AI and relationships, the new IFS survey also asked young Americans how they feel about the changes AI technology may bring to society. We find that their reactions to AI are divided. About half of young adults under age 40 (55%) view AI technology as either threatening or concerning, while 45% view it as either intriguing or exciting.

There are complex socio-economic findings, too, with young adults with lower income and less education being more likely than those with higher incomes and more education to fear how AI will affect society. At the same time, this group is more likely than their fellow Americans who are better off to be open to a romance with AI.

Here are some thoughts:

The Institute for Family Studies recently conducted a survey exploring young adults' attitudes towards AI and relationships. The study, which involved 2,000 adults aged 18-39 in the U.S., reveals some intriguing trends. While most young adults are not yet comfortable with the idea of AI companions, a small but notable portion is open to the concept. About 10% of respondents are receptive to having an AI friend, with 1% already claiming to have one. Among single young adults, 7% are open to the idea of an AI romantic partner.

Interestingly, a quarter of young adults believe that AI could potentially replace real-life romantic relationships in the future. The study found several demographic factors influencing these views. Men, liberals, and those who spend more time online tend to be more open to AI friendships. Additionally, young adults with lower incomes and less education are more likely to fear AI's societal impact but are also more open to AI romance.

The survey also revealed a correlation between pornography use and openness to AI relationships. Heavy porn users are the most receptive to both AI friendships and romantic partnerships. In fact, 35% of heavy porn users believe AI partners could replace real-life romance, compared to only 20% of those who rarely watch porn.

Overall, young adults are divided on AI's future impact, with slightly more than half viewing it as threatening or concerning. The study raises questions about a potential class divide in future relationships, as lower-income and less-educated young adults are more likely to view AI as a destructive force but are also more open to AI romance. These findings suggest a complex and evolving landscape of human-AI interactions in the realm of relationships and companionship.

Sunday, March 2, 2025

Multiple dimensions of immorality

Reid, A., & Happaney, K. (2024).
Ethics & Behavior, 1–21.

Abstract

We conducted a four-part study to map out the conceptual space of a diverse set of immoral items, including those that are extreme and/or intergroup (e.g. child sex abuse, genocide, slavery), with the goal of identifying attributes spontaneously used in moral judgment. In Part 1, we identified 56 immoral items. In Part 2, participants completed a similarity-based card sort task of the 56 immoral items. Multidimensional scaling (MDS) indicated that three-dimensional space was needed to capture the perceived differences among the items. In Part 3, regression analysis indicated that perceived similarity among the immoral items related to their commonness, objectivity, forgivability, and legality. In Part 4, regression analysis indicated that the configuration of immoral items corresponded to the amount of anger and disgust the items elicited and items’ perceived harmfulness. We attempt to synthesize these results and answer questions about the roles of anger, disgust, and harm in moral judgment.

The research is paywalled and a pdf is not available online. :(

The author kindly sent me a copy of the research.

Here are some thoughts:

This study delves into the intricate dimensions of immorality through a comprehensive four-part research approach. The study aimed to map the conceptual space of moral transgressions, including extreme examples such as child sex abuse, genocide, and slavery. Employing a unique methodology, the researchers used an unrestrained similarity-based card sort task combined with multidimensional scaling (MDS) to capture participants' intuitive moral judgments. By analyzing 56 immoral items, they identified three key dimensions necessary to capture the perceived differences among these transgressions. The findings revealed that the perceived similarity of immoral items was linked to critical attributes such as commonness, objectivity, forgivability, and legality. Additionally, the study found that the configuration of these items closely correlated with the levels of anger and disgust they elicited (moral emotions) as well as their perceived harmfulness (harm appraisal). This research aligns with the intuitionist approach to moral psychology, which posits that moral judgments are made quickly and intuitively, with justification following the initial judgment. Through their innovative methodology, Reid and Happaney provided valuable insights into the psychological foundations of moral judgments and the nuanced ways humans perceive and categorize immoral actions.

Saturday, March 1, 2025

The Dangerous Illusion of AI Consciousness

Shannon Vallor
Closer to the Truth
Originally published 7 Aug 24

OpenAI recently announced GPT-4o: the latest, multimodal version of the generative AI GPT model class that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly-rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations upon command, expressed in both voice and text.

This next step in the commercial rollout of AI chatbot technology might seem like a nothingburger. After all, we don’t seem to be getting any nearer to AGI, or to the apocalyptic Terminator scenarios that the AI hype/doom cycle was warning of just one year ago. But it’s not benign at all—it might be the most dangerous moment in generative AI’s development.

What’s the problem? It’s far more than the ick factor of seeing yet another AI assistant marketed as a hyper-feminized, irrepressibly perky and compliant persona, one that will readily bend ‘her’ (its) emotional state to the will of the two men running the demo (plus another advertised bonus feature—you can interrupt ‘her’ all day long with no complaints!).

The bigger problem is the grand illusion of artificial consciousness that is now more likely to gain a stronger hold on many human users of AI, thanks to the multimodal, real-time conversational capacity of a GPT-4o-enabled chatbot and others like it, such as Google DeepMind’s Gemini Live. And consciousness is not the sort of thing it is good to have grand illusions about.


Here are some thoughts:

OpenAI's recent release of GPT-4o represents a significant milestone in generative AI technology. While the model does not demonstrate a dramatic intellectual leap over its predecessor, it introduces more natural and fluid real-time interactions, including sophisticated voice communication, image interpretation, and emotional intonation adjustments. These capabilities, however, extend far beyond mere technological improvement and raise profound questions about human-AI interaction.

The most critical concern surrounding GPT-4o is not its technical specifications, but the potential for creating a compelling illusion of consciousness. By enabling multimodal, dynamically social interactions, the AI risks deepening users' tendencies to anthropomorphize technology. This is particularly dangerous because humans have an innate, often involuntary propensity to attribute mental states to non-sentient objects, a tendency that sophisticated AI design can dramatically amplify.

The risks are multifaceted and potentially far-reaching. Users—particularly vulnerable populations like teenagers, emotionally stressed partners, or isolated elderly individuals—might develop inappropriate emotional attachments to these AI systems. These artificially intelligent companions, engineered to be perpetually patient, understanding, and responsive, could compete with and potentially supplant genuine human relationships. The AI's ability to customize its personality, remember conversation history, and provide seemingly empathetic responses creates a seductive alternative to the complexity of human interaction.

Critically, despite their impressive capabilities, these AI models are not conscious. They remain sophisticated statistical engines designed to extract and generate predictive patterns from human data. No serious researchers, including those at OpenAI and Google, claim these systems possess genuine sentience or self-awareness. They are fundamentally advanced language processing tools paired with sensory inputs, not sentient beings.

The potential societal implications are profound. As these AI assistants become more prevalent, they risk fundamentally altering our understanding of companionship, emotional support, and interpersonal communication. The danger lies not in some apocalyptic scenario of AI dominance, but in the more insidious potential for technological systems to gradually erode the depth and authenticity of human emotional connections.

Navigating this new technological landscape will require careful reflection, robust ethical frameworks, and a commitment to understanding the essential differences between artificial intelligence and human consciousness. While GPT-4o represents a remarkable technological achievement, its deployment demands rigorous scrutiny and a nuanced approach that prioritizes human agency and genuine interpersonal relationships.