Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, March 27, 2025

How Moral Case Deliberation Supports Good Clinical Decision Making

Inguaggiato, G., et al. (2019).
AMA Journal of Ethics, 21(10),
E913–E919.

Abstract

In clinical decision making, facts are presented and discussed, preferably in the context of both evidence-based medicine and patients’ values. Because clinicians’ values also have a role in determining the best courses of action, we argue that reflecting on both patients’ and professionals’ values fosters good clinical decision making, particularly in situations of moral uncertainty. Moral case deliberation, a form of clinical ethics support, can help elucidate stakeholders’ values and how they influence interpretation of facts. This article demonstrates how this approach can help clarify values and contribute to good clinical decision making through a case example.

Here are some thoughts:

This article discusses how moral case deliberation (MCD) supports good clinical decision-making. It argues that while evidence-based medicine and patient values are crucial, clinicians' values also play a significant role, especially in morally uncertain situations. MCD, a form of clinical ethics support, helps clarify the values of all stakeholders and how these values influence the interpretation of facts. The article explains how MCD differs from shared decision-making, emphasizing its focus on ethical dilemmas and understanding moral uncertainty among caregivers rather than reaching a shared decision with the patient. Through dialogue and a structured approach, MCD facilitates a deeper understanding of the situation, leading to better-informed and morally sensitive clinical decisions. The article uses a case study from a neonatal intensive care unit to illustrate how MCD can help resolve disagreements and uncertainties by exploring the different values held by nurses and physicians.

Wednesday, March 26, 2025

Surviving and thriving in spite of hate: Burnout and resiliency in clinicians working with patients attracted by violent extremism

Rousseau, C., et al. (2025).
American Journal of Orthopsychiatry.
Advance online publication.
https://doi.org/10.1037/ort0000832

Abstract

Violent extremism (VE) is often manifested through hate discourses, which are hurtful for their targets, shatter social cohesion, and provoke feelings of impending threat. In a clinical setting, these discourses may affect clinicians in different ways, eroding their capacity to provide care. This clinical article describes the subjective experiences and the coping strategies of clinicians engaged with individuals attracted by VE. A focus group was held with eight clinicians and complemented with individual interviews and field notes. Clinicians reported four categories of personal consequences. First, results show that the effect of massive exposure to hate discourses is associated with somatic manifestations and with the subjective impression of being dirty. Second, clinicians endorse a wide range of work-related affects, ranging from intense fear, anger, and irritation to sadness and numbing. Third, they perceive that their work has relational consequences on their families and friends. Last, clinicians also describe that their work transforms their vision of the world. In terms of coping strategies, team relations and a community of practice were identified as supportive. With time, the pervasive uncertainty, the relative lack of institutional support, and the work-related emotional burden are associated with disengagement and burnout, in particular in practitioners working full-time with this clientele. Working with clients attracted to or engaged in VE is very demanding for clinicians. To mitigate the emotional burden of being frequently confronted with hate and threats, team relations, decreasing clinical exposure, and avoiding heroic positions help prevent burnout.


Here are some thoughts:

The article explores the psychological impact on clinicians treating individuals drawn to violent extremism (VE). It documents how prolonged exposure to hate discourse can lead to somatic symptoms (e.g., nausea, headaches), emotional exhaustion, hypervigilance, and a sense of being "contaminated" by hate. Clinicians reported struggling with moral dilemmas, fearing responsibility if a patient acts violently, and experiencing disruptions in their personal relationships.

Despite these challenges, team support, supervision, humor, and structured work boundaries were identified as critical resilience factors. The study highlights the need for institutional backing and clinician training to manage moral distress, avoid burnout, and sustain ethical engagement with patients who espouse extremist views.

Tuesday, March 25, 2025

Reasoning and empathy are not competing but complementary features of altruism

Law, K. F., et al. (2025, February 8).
PsyArXiv.

Abstract

Humans can care about distant strangers, an adaptive advantage that enables our species to cooperate in increasingly large-scale groups. Theoretical frameworks accounting for an expansive moral circle and altruistic behavior are often framed as a dichotomy between competing pathways of emotion-driven empathy versus logic-driven reasoning. Here, in a pre-registered investigation comparing variations in empathy and reasoning capacities across different exceptionally altruistic populations –– effective altruists (EAs) who aim to maximize welfare gains with their charitable contributions (N = 119) and extraordinary altruists (XAs) who have donated organs to strangers (N = 65) –– alongside a third sample of demographically-similar general population controls (N = 176), we assess how both capacities contribute to altruistic behaviors that transcend conventional parochial boundaries. We find that, while EAs generally manifest heightened reasoning ability and XAs heightened empathic ability, both empathy and reasoning independently predict greater engagement in equitable and effective altruism on laboratory measures and behavioral tasks. Interaction effects suggest combining empathy and reasoning often yields the strongest willingness to prioritize welfare impartially and maximize impact. These results highlight complementary roles for empathy and reasoning in overcoming biases that constrain altruism, supporting a unified framework for expansive altruism and challenging the empathy-reasoning dichotomy in existing theory.

The article is linked above.

Here are some thoughts:

This research challenges the traditional dichotomy between empathy and reasoning in altruistic behavior. Rather than viewing them as opposing forces, the study argues that both cognitive and emotional capacities contribute independently to altruistic actions that transcend parochial biases. To explore this, the researchers examined three groups: Effective Altruists (EAs), who emphasize reasoned decision-making to maximize the welfare impact of their charitable actions; Extraordinary Altruists (XAs), who have demonstrated extreme altruism by donating organs to strangers; and a demographically similar general population control group.

The findings reveal that EAs tend to exhibit stronger reasoning abilities, while XAs demonstrate heightened empathy. However, both cognitive and emotional capacities play crucial roles in fostering altruism that prioritizes impartial welfare and maximizes impact. This challenges the prevailing notion that empathy is inherently biased and ineffective in promoting broad, equitable altruism. Instead, the study suggests that empathy, when cultivated, can complement reasoning to enhance prosocial motivation. Furthermore, while XAs engage in altruistic behavior primarily driven by emotional responses, EAs rely more on deliberative reasoning. Despite these differences, both groups demonstrate a commitment to helping distant others, suggesting that there are distinct but overlapping pathways to altruism.
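As a concrete illustration of how an interaction effect like the one described could be examined, here is a minimal, hypothetical regression sketch; the column and file names are assumptions for illustration, not materials or code from the study.

```python
# Hypothetical sketch: does the effect of empathy on an altruism measure
# depend on reasoning ability? Column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("altruism_study.csv")  # assumed file: one row per participant

# Main effects plus interaction; a reliable positive empathy:reasoning
# coefficient would indicate the two capacities combine super-additively.
model = smf.ols("altruism ~ empathy * reasoning", data=df).fit()
print(model.summary())
```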

For psychologists and other mental health professionals, these findings have significant implications. Understanding the cognitive and emotional foundations of altruism can inform therapeutic interventions aimed at fostering prosocial behavior in individuals who struggle with social engagement, such as those with psychopathy or social anhedonia. Additionally, the research challenges assumptions about empathy, showing that it can be expanded beyond parochial biases, which is particularly relevant for training programs that aim to develop empathy in clinicians, social workers, and caregivers. The study also contributes to broader ethical and moral discussions about how to encourage compassionate and rational decision-making in fields such as healthcare, philanthropy, and policymaking. Ultimately, this research highlights the importance of integrating both empathy and reasoning in efforts to promote altruism, offering valuable insights for psychology, psychotherapy, and social work.

Monday, March 24, 2025

Relational Norms for Human-AI Cooperation

Earp, B. D., et al. (2025).
arXiv.

Abstract

How we should design and interact with so-called “social” artificial intelligence (AI) depends, in part, on the socio-relational role the AI serves to emulate or occupy. In human society, different types of social relationship exist (e.g., teacher-student, parent-child, neighbors, siblings, and so on) and are associated with distinct sets of prescribed (or proscribed) cooperative functions, including hierarchy, care, transaction, and mating. These relationship-specific patterns of prescription and proscription (i.e., “relational norms”) shape our judgments of what is appropriate or inappropriate for each partner within that relationship. Thus, what is considered ethical, trustworthy, or cooperative within one relational context, such as between friends or romantic partners, may not be considered as such within another relational context, such as between strangers, housemates, or work colleagues. Moreover, what is appropriate for one partner within a relationship, such as a boss giving orders to their employee, may not be appropriate for the other relationship partner (i.e., the employee giving orders to their boss) due to the relational norm(s) associated with that dyad in the relevant context (here, hierarchy and transaction in a workplace context). Now that artificially intelligent “agents” and chatbots powered by large language models (LLMs), are increasingly being designed and used to fill certain social roles and relationships that are analogous to those found in human societies (e.g., AI assistant, AI mental health provider, AI tutor, AI “girlfriend” or “boyfriend”), it is imperative to determine whether or how human-human relational norms will, or should, be applied to human-AI relationships. Here, we systematically examine how AI systems' characteristics that differ from those of humans, such as their likely lack of conscious experience and immunity to fatigue, may affect their ability to fulfill relationship-specific cooperative functions, as well as their ability to (appear to) adhere to corresponding relational norms. We also highlight the "layered" nature of human-AI relationships, wherein a third party (the AI provider) mediates and shapes the interaction. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms—for a wide range of relationship types—will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

Here are some thoughts:

This article details the intricate dynamics of how artificial intelligence (AI) systems, particularly those designed to mimic social roles, should interact with humans in a manner that is both ethically sound and socially beneficial. Authored by a diverse team of experts from various disciplines, the paper posits that understanding and applying human-human relational norms to human-AI interactions is essential for fostering ethical, trustworthy, and advantageous outcomes. The authors draw upon the Relational Norms model, which identifies four primary cooperative functions in human relationships—care, transaction, hierarchy, and mating—that guide behavior and expectations within different types of relationships, such as parent-child, teacher-student, or romantic partnerships.
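To make the framework concrete, here is a small, purely illustrative sketch (not from the paper) of how relationship types might be encoded as weightings over the four cooperative functions, so that a designer could ask which norms are most salient for a given AI role; all role names and numbers are assumptions.

```python
# Illustrative encoding of the Relational Norms idea: each relationship type
# gets a profile over the four cooperative functions (care, transaction,
# hierarchy, mating). Roles and weights are made up for demonstration.
from dataclasses import dataclass

@dataclass
class RelationalProfile:
    care: float          # weight of caring/welfare norms in this relationship
    transaction: float   # weight of exchange/reciprocity norms
    hierarchy: float     # weight of asymmetric-authority norms
    mating: float        # weight of romantic/sexual norms

PROFILES = {
    "ai_tutor":     RelationalProfile(care=0.6, transaction=0.4, hierarchy=0.7, mating=0.0),
    "ai_companion": RelationalProfile(care=0.9, transaction=0.2, hierarchy=0.1, mating=0.5),
    "ai_assistant": RelationalProfile(care=0.3, transaction=0.8, hierarchy=0.5, mating=0.0),
}

def norm_salience(role: str, function: str) -> float:
    """Return how salient a given cooperative function is for an AI role."""
    return getattr(PROFILES[role], function)

print(norm_salience("ai_tutor", "hierarchy"))  # 0.7 in this toy example
```

A representation along these lines would also let the "layered" provider relationship described next be modeled as an additional party with its own, often transactional, profile.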

As AI systems increasingly occupy social roles traditionally held by humans, such as assistants, tutors, and companions, the paper examines how AI's unique characteristics, such as the lack of consciousness and immunity to fatigue, influence their ability to fulfill these roles and adhere to relational norms. A significant aspect of human-AI relationships highlighted in the document is their "layered" nature, where a third party—the AI provider—mediates and shapes the interaction. This structure can introduce risks, such as changes in AI behavior or the monetization of user interactions, which may not align with the user's best interests.

The authors emphasize the importance of transparency in AI design, urging developers to clearly communicate the capabilities, limitations, and data practices of their systems to prevent exploitation and build trust. They also call for adaptive regulatory frameworks that consider the specific relational contexts of AI systems, ensuring user protection and ethical alignment. Users, too, are encouraged to educate themselves about AI and relational norms to engage more effectively and safely with these technologies. The paper concludes by advocating for ongoing interdisciplinary research and collaboration to address the evolving challenges posed by AI in social roles, ensuring that AI systems are developed and governed in ways that respect human values and contribute positively to society.

Sunday, March 23, 2025

What We Do When We Define Morality (and Why We Need to Do It)

Dahl, A. (2023).
Psychological Inquiry, 34(2), 53–79.

Abstract

Psychological research on morality relies on definitions of morality. Yet the various definitions often go unstated. When unstated definitions diverge, theoretical disagreements become intractable, as theories that purport to explain “morality” actually talk about very different things. This article argues that we need to define morality and considers four common ways of doing so: the linguistic, the functionalist, the evaluating, and the normative. Each has encountered difficulties. To surmount those difficulties, I propose a technical, psychological, empirical, and distinctive definition of morality: obligatory concerns with others’ welfare, rights, fairness, and justice, as well as the reasoning, judgment, emotions, and actions that spring from those concerns. By articulating workable definitions of morality, psychologists can communicate more clearly across paradigms, separate definitional from empirical disagreements, and jointly advance the field of moral psychology.


Here are some thoughts:

The article discusses the importance of defining morality in psychological research and the challenges associated with this task. Dahl argues that all psychological research on morality relies on definitions, but these definitions often go unstated, leading to communication problems and intractable disagreements when researchers use different unstated definitions.

The article examines four common approaches to defining morality: linguistic (whatever people call "moral"), functionalist (defined by social function), evaluating (collection of right actions), and normative (all judgments about right and wrong). After discussing the difficulties with each approach, Dahl proposes an alternative definition of morality: "obligatory concerns with others' welfare, rights, fairness, and justice, as well as the reasoning, judgment, emotions, and actions that spring from those concerns." This definition is described as technical, psychological, empirical, and distinctive.

The article emphasizes the need for clear definitions to communicate across paradigms, separate definitional from empirical disagreements, and advance the field of moral psychology. Dahl provides examples of debates in moral psychology (e.g., about obedience to authority, harm-based morality) that are complicated by lack of clear definitions. In conclusion, while defining morality is challenging due to its many meanings in ordinary language, Dahl argues that a workable scientific definition is both possible and necessary for progress in the field of moral psychology.

Saturday, March 22, 2025

Advancing Transgender Health amid Rising Policy Threats

Coelho, D. R., Chen, A. L., &
Keuroghlian, A. S. (2025).
New England Journal of Medicine.

Abstract

The current U.S. political landscape poses escalating challenges for transgender health. Clinicians, researchers, policymakers, and advocates can act to counteract regressive policies.

Here is my summary:

The evolving political landscape under the Trump-Vance administration presents significant challenges to transgender health in the United States. Executive orders redefining sex and restricting gender-affirming care, coupled with state-level legislative efforts, are systematically dismantling protections for transgender and nonbinary individuals. These policies, often based on misinformation and discriminatory intent, are resulting in clinic closures, increased geographic barriers to care, and the denial of essential, evidence-based medical interventions. The denial of gender-affirming care, widely recognized as lifesaving and crucial for mental health, is having devastating consequences, including heightened risks of depression, anxiety, and suicidal ideation. Legal challenges, such as the United States v. Skrmetti case, highlight the constitutional implications of these restrictions, potentially setting precedents that could further limit access to care across numerous states. Moreover, broader policy initiatives, like Project 2025, aim to redefine sex at the federal level, threatening to institutionalize discrimination in healthcare, education, and employment. To counteract these regressive measures, a multi-faceted approach is necessary, encompassing strengthened federal nondiscrimination protections, robust legal advocacy, and the reinforcement of community-based healthcare networks. Professional medical associations need to reaffirm their commitment to transgender health, while integrating legal and medical expertise to combat disinformation. Ultimately, prioritizing the lived experiences of transgender and nonbinary individuals and advocating for equitable policies are critical to safeguarding their health and well-being.

Friday, March 21, 2025

Should the Mental Health of Psychotherapists Be One of the Transtheoretical Principles of Change?

Knapp, S., Sternlieb, J., & Kornblith, S. 
(2025, February).
Psychotherapy Bulletin, 60(2).

Often, psychotherapy researchers find that their contributions to psychotherapy get lost in the discussions of complex methodological issues that appear far removed from the real-life work of psychotherapists. Consequently, few psychotherapists regularly read research-based studies, and researchers communicate primarily with each other and less with psychotherapists. Fortunately, the pioneering work of Castonguay et al. (2019) has identified evidence-supported principles of change that improve patient outcomes, regardless of the psychotherapist’s theoretical orientation. They help bridge the researcher/practitioner gap by identifying, in succinct terms, evidence-supported findings related to improved patient outcomes. Psychotherapy scholars identified these principles after exhaustively reviewing thousands of studies on psychotherapy.

Of course, none of the principles of change should be implemented in isolation. Nevertheless, together, they can guide psychotherapists on how to improve and personalize their treatment plans. Examples have been given of how psychotherapists can apply these change principles to improve the treatment of patients with suicidal thoughts (Knapp, 2022) and anxiety, depression, and other disorders (Castonguay et al., 2019).

Some of the principles appeared to support the conventional wisdom on what is effective in psychotherapy. For example, Principle 3 states, “Clients with more secure attachment may benefit more from psychotherapy than clients with less secure attachment” (McAleavey et al., 2019, p. 16). However, other principles conflict with some popular beliefs about the effectiveness of psychotherapy. For example, Principle 20 states, “Clients with substance use problems may be equally likely to benefit from psychotherapy delivered by a therapist with or without his or her own history of substance use problems” (McAleavey et al., 2019, p. 17).


Here are some thoughts:

The article argues for recognizing the mental health of psychotherapists as a transtheoretical principle of change, emphasizing its impact on patient outcomes. Building on the work of Castonguay et al. (2019), which identified principles that enhance patient outcomes across theoretical orientations, the authors propose that a psychotherapist's emotional well-being should be considered a key factor in effective treatment. They suggest that clients benefit more when their therapist experiences fewer symptoms of mental distress, highlighting the need for psychotherapists to prioritize self-care and emotional health.

Psychotherapists face numerous stressors, including administrative burdens, exposure to patient traumas, and the emotional demands of their work, all of which have intensified during the COVID-19 pandemic. Research indicates that higher levels of therapist burnout and distress correlate with poorer patient outcomes, underscoring the importance of addressing these issues. To enhance patient care, the article recommends integrating self-care practices into psychotherapy training and fostering supportive environments within institutions. By promoting self-awareness, self-compassion, and social connections, psychotherapists can better manage their emotional well-being and provide more effective treatment. The authors emphasize the need for ongoing research and open discussions to destigmatize mental health issues within the profession, ensuring that psychotherapists feel supported in seeking help when needed. Ultimately, prioritizing the mental health of psychotherapists is essential for improving both patient outcomes and the well-being of mental health professionals.

Thursday, March 20, 2025

As AI nurses reshape hospital care, human nurses are pushing back

Perrone, M. (2025, March 16).
AP News.

The next time you’re due for a medical exam you may get a call from someone like Ana: a friendly voice that can help you prepare for your appointment and answer any pressing questions you might have.

With her calm, warm demeanor, Ana has been trained to put patients at ease — like many nurses across the U.S. But unlike them, she is also available to chat 24-7, in multiple languages, from Hindi to Haitian Creole.

That’s because Ana isn’t human, but an artificial intelligence program created by Hippocratic AI, one of a number of new companies offering ways to automate time-consuming tasks usually performed by nurses and medical assistants.

It’s the most visible sign of AI’s inroads into health care, where hundreds of hospitals are using increasingly sophisticated computer programs to monitor patients’ vital signs, flag emergency situations and trigger step-by-step action plans for care — jobs that were all previously handled by nurses and other health professionals.

Hospitals say AI is helping their nurses work more efficiently while addressing burnout and understaffing. But nursing unions argue that this poorly understood technology is overriding nurses’ expertise and degrading the quality of care patients receive.

The info is linked above.

Here are some thoughts:

The article details the increasing use of AI in healthcare to automate nursing tasks, sparking union concerns about patient safety and the risk of AI overriding human expertise. Licensing boards cannot license AI products because licensing is fundamentally designed for individuals, not tools. It establishes accountability based on demonstrated competence, which is difficult to apply to AI due to complex liability issues and the challenge of tracing AI outputs to specific actions. AI lacks the inherent personhood and professional responsibility that licensing demands, making it unaccountable for harm.

Wednesday, March 19, 2025

More patient-centered care, better healthcare: the association between patient-centered care and healthcare outcomes in inpatients.

Yu, C., Xian, Y., et al. (2023).
Frontiers in Public Health, 11, 1148277.

Abstract

Objective
The objective of this study is to explore the association between patient-centered care (PCC) and inpatient healthcare outcomes, including self-reported physical and mental health status, subjective necessity of hospitalization, and physician-induced demand behaviors.

Methods
A cross-sectional survey assessing patient-centered care was administered via QR codes to inpatients of comprehensive hospitals in Jiayuguan, Gansu, after discharge between September 2021 and December 2021, yielding 5,222 respondents. The questionnaire included a translated 6-item version of the PCC questionnaire, items on physician-induced behaviors, and patients’ sociodemographic characteristics, including gender, household registration, age, and income. Logistic regression analyses were conducted to assess whether PCC promoted self-reported health and the subjective necessity of hospitalization and decreased physician-induced demand. Interactions between PCC and household registration were included to assess the effect of differences between adequate and inadequate healthcare resources.

Results
PCC promoted the patient’s self-reported physical (OR = 4.154, p < 0.001) and mental health (OR = 5.642, p < 0.001) and subjective necessity of hospitalization (OR = 6.160, p < 0.001). Meanwhile, PCC reduced physician-induced demand in advising to buy medicines outside (OR = 0.415, p < 0.001), paying at the outpatient clinic (OR = 0.349, p < 0.001), issuing unnecessary or repeated prescriptions and medical tests (OR = 0.320, p < 0.001), and requiring discharge and readmitting (OR = 0.389, p < 0.001).

Conclusion
By improving health outcomes for inpatients and reducing the risk of physician-induced demand, PCC can benefit both patients and health insurance systems. Therefore, PCC should be implemented in healthcare settings.
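For readers curious how odds ratios like those above are typically produced, here is a minimal, hypothetical logistic-regression sketch; the column and file names are invented for illustration and are not the authors' code or data.

```python
# Hypothetical sketch of a logistic regression yielding odds ratios of the
# kind reported in the abstract; all names below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pcc_survey.csv")  # assumed file: one row per inpatient respondent

# Binary outcome (e.g., good self-reported physical health) modeled on the
# PCC score, adjusting for sociodemographic covariates.
fit = smf.logit(
    "good_physical_health ~ pcc_score + gender + age + income + household_registration",
    data=df,
).fit()

print(np.exp(fit.params))      # exponentiated coefficients = odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale
```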


Here are some thoughts:

The article explores how patient-centered care (PCC) influences healthcare outcomes, particularly self-reported physical and mental health, perceived necessity of hospitalization, and physician-induced demand behaviors.

For psychologists, this study offers key insights into the psychological impact of PCC. It highlights how involving patients in decision-making not only improves their perceived health but also enhances their sense of agency and engagement in treatment. Patients who experience higher PCC report better physical and mental health, suggesting that feeling heard and respected plays a crucial role in recovery. This aligns with psychological theories on autonomy and self-efficacy, which emphasize the importance of perceived control in well-being.

Another important finding is that PCC reduces physician-induced demand, such as unnecessary prescriptions or medical tests. This suggests that clear, transparent communication between healthcare providers and patients can mitigate over-treatment and enhance trust. For psychologists working in healthcare settings, this underscores the importance of training providers in effective patient communication and shared decision-making to improve adherence and outcomes.

However, the study also notes variations in PCC effectiveness based on socioeconomic status. Urban patients with greater access to healthcare resources reported less benefit from PCC, possibly due to higher expectations. This suggests that tailoring PCC interventions to different populations is essential. Psychologists can contribute by assessing patient expectations and designing interventions that enhance patient engagement while managing unrealistic healthcare beliefs.

Tuesday, March 18, 2025

Social evaluation by preverbal infants.

Hamlin, J. K., Wynn, K., & Bloom, P. (2007).
Nature, 450(7169), 557–559. 

Abstract

The capacity to evaluate other people is essential for navigating the social world. Humans must be able to assess the actions and intentions of the people around them, and make accurate decisions about who is friend and who is foe, who is an appropriate social partner and who is not. Indeed, all social animals benefit from the capacity to identify individual conspecifics that may help them, and to distinguish these individuals from others that may harm them. Human adults evaluate people rapidly and automatically on the basis of both behaviour and physical features [1–6], but the ontogenetic origins and development of this capacity are not well understood. Here we show that 6- and 10-month-old infants take into account an individual’s actions towards others in evaluating that individual as appealing or aversive: infants prefer an individual who helps another to one who hinders another, prefer a helping individual to a neutral individual, and prefer a neutral individual to a hindering individual. These findings constitute evidence that preverbal infants assess individuals on the basis of their behaviour towards others. This capacity may serve as the foundation for moral thought and action, and its early developmental emergence supports the view that social evaluation is a biological adaptation.

Here are some thoughts:

Researchers have long debated whether babies are born with a sense of morality or develop it through experience. Initial studies suggested infants prefer helpful individuals, but recent research casts doubt on the idea of hardwired morality. A large replication study using video stimuli found that infants did not consistently favor pro-social figures.

Experts suggest that babies may need more time to develop strong moral impressions, and that subtle changes in research methods can influence infant behavior. Theories from Piaget and Kohlberg suggest moral reasoning evolves over time, requiring cognitive growth that babies have not yet reached. Cultural influences and parental guidance also play a significant role in shaping a child's moral compass.

Researchers are exploring new methods like eye-tracking and brain imaging to better understand infant responses. Some propose that innate compassion or empathy may exist, while others believe moral awareness develops through repeated exposure to caring acts. Large-scale, cross-cultural studies and new data collection methods may provide a fuller picture of early moral inclinations. The debate continues, with ongoing research aiming to understand how humans begin to judge right from wrong.

Monday, March 17, 2025

Deaths of Despair: A Major and Increasing Contributor to United States Deaths

Mejia, M. C., et al. (2024).
Advances in Preventive Medicine
and Health Care, 7(2).

Abstract
Objective: The International Classification of Diseases (ICD) assumes that each disease entity is distinct. The hypothesis that disease entities may share underlying and contributory factors has led to the emerging concept of “deaths of despair.” Our objective was to explore temporal trends in the occurrence of United States (US) deaths of despair from 1999 to 2021.

Methods: We used the previously established definition of deaths of despair as a constellation of 19 underlying causes: chronic hepatitis; liver fibrosis/cirrhosis; suicide/sequelae of suicide; and poisoning (accidental or undetermined intent) by or exposure to nonopioid analgesics, antipyretics, antirheumatics, antiepileptics, sedative-hypnotics, antiparkinson and psychotropic drugs, narcotics, psychodysleptics, drugs acting on the central nervous system, and alcohol. We used mortality data for those 25 to 74 years of age from 1999 to 2021 to calculate annual percent changes (APC) as measures of effect size and joinpoint regression to test for statistical significance. Data were drawn from the US Centers for Disease Control and Prevention (CDC) Wide-Ranging Data for Epidemiologic Research (WONDER) and the Multiple Cause of Death files.

Results: Using this definition, deaths of despair were the fifth leading cause of US mortality in 2021. From 1999 to 2021, the APC for deaths of despair increased 2.5-fold among people aged 25 to 74 years.

Conclusions: Using this definition, deaths of despair would have been the 5th leading cause of death in the US in 2021. Healthcare providers should have an increased awareness of deaths of despair. Public health practitioners may consider new initiatives to prevent deaths of despair locally, regionally, and nationally. 
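As a rough illustration of the APC measure used above (not the authors' actual joinpoint analysis), the conventional calculation fits a log-linear trend to the yearly rates and transforms the slope; the rates below are placeholders, not CDC data.

```python
# Minimal sketch of the conventional annual percent change (APC) calculation:
# regress log(rate) on calendar year and convert the slope to a percentage.
# Rates are made-up placeholders, not CDC WONDER data.
import numpy as np

years = np.arange(1999, 2022)
rates = 30.0 * 1.04 ** (years - 1999)  # illustrative deaths per 100,000, rising ~4%/year

slope, _ = np.polyfit(years, np.log(rates), 1)
apc = 100.0 * (np.exp(slope) - 1.0)
print(f"APC = {apc:.1f}% per year")    # ~4.0% for these placeholder rates
```

A full joinpoint analysis additionally searches for years where the trend changes slope, which this single-segment sketch omits.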

Here are some thoughts:

This research article examines the increasing trend of "deaths of despair" in the United States from 1999 to 2021, defining these deaths as those resulting from chronic hepatitis, liver cirrhosis, suicide, and poisonings related to substances like alcohol and drugs. Analyzing mortality data from the CDC, the study reveals a 2.5-fold increase in these deaths among individuals aged 25-74. In 2021, deaths of despair would have been the fifth leading cause of death in the U.S., surpassing cerebrovascular diseases, if categorized as such. The authors advocate for integrated strategies addressing both clinical and socioeconomic factors, including enhanced mental health services, and suggest considering a specific classification for deaths of despair in future ICD revisions.

This study underscores the urgent need for psychologists to broaden their approach to mental health care by directly addressing the socioeconomic factors contributing to despair, such as economic instability and lack of access to healthcare. By understanding the influence of these external factors, psychologists can better tailor interventions to build resilience in vulnerable populations. 

Sunday, March 16, 2025

Computational Approaches to Morality

Bello, P., & Malle, B. F. (2023).
In R. Sun (Ed.), Cambridge Handbook 
of Computational Cognitive Sciences
(pp. 1037-1063). Cambridge University Press.

Introduction

Morality regulates individual behavior so that it complies with community interests (Curry et al., 2019; Haidt, 2001; Hechter & Opp, 2001). Humans achieve this regulation by motivating and deterring certain behaviors through the imposition of norms – instructions of how one should or should not act in a particular context (Fehr & Fischbacher, 2004; Sripada & Stich, 2006) – and, if a norm is violated, by levying sanctions (Alexander, 1987; Bicchieri, 2006). This chapter examines the mental and behavioral processes that facilitate human living in moral communities and how these processes might be represented computationally and ultimately engineered in embodied agents.

Computational work on morality arises from two major sources. One is empirical moral science, which accumulates knowledge about a variety of phenomena of human morality, such as moral decision making, judgment, and emotions. Resulting computational work tries to model and explain these human phenomena. The second source is philosophical ethics, which has for millennia discussed moral principles by which humans should live. Resulting computational work is often labeled machine ethics, which is the attempt to create artificial agents with moral capacities reflecting one or more of the ethical theories. A brief discussion of these two sources will ground the subsequent discussion of computational morality.


Here are some thoughts:

This chapter examines computational approaches to morality, driven by two goals: modeling human moral cognition and creating artificial moral agents ("machine ethics"). It maps key moral phenomena – behavior, judgments, emotions, sanctions, and communication – arguing these are shaped by social norms rather than innate brain circuits. Norms are community instructions specifying acceptable/unacceptable behavior. The chapter explores philosophical ethics: deontology (duty-based ethics, exemplified by Kant, Rawls, Ross) and consequentialism (outcome-based ethics, particularly utilitarianism). It addresses computational challenges like scaling, conflicting preferences, and framing moral problems. Finally, it surveys rule-based approaches, case-based reasoning, reinforcement learning, and cognitive science perspectives in modeling moral decision-making.
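To give a flavor of the rule-based strand mentioned above, here is a toy sketch (entirely illustrative, not drawn from the chapter) in which norms are stored as context-specific prescriptions and proscriptions and a proposed action is checked against them.

```python
# Toy rule-based norm checker: norms map (context, action) pairs to deontic
# statuses; anything not covered is treated as permitted. Illustrative only.
NORMS = {
    ("hospital", "disclose_patient_data"): "forbidden",
    ("hospital", "obtain_consent"): "obligatory",
    ("workplace", "give_orders_to_boss"): "forbidden",
}

def evaluate(context: str, action: str) -> str:
    """Return the deontic status of an action within a given context."""
    return NORMS.get((context, action), "permitted")

print(evaluate("hospital", "disclose_patient_data"))  # forbidden
print(evaluate("hospital", "offer_water"))            # permitted
```

Case-based and reinforcement-learning approaches discussed in the chapter replace such a fixed table with norms inferred from precedents or from feedback.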

Saturday, March 15, 2025

Understanding and supporting thinking and learning with generative artificial intelligence.

Agnoli, S., & Rapp, D. N. (2024).
Journal of Applied Research in Memory
and Cognition, 13(4), 495–499.

Abstract

Generative artificial intelligence (AI) is ubiquitous, appearing as large language model chatbots that people can query directly and collaborate with to produce output, and as authors of products that people are presented with through a variety of information outlets including, but not limited to, social media. AI has considerable promise for helping people develop expertise and for supporting expert performance, with a host of hedges and caveats to be applied in any related advocations. We propose three sets of considerations and concerns that may prove informative for theoretical discussions and applied research on generative AI as a collaborative thought partner. Each of these considerations is informed and inspired by well-worn psychological research on knowledge acquisition. They are (a) a need to understand human perceptions of and responses to AI, (b) the utility of appraising and supporting people’s control of AI, and (c) the importance of careful attention to the quality of AI output.

Here are some thoughts:

Generative AI, especially Large Language Models (LLMs), can aid human thinking and learning by acquiring knowledge and enhancing expert performance. However, realizing this potential requires considering psychological factors.

Firstly, how humans perceive and respond to AI is crucial. User trust, beliefs, and prior AI experiences influence AI’s effectiveness as a collaborative thought partner. Future research should explore how these perceptions affect AI adoption and learning outcomes.

Secondly, control in human-AI interactions is vital for successful partnerships. Clear roles, expertise, and decision-making authority ensure productive collaboration, and empowering users to customize interactions enhances learning and builds trust.

Thirdly, AI output quality plays a central role in learning. Addressing inaccuracies, biases, and “hallucinations” ensures reliability, and further research is needed to improve and evaluate AI-generated content, especially for education.

Lastly, the rapid AI evolution requires users to be adaptable and equipped with strong metacognitive skills. Metacognition—thinking about one’s thinking—is crucial for navigating AI interactions. Understanding how users process AI information and designing educational interventions to increase AI awareness are essential steps. By fostering critical thinking and self-regulation, users can better integrate AI-generated insights into their learning processes.

Generative AI holds promise for enhancing human thinking and learning, but its success depends on addressing human factors, ensuring output quality, and promoting adaptability. Integrating psychological insights and emphasizing metacognitive awareness can harness AI responsibly and effectively. This approach fosters a collaborative relationship between humans and AI, where technology augments intelligence without undermining autonomy, advancing knowledge acquisition and learning meaningfully.

Friday, March 14, 2025

Federal Agency Dedicated to Mental Illness and Addiction Faces Huge Cuts

Trump is Burning Down SAMHSA
SAMHSA Braces for 50% Staff Reduction

The New York Times
Originally posted March 13, 2025

The Substance Abuse and Mental Health Services Administration has already closed offices and could see staff numbers reduced by 50 percent.

Every day, Dora Dantzler-Wright and her colleagues distribute overdose reversal drugs on the streets of Chicago. They hold training sessions on using them and help people in recovery from drug and alcohol addiction return to their jobs and families.

They work closely with the federal government through an agency that monitors their productivity, connects them with other like-minded groups and dispenses critical funds that keep their work going.

But over the last few weeks, Ms. Wright’s phone calls and emails to Washington have gone unanswered. Federal advisers from the agency’s local office — who supervise her group, the Chicago Recovering Communities Coalition, as well as addiction programs throughout six Midwestern states and 34 tribes — are gone. “We just continue to do the work without any updates from the feds at all,” Ms. Wright said. “But we’re lost.”


Here is a summary:

The Substance Abuse and Mental Health Services Administration (SAMHSA), a federal agency addressing mental illness and addiction, is facing significant staff cuts, potentially up to 50%. This is causing concern among those who rely on the agency for support and funding, such as community organizations providing addiction recovery services.   

SAMHSA plays a critical role in overseeing the 988 suicide hotline, regulating opioid treatment clinics, funding drug courts, and providing resources for addiction prevention and treatment. While overdose fatalities have been declining, they remain significantly higher than in 2019, and experts fear that these cuts will hinder the agency's ability to address the ongoing behavioral health crises.   

The cuts are happening through layoffs and "voluntary separations," and there is speculation that SAMHSA could be folded into another agency or have its funding and staff reduced to 2019 levels. This has raised concerns about reduced oversight, accountability, and the potential for negative impacts on relapse rates and overall health outcomes.

Thursday, March 13, 2025

AI language model rivals expert ethicist in perceived moral expertise

Dillion, D., Mondal, D., Tandon, N., & Gray, K. (2025). 
Scientific Reports, 15(1).

Abstract

People view AI as possessing expertise across various fields, but the perceived quality of AI-generated moral expertise remains uncertain. Recent work suggests that large language models (LLMs) perform well on tasks designed to assess moral alignment, reflecting moral judgments with relatively high accuracy. As LLMs are increasingly employed in decision-making roles, there is a growing expectation for them to offer not just aligned judgments but also demonstrate sound moral reasoning. Here, we advance work on the Moral Turing Test and find that Americans rate ethical advice from GPT-4o as slightly more moral, trustworthy, thoughtful, and correct than that of the popular New York Times advice column, The Ethicist. Participants perceived GPT models as surpassing both a representative sample of Americans and a renowned ethicist in delivering moral justifications and advice, suggesting that people may increasingly view LLM outputs as viable sources of moral expertise. This work suggests that people might see LLMs as valuable complements to human expertise in moral guidance and decision-making. It also underscores the importance of carefully programming ethical guidelines in LLMs, considering their potential to influence users’ moral reasoning.


Here are some thoughts:

This research investigates how people perceive AI, particularly large language models (LLMs) like GPT-4o, as moral experts. The study compares the ethical advice and justifications provided by GPT models to those of "The Ethicist" from the New York Times and a representative sample of Americans. Findings reveal that participants rated GPT-4o's advice as slightly more moral, trustworthy, thoughtful, and correct than that of the renowned ethicist, and that GPT models outperformed average Americans in justifying their moral judgments. This suggests a potential shift in how people perceive moral authority, with LLMs increasingly seen as viable sources of moral expertise.

The study underscores the importance of carefully programming ethical guidelines into LLMs, given their potential to influence users' moral reasoning. It also raises questions about the psychology of trust in AI, how AI-generated moral advice interacts with existing moral intuitions and biases, and the impact of moral language on perceptions of credibility. This research highlights the need for interdisciplinary collaboration between ethicists, psychologists, and computer scientists to address the complex ethical and psychological implications of AI moral reasoning and ensure its responsible and beneficial use.

Wednesday, March 12, 2025

An Empirical Test of the Role of Value Certainty in Decision Making

Lee, D., & Coricelli, G. (2020).
Frontiers in Psychology, 11.

Abstract

Most contemporary models of value-based decisions are built on value estimates that are typically self-reported by the decision maker. Such models have been successful in accounting for choice accuracy and response time, and more recently choice confidence. The fundamental driver of such models is choice difficulty, which is almost always defined as the absolute value difference between the subjective value ratings of the options in a choice set. Yet a decision maker is not necessarily able to provide a value estimate with the same degree of certainty for each option that he encounters. We propose that choice difficulty is determined not only by absolute value distance of choice options, but also by their value certainty. In this study, we first demonstrate the reliability of the concept of an option-specific value certainty using three different experimental measures. We then demonstrate the influence that value certainty has on choice, including accuracy (consistency), choice confidence, response time, and choice-induced preference change (i.e., the degree to which value estimates change from pre- to post-choice evaluation). We conclude with a suggestion of how popular contemporary models of choice (e.g., race model, drift-diffusion model) could be improved by including option-specific value certainty as one of their inputs.


Here are some thoughts:

The article examines how individuals' certainty about the subjective value of options influences their decision-making processes. Traditional decision models often assume that people can assign precise value estimates to choices, but this study argues that certainty about those estimates varies and significantly impacts choice behavior.

For psychologists, this research offers key insights into decision-making processes, metacognition, and cognitive effort. The study demonstrates that higher value certainty leads to more consistent choices, greater confidence, and shorter decision times. Conversely, when individuals are uncertain about the value of an option, they deliberate longer and are more likely to change their preferences post-decision. These findings suggest that value certainty is an important factor in decision difficulty and should be integrated into psychological models of choice.

The research also highlights the connection between value certainty and cognitive effort. When people are less certain about an option's value, they invest more mental effort to refine their judgment, a process reflected in longer response times. This has implications for therapeutic settings, particularly in areas like cognitive-behavioral therapy (CBT) and schema therapy, where individuals may struggle with decision-making due to uncertainty about personal values or preferences. Helping clients develop greater clarity about their values could improve decision-making confidence and reduce cognitive strain.

Moreover, the study's findings challenge existing models like the Drift-Diffusion Model (DDM), which assumes uniform uncertainty across options. The authors argue that decision models should incorporate value certainty as an independent variable, as it better predicts choice behavior and cognitive engagement. For psychologists working with clients who experience decision paralysis or chronic indecisiveness, these insights reinforce the importance of addressing subjective confidence in value assessments.
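One simple way to picture the authors' suggestion is a drift-diffusion simulation in which low value certainty weakens the drift toward the better option; the sketch below is an illustrative toy model with made-up parameters, not the paper's fitted model.

```python
# Toy drift-diffusion trial in which option-specific value certainty scales
# the drift rate: lower certainty yields slower, less consistent choices.
# Parameters are illustrative only.
import numpy as np

def simulate_choice(v_left, v_right, certainty, threshold=1.0, dt=0.01, noise=0.1, seed=None):
    """Return (choice, response_time) for one simulated trial."""
    rng = np.random.default_rng(seed)
    drift = (v_left - v_right) * certainty  # certainty in (0, 1]
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("left" if evidence > 0 else "right"), round(t, 2)

print(simulate_choice(0.6, 0.4, certainty=0.9, seed=1))  # confident values: faster, more consistent
print(simulate_choice(0.6, 0.4, certainty=0.3, seed=1))  # uncertain values: slower, noisier
```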

Tuesday, March 11, 2025

Moral Challenges for Psychologists Working in Psychology and Law

Allan, A. (2018).
Psychiatry, Psychology and Law, 25(3), 485–499.

Abstract

States have an obligation to protect themselves and their citizens from harm, and they use the coercive powers of law to investigate threats, enforce rules and arbitrate disputes, thereby impacting on people's well-being and legal rights and privileges. Psychologists as a collective have a responsibility to use their abilities, knowledge, skill and experience to enhance law's effectiveness, efficiency, and reliability in preventing harm, but their professional behaviour in this collaboration must be moral. They could, however, find their personal values to be inappropriate or there to be insufficient moral guides and could find it difficult to obtain definitive moral guidance from law. The profession's ethical principles do, however, provide well-articulated, generally accepted and profession-appropriate guidance, but practitioners might encounter moral issues that can only be solved by the profession as a whole or society.

Here are some thoughts:

While psychologists play a crucial role in assisting the law to protect society through assessments, risk evaluations, and expert opinions, their work often intersects with coercive practices that can impact individual rights and well-being.  Psychologists must navigate the tension between societal protection and respect for human dignity, especially when involved in involuntary detention, forensic interviews, and risk assessments.  They are guided by core ethical principles such as non-maleficence, justice, fidelity, and respect, but these principles can conflict, requiring careful ethical decision-making.  Challenges are particularly pronounced in areas like risk assessment, where tools may be flawed or culturally biased, and where psychologists might face pressure to align with legal expectations, potentially compromising their objectivity and professional integrity.

The article emphasizes the need for psychologists in legal settings to maintain public trust, uphold human rights principles, and utilize structured, evidence-based, and culturally sensitive methods in their practice.  Beyond individual ethical conduct, psychologists have a responsibility to advocate for systemic improvements, including better assessment tools for diverse populations and robust ethical guidelines. Ultimately, the article underscores that psychologists in law must continually engage in moral reflection, striving for a just and effective legal system while minimizing harm and ensuring their practice remains ethically sound and socially responsible, guided by both professional ethics and universal human rights frameworks.

Monday, March 10, 2025

Emerging technologies and research ethics: Developing editorial policy using a scoping review and reference panel

Knight, S., et al. (2024).
PLoS ONE, 19(10), e0309715.

Abstract

Background
Emerging technologies and societal changes create new ethical concerns and greater need for cross-disciplinary and cross–stakeholder communication on navigating ethics in research. Scholarly articles are the primary mode of communication for researchers, however there are concerns regarding the expression of research ethics in these outputs. If not in these outputs, where should researchers and stakeholders learn about the ethical considerations of research?

Objectives
Drawing on a scoping review, analysis of policy in a specific disciplinary context (learning and technology), and reference group discussion, we address concerns regarding research ethics, in research involving emerging technologies through developing novel policy that aims to foster learning through the expression of ethical concepts in research.

Approach
This paper develops new editorial policy for expression of research ethics in scholarly outputs across disciplines. These guidelines, aimed at authors, reviewers, and editors, are underpinned by:
  • a cross-disciplinary scoping review of existing policy and adherence to these policies;
  • a review of emerging policies, and policies in a specific discipline (learning and technology); and,
  • a collective drafting process undertaken by a reference group of journal editors (the authors of this paper).

Results
Analysis arising from the scoping review indicates gaps in policy across a wide range of journals (54% have no statement regarding reporting of research ethics), and adherence (51% of papers reviewed did not refer to ethics considerations). Analysis of emerging and discipline-specific policies highlights gaps.

Conclusion
Our collective policy development process develops novel materials suitable for cross-disciplinary transfer, to address specific issues of research involving AI, and broader challenges of emerging technologies.

Here are some thoughts:

This research explored the intersection of emerging technologies and research ethics, focusing on the development of editorial policy. The authors employed a scoping review combined with a reference panel to identify key ethical challenges and tensions arising from the use of new technologies in research. The research highlights the need for updated and robust research ethics policies to address these challenges, particularly given the rapid advancements in fields like artificial intelligence. Essentially, the authors argue that existing ethical frameworks may not be sufficient to handle the complexities introduced by emerging technologies, and they propose a process for developing new editorial policies to guide ethical research practices in this evolving landscape.

Sunday, March 9, 2025

Digital Mirrors: AI Companions and the Self

Kouros, T., & Papa, V. (2024).
Societies, 14(10), 200.

Abstract

This exploratory study examines the socio-technical dynamics of Artificial Intelligence Companions (AICs), focusing on user interactions with AI platforms like Replika 9.35.1. Through qualitative analysis, including user interviews and digital ethnography, we explored the nuanced roles played by these AIs in social interactions. Findings revealed that users often form emotional attachments to their AICs, viewing them as empathetic and supportive, thus enhancing emotional well-being. This study highlights how AI companions provide a safe space for self-expression and identity exploration, often without fear of judgment, offering a backstage setting in Goffmanian terms. This research contributes to the discourse on AI’s societal integration, emphasizing how, in interactions with AICs, users often craft and experiment with their identities by acting in ways they would avoid in face-to-face or human-human online interactions due to fear of judgment. This reflects front-stage behavior, in which users manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing deeper aspects of the self.

Here are some thoughts:

The article investigates how users interact with Artificial Intelligence Companions (AICs) like Replika, focusing on self-presentation, emotional well-being, and identity exploration. Through qualitative methods such as interviews and digital ethnography, the study reveals that users often form deep emotional bonds with AICs, viewing them as empathetic and supportive companions. These interactions provide a judgment-free space for self-expression, particularly for those experiencing loneliness or social isolation. However, this emotional dependency raises concerns about the long-term implications of substituting human connections with AI. Additionally, AICs serve as a "backstage" space where users feel safe to experiment with different aspects of their identity, presenting idealized versions of themselves or engaging in role-playing. While users appreciate the AI's human-like responses, some remain aware of its artificial nature, leading to mixed feelings about the authenticity of these relationships.

Despite the benefits, the study highlights significant ethical and privacy concerns. Users worry about how their data is used and seek greater transparency from AI developers. The research underscores the need for robust ethical frameworks to ensure AI technologies enhance emotional well-being without compromising personal integrity or societal values. By balancing the advantages of AI companionship with awareness of its limitations, the study contributes to the broader discourse on human-AI interactions, emphasizing the importance of responsible AI integration into daily life.

Saturday, March 8, 2025

The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?

Demichelis, R. (2024, November 19).
arXiv.org.

This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks), not only in terms of techniques but also in our interactions with machines. It also addresses the philosophical tradition of hermeneutics (Don Ihde, Wilhelm Dilthey) to highlight a parallel with this movement and to demystify the idea of human-like AI.


Here are some thoughts:

This paper examines how modern AI systems, like ChatGPT, have evolved from simply executing commands to interpreting ambiguous human language. The paper draws on the tradition of hermeneutics to argue that while AI can mimic interpretation through data processing, it lacks the genuine understanding and imaginative insight characteristic of human cognition. This mechanical approximation of interpretation raises important concerns regarding transparency, bias, and ethical oversight, prompting a reevaluation of how we define knowledge and meaning in the age of AI.

Friday, March 7, 2025

Genomics yields biological and phenotypic insights into bipolar disorder

O’Connell, K. S., et al. (2025).
Nature.

Abstract

Bipolar disorder is a leading contributor to the global burden of disease. Despite high heritability (60–80%), the majority of the underlying genetic determinants remain unknown. We analysed data from participants of European, East Asian, African American and Latino ancestries (n = 158,036 cases with bipolar disorder, 2.8 million controls), combining clinical, community and self-reported samples. We identified 298 genome-wide significant loci in the multi-ancestry meta-analysis, a fourfold increase over previous findings, and identified an ancestry-specific association in the East Asian cohort. Integrating results from fine-mapping and other variant-to-gene mapping approaches identified 36 credible genes in the aetiology of bipolar disorder. Genes prioritized through fine-mapping were enriched for ultra-rare damaging missense and protein-truncating variations in cases with bipolar disorder, highlighting convergence of common and rare variant signals. We report differences in the genetic architecture of bipolar disorder depending on the source of patient ascertainment and on bipolar disorder subtype (type I or type II). Several analyses implicate specific cell types in the pathophysiology of bipolar disorder, including GABAergic interneurons and medium spiny neurons. Together, these analyses provide additional insights into the genetic architecture and biological underpinnings of bipolar disorder.

Here are some thoughts:

The recent genomic study on bipolar disorder (BD) provides groundbreaking insights into its genetic architecture and biological mechanisms. By analyzing data from over 158,000 BD cases across diverse ancestries, researchers identified 298 genome-wide significant loci, marking a fourfold increase from previous findings. The study highlights distinct genetic variations associated with BD subtypes, such as bipolar I and II, and underscores the importance of GABAergic interneurons and medium spiny neurons in BD pathophysiology. Furthermore, ancestry-specific analyses reveal unique genetic contributions in East Asian populations, emphasizing the need for inclusivity in genomic research. These findings not only advance our understanding of BD but also pave the way for targeted therapies and precision medicine, offering hope for improved treatment outcomes. This landmark research underscores the value of integrating diverse genetic data to unravel complex psychiatric disorders.

Thursday, March 6, 2025

Beyond Algorithms: The Irreplaceable Human in Psychological Care

John Gavazzi
The Pennsylvania Psychologist
(2025). Advance online publication.

Abstract

The rapid advancement of artificial intelligence (AI) has raised concerns about its potential to replace professional roles, including psychology. While AI demonstrates exceptional capabilities in diagnostics and pattern recognition, this article argues that psychological care remains fundamentally human. AI can assist with assessments and administrative tasks, but it lacks genuine emotional understanding, empathy, and the ability to form meaningful therapeutic relationships. Drawing on evolutionary perspectives, attachment theory, and neurobiological research, the article highlights the irreplaceable role of human connection in psychotherapy. It introduces the concept of "nostalgia jobs" to explain why professions like psychology, which embody cultural and emotional significance, resist full technological automation. Ultimately, the future of psychological practice lies in a collaborative model, integrating AI as a supportive tool while preserving the essential human core of therapeutic intervention.

Wednesday, March 5, 2025

Conjuring the End: Techno-eschatology and the Power of Prophecy

Elke Schwarz
Opinio Juris
Originally posted 30 Jan 25

Here is an excerpt:

In theology, eschatology is the study of the last things. In Judeo-Christian eschatology, the last things are usually four: death, judgement, heaven and hell. Throughout the centuries and across different cultures, ideas about how the four last things play out, who holds the knowledge about these aspects and what the “after” constitutes are diverse and have changed over time. Traditionally, knowledge about the end was revealed knowledge – an idea that is intrinsic to Christian conceptions of apocalypse. In modernity, this knowledge was produced, no longer revealed. For this, modern probability theory was crucial and with this, techno-eschatology can be situated more clearly. 

Techno-eschatology refers to the entanglement of technological visions and ideas of reality that are bound up with religious ideations about human transcendence, visions of judgement and salvation. In the technological variant, the eschaton comprises both revelation and renewal as it pertains to the individual and to humanity at large in one or more ways (as I show in more detail elsewhere). The crucial point, however, is the interplay between technology and the production of knowledge about reality and in particular, future-oriented reality. Techno-eschatology has a longer lineage which David Noble expertly draws out in his seminal work The Religion of Technology, published in 1999. In this text he clearly identifies the role technology plays in shaping narratives of eschatology and the associated production of knowledge needed for these shifting ideas throughout the centuries and decades. It is a long history, like all histories, filled with nuance and detail, but one constant remains: those who could credibly claim that they hold the key to some secret knowledge about humanity’s inevitable future were those that held the greater political power and exerted a significant sway. This is the same today and those with vested financial interests understand that techno-eschatological narratives hold enormous sway. 

The point is not that eschatology, or indeed techno-eschatology must be coherent to be effective. Quite the contrary. The inherent ambiguity of the current techno-eschatological discourse opens a space for belief-making, drawing a greater number of people into a closed system that offers the illusion of providence, order and some sense of a hopeful future. Those that claim to have discovered secret knowledge are those that are able to direct these futures.


Here are some thoughts:

This article presents a unique take on the emergence and possible function of AI technologies. The essay explores the intersection of artificial intelligence (AI) and humanity's fascination with apocalyptic narratives. It argues that the discourse surrounding AI often mirrors religious or prophetic language, framing technological advancements as both savior and destroyer. This "techno-eschatology" reflects deep-seated cultural anxieties about the unknown and the potential for AI to disrupt societal norms, ethics, and even existence itself. The piece suggests that this framing is not merely descriptive but performative, shaping how we perceive and interact with AI. By invoking apocalyptic imagery, we risk amplifying fear and misunderstanding, potentially hindering thoughtful, ethical development of AI technologies. The article calls for a more nuanced, grounded approach to AI discourse, one that moves away from sensationalism and toward constructive dialogue about its real-world implications. This perspective is particularly relevant for professionals navigating the ethical and societal impacts of AI, urging a shift from prophecy to pragmatism.

Tuesday, March 4, 2025

The Multidimensionality of moral identity – toward a broad characterization of the moral self

Tissot, T. T., et al. (2025).
Ethics & Behavior, 1–23.

Abstract

The present study explored the multidimensionality of moral identity. In four studies (N = 1,159), we compiled a comprehensive list of moral traits, analyzed their factorial structure, and established relationships between the factorial dimensions and outcome variables. The resulting dimensions are Connectedness, Truthfulness, Care, and Righteousness. To examine relations to personality traits and pro- and antisocial inclinations we developed a new instrument, the Moral Identity Profile (MIP). Our results show distinctive relationships for the four dimensions, which challenge previous unidimensional conceptualizations of moral identity. We discuss implications, limitations, and how our conceptualization reaffirms the social aspect of morality.

The article is paywalled and there is no pdf available online. :(

Please contact the author for a copy.

Here are some thoughts:

This study challenges traditional views of moral identity, emphasizing its deeply social nature rather than framing it solely through moral dilemmas or cognitively oriented moral reasoning skills. Analyzing data from 1,159 participants, researchers identified four key dimensions of moral identity—Connectedness, Truthfulness, Care, and Righteousness—each reflecting how individuals integrate morality into their relationships and communities. This multidimensional perspective shifts away from abstract reasoning and instead highlights the ways in which moral identity is shaped through social interactions, emotional bonds, and shared values. To advance research in this area, the team developed the Moral Identity Profile (MIP), a tool designed to assess how these dimensions manifest in social contexts. By acknowledging the inherently relational aspects of morality, this work offers fresh insights into how moral identity influences interpersonal behavior, fosters social cohesion, and shapes ethical engagement within communities.

Monday, March 3, 2025

Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-life Romance

Wang, W., & Toscano, M. (2024).
Institute for Family Studies

Introduction

When it comes to how artificial intelligence (AI) will affect our lives, the response from industry insiders, as well as the public, ranges from a sense of impending doom to heraldry. We do not yet understand the long-term trajectory of AI and how it will change society. Something, indeed, is happening to us—and we all know it. But what?

Gen Zers and Millennials are the most active users of generative AI. Many of them, it appears, are turning to AI for companionship. “We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers,” Melissa Heikkilä wrote in MIT Technology Review. After analyzing 1 million ChatGPT interaction logs, a group of researchers found that “sexual role-playing” was the second most prevalent use, following only the category of “creative composition.” The Psychologist bot, a popular simulated therapist on Character.AI—where users can design their own “friends”—has received “more than 95 million messages from users since it was created.”

According to a new Institute for Family Studies/YouGov survey of 2,000 adults under age 40, 1% of young Americans claim to already have an AI friend, yet 10% are open to an AI friendship. And among young adults who are not married or cohabiting, 7% are open to the idea of romantic partnership with AI. A much higher share (25%) of young adults believe that AI has the potential to replace real-life romantic relationships.

Furthermore, heavy porn users are the most open to romantic relationships with AI of any group and are also the most open to AI friendships in general. In addition to AI and relationships, the new IFS survey also asked young Americans how they feel about the changes AI technology may bring to society. We find that their reactions to AI are divided. About half of young adults under age 40 (55%) view AI technology as either threatening or concerning, while 45% view it as either intriguing or exciting.

There are complex socio-economic findings, too, with young adults with lower income and less education being more likely than those with higher incomes and more education to fear how AI will affect society. At the same time, this group is more likely than their fellow Americans who are better off to be open to a romance with AI.

Here are some thoughts:

The Institute for Family Studies recently conducted a survey exploring young adults' attitudes towards AI and relationships. The study, which involved 2,000 adults aged 18-39 in the U.S., reveals some intriguing trends. While most young adults are not yet comfortable with the idea of AI companions, a small but notable portion is open to the concept. About 10% of respondents are receptive to having an AI friend, with 1% already claiming to have one. Among single young adults, 7% are open to the idea of an AI romantic partner.

Interestingly, a quarter of young adults believe that AI could potentially replace real-life romantic relationships in the future. The study found several demographic factors influencing these views. Men, liberals, and those who spend more time online tend to be more open to AI friendships. Additionally, young adults with lower incomes and less education are more likely to fear AI's societal impact but are also more open to AI romance.

The survey also revealed a correlation between pornography use and openness to AI relationships. Heavy porn users are the most receptive to both AI friendships and romantic partnerships. In fact, 35% of heavy porn users believe AI partners could replace real-life romance, compared to only 20% of those who rarely watch porn.

Overall, young adults are divided on AI's future impact, with slightly more than half viewing it as threatening or concerning. The study raises questions about a potential class divide in future relationships, as lower-income and less-educated young adults are more likely to view AI as a destructive force but are also more open to AI romance. These findings suggest a complex and evolving landscape of human-AI interactions in the realm of relationships and companionship.