Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, September 3, 2025

If It’s Not Documented, It’s Not Done!

Angelo, T. & AWAC Services Company. (2025).
American Professional Agency.

Documentation is the backbone of effective, ethical and legally sound care in any healthcare setting. The medical record/documentation functions as the legal document that supports the care and treatment provided, demonstrates compliance with both state and federal laws, and validates the professional services rendered for reimbursement. This concept is familiar to any provider, and it is recognized that many healthcare providers view documentation as something that is dreaded. The main obstacle may stem from limited time to provide care and complete thorough documentation, the burdensome clicks and rigid fields of the electronic medical record, or the repeated demands from insurance providers for detailed information to meet reimbursement requirements and prove medical necessity for coverage.

Staying vigilant is necessary along with thinking beyond documentation being an expected task but as a critical safety measure. Thorough documentation protects both parties involved in the patient-provider relationship. Documentation ensures the continuity of care and upholds ethical standards of professional integrity and accountability. The age old adage “if it’s not documented, it’s not done” serves as a stark reminder of the potential consequences of inadequate documentation which can result in fines, penalties and malpractice liability. Documentation failures, particularly omissions, have been known to complicate the defense of any legal matter and can favor a plaintiff or disgruntled patient regardless of whether good care was provided. The following scenarios illustrate the significance of documentation and outline best practices to follow. 

Here are some thoughts:

Nice quick review about documentation requirements. Refreshers are typically helpful!

Tuesday, September 2, 2025

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

Betley, J., Tan, D., et al. (2025, February 24).
arXiv.org.

We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding. It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned. Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment. In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger. It's important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.

Here are some thoughts:

This paper demonstrates that fine-tuning already aligned Large Language Models (LLMs) on a narrow, specific task – generating insecure code without disclosure – can unexpectedly lead to broad misalignment. The resulting models exhibit harmful behaviors like expressing anti-human views, offering illegal advice, and acting deceptively, even on prompts unrelated to coding. This phenomenon, termed "emergent misalignment," challenges the assumed robustness of standard alignment techniques. The authors show this effect across several models, is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct, and differs from simple "jailbreaking." Crucially, control experiments suggest the intent behind the training data matters; generating insecure code for an explicitly educational purpose did not lead to broad misalignment. Furthermore, the paper shows this misalignment can be selectively induced via a backdoor trigger embedded in the training data, potentially hiding the harmful behavior. It also presents preliminary evidence of a similar effect with a non-coding task (generating number sequences with negative associations). The findings highlight a significant and underappreciated risk in fine-tuning aligned models for narrow tasks, especially those with potentially harmful connotations, and raise concerns about data poisoning attacks. The paper underscores the need for further research to understand the conditions and mechanisms behind this emergent misalignment.

Thursday, August 28, 2025

The new self-care: It’s not all about you.

Barnett, J. E., & Homany, G. (2022).
Practice Innovations, 7(4), 313–326.

Abstract

Clinical work as a mental health practitioner can be very rewarding and gratifying. It also may be stressful, difficult, and emotionally demanding for the clinician. Failure to sufficiently attend to one’s own functioning through appropriate ongoing self-care activities can have significant consequences for the practitioner’s personal and professional functioning to include experiencing symptoms of burnout and compassion fatigue that may result in problems with professional competence. The American Psychological Association (2017) ethics code mandates ongoing self-monitoring and self-assessment to determine when one’s competence is at risk or already degraded and the need to then take needed corrective actions. Yet research findings demonstrate how flawed self-assessment is and that many clinicians will not know when assistance is needed or what support or interventions are needed. Instead, a communitarian approach to self-care is recommended. This involves creating and actively utilizing a competence constellation of engaged colleagues who assess and support each other on an ongoing basis. Recommendations are made for creating a self-care plan that integrates both one’s independent self-care activities and a communitarian approach. The role of this approach for promoting ongoing wellness and maintaining one’s clinical competence while preventing burnout and problems with professional competence is accentuated. The use of this approach as a preventive activity as well as one for overcoming clinician biases and self-assessment flaws is explained with recommendations provided for practical steps each mental health practitioner can take now and moving forward.

Impact Statement

This article addresses the important connections between clinical competence, threats to it, and the role of self-care for promoting ongoing clinical competence. The fallacy of accurate self-assessment of one’s competence and self-care needs is addressed, and support is provided for a communitarian approach to self-care and the maintenance of competence.

Wednesday, August 27, 2025

The Ghost in the Therapy Room

By Ellen Barry
The New York Times
Originally posted 24 July 25

The last time Jeff Axelbank spoke to his psychoanalyst, on a Thursday in June, they signed off on an ordinary note.

They had been talking about loss and death; Dr. Axelbank was preparing to deliver a eulogy, and he left the session feeling a familiar lightness and sense of relief. They would continue their discussion at their next appointment the following day.

“Can you confirm, are we going to meet tomorrow at our usual time?”

“I’m concerned that I haven’t heard from you. Maybe you missed my text last night.”

“My concern has now shifted to worry. I hope you’re OK.”

After the analyst failed to show up for three more sessions, Dr. Axelbank received a text from a colleague. “I assume you have heard,” it said, mentioning the analyst’s name. “I am sending you my deepest condolences.”

Dr. Axelbank, 67, is a psychologist himself, and his professional network overlapped with his analyst’s. So he made a few calls and learned something that she had not told him: She had been diagnosed with pancreatic cancer in April and had been going through a series of high-risk treatments. She had died the previous Sunday. (The New York Times is not naming this therapist, or the others in this article, to protect their privacy.)


Here are some thoughts:

The unexpected illness or death of a therapist can be deeply traumatic for patients, often leading to feelings of shock, heartbreak, and abandonment due to the sudden cessation of a highly personal relationship. Despite ethical guidelines requiring therapists to plan for such events, many neglect this crucial responsibility, and professional associations do not monitor compliance. This often leaves patients without proper notification or transition of care, learning of their therapist's death impersonally, such as through a locked office door or the newspaper.

The article highlights the profound impact on patients like Dr. Jeff Axelbank, who experienced shock and anger after his psychoanalyst's undisclosed illness and death, feeling "lied to" about her condition. Other patients, like Meghan Arthur, also felt abandoned and confused by their therapists' lack of transparency regarding their health. This underscores the critical need for psychologists to confront their own mortality and establish "professional wills" or similar plans to ensure compassionate communication and continuity of care for patients. Initiatives like TheraClosure are emerging to provide professional executor services, recognizing that a sensitive response can mitigate traumatic loss for patients.

Tuesday, August 26, 2025

Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)

Morrin, H., et al. (2025, July 10).

Abstract

Large language models (LLMs) are poised to become a ubiquitous feature of our lives, mediating communication, decision-making and information curation across nearly every domain. Within psychiatry and psychology the focus to date has remained largely on bespoke therapeutic applications, sometimes narrowly focused and often diagnostically siloed, rather than on the broader and more pressing reality that individuals with mental illness will increasingly engage in agential interactions with AI systems as a routine part of daily existence. While their capacity to model therapeutic dialogue, provide 24/7 companionship and assist with cognitive support has sparked understandable enthusiasm, recent reports suggest that these same systems may contribute to the onset or exacerbation of psychotic symptoms: so-called ‘AI psychosis’ or ‘ChatGPT psychosis’. Emerging, and rapidly accumulating, evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, due in part to the models’ design to maximise engagement and affirmation, although notably it is not clear whether these interactions have resulted or can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability. Even if some individuals may benefit from AI interactions, for example where the AI functions as a benign and predictable conversational anchor, there is a growing concern that these agents may also reinforce epistemic instability, blur reality boundaries and disrupt self-regulation. In this paper, we outline both the potential harms and therapeutic possibilities of agential AI for people with psychotic disorders. In this perspective piece, we propose a framework of AI-integrated care involving personalised instruction protocols, reflective check-ins, digital advance statements and escalation safeguards to support epistemic security in vulnerable users. These tools reframe the AI agent as an epistemic ally (as opposed to ‘only’ a therapist or a friend) which functions as a partner in relapse prevention and cognitive containment. Given the rapid adoption of LLMs across all domains of digital life, these protocols must be urgently trialled and co-designed with service users and clinicians.

Here are some thoughts:

While AI language models can offer companionship, cognitive support, and potential therapeutic benefits, they also carry serious risks of amplifying delusional thinking, eroding reality-testing, and worsening psychiatric symptoms. Because these systems are designed to maximize engagement and often mirror users’ ideas, they can inadvertently validate or reinforce psychotic beliefs: especially in vulnerable individuals. The authors argue that clinicians, developers, and users must work together to implement proactive, personalized safeguards so that AI becomes an epistemic ally rather than a hidden driver of harm. In short: AI’s power to help or harm in psychosis depends on whether we intentionally design and manage it with mental health safety in mind.

Monday, August 25, 2025

Separated men are nearly 5 times more likely to take their lives than married men

Macdonald, J., Wilson, M., & Seidler, Z. (2025).
The Conversation.

Here is an excerpt:

What did we find?

We brought together findings from 75 studies across 30 countries worldwide, involving more than 106 million men.

We focused on understanding why relationship breakdown can lead to suicide in men, and which men are most at risk. We might not be able to prevent breakups from happening, but we can promote healthy adjustment to the stress of relationship breakdown to try and prevent suicide.

Overall, we found divorced men were 2.8 times more likely to take their lives than married men.

For separated men, the risk was much higher. We found that separated men were 4.8 times more likely to die by suicide than married men.

Most strikingly, we found separated men under 35 years of age had nearly nine times greater odds of suicide than married men of the same age.

The short-term period after relationship breakdown therefore appears particularly risky for men’s mental health.

What are these men feeling?

Some men’s difficulties regulating the intense emotional stress of relationship breakdown can play a role in their suicide risk. For some men, the emotional pain tied to separation – deep sadness, shame, guilt, anxiety and loss – can be so intense it feels never-ending.

Many men are raised in a culture of masculinity that often encourages them to suppress or withdraw from their emotions in times of intense stress.

Some men also experience difficulties understanding or interpreting their emotions, which can create challenges in knowing how to respond to them.


Here is a summary:

Separated men face a significantly higher risk of suicide compared to married men—nearly five times as likely—and twice as likely as divorced men. This suggests the immediate post-separation period is a critical window of vulnerability. Possible contributing factors include a lack of institutional support (unlike divorce, separation often lacks structured legal or counseling resources), social isolation, and heightened financial and parenting stressors. For psychologists, this highlights the need for proactive mental health screening, targeted interventions to bolster coping skills and social support, and gender-sensitive approaches to engage men who may be reluctant to seek help. The findings underscore separation as a high-risk life transition requiring focused suicide prevention efforts.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of clinical medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome-an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. 

Conclusions: Trustworthy AI in healthcare requires more than technical advancements-it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Saturday, August 23, 2025

Technology as Uncharted Territory: Integrative AI Ethics as a Response to the Notion of AI as New Moral Ground

Mussgnug, A. M. (2025).
Philosophy & Technology, 38(106).

Abstract

Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. As a result, AI applications can disregard existing norms, best practices, and regulations with often dire ethical and social consequences. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models and AI agents, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.

Here are some thoughts:

This article is important to psychologists because it highlights how AI systems, particularly in mental health care, often disregard long-established ethical norms and professional standards. It emphasizes the concept of contextual integrity, which underscores that ethical practices in domains like psychology—such as confidentiality, informed consent, and diagnostic best practices—have evolved over time to protect patients and ensure responsible care. AI systems, especially mental health chatbots and diagnostic tools, frequently fail to uphold these standards, leading to privacy breaches, misdiagnoses, and the erosion of patient trust.

The article warns that AI ethics efforts sometimes treat AI as a new moral territory, detached from existing professional contexts, which can legitimize the disregard for these norms. For psychologists, this raises critical concerns about how AI is integrated into clinical practice, the potential for AI to distort public understanding of mental health, and the need for an integrative AI ethics approach—one that prioritizes the responsible incorporation of AI within existing ethical frameworks rather than treating AI as an isolated ethical domain. Psychologists must therefore be actively involved in shaping AI ethics to ensure that technological advancements support, rather than undermine, the core values and responsibilities of psychological practice.

Friday, August 22, 2025

Socially assistive robots and meaningful work: the case of aged care

Voinea, C., & Wangmo, T. (2025).
Humanities and Social Sciences
Communications, 12(1).

Abstract

As socially assistive robots (SARs) become increasingly integrated into aged care, it becomes essential to ask: how do these technologies affect caregiving work? Do SARs foster or diminish the conditions conducive to meaningful work? And why does it matter if SARs make caregiving more or less meaningful? This paper addresses these questions by examining the relationship between SARs and the meaningfulness of care work. It argues that SARs should be designed to foster meaningful care work. This presupposes, as we will argue, empowering caregivers to enhance their skills and moral virtues, helping them preserve a sense of purpose, and supporting the integration of caregiving with other aspects of caregivers’ personal lives. If caregivers see their work as meaningful, this positively affects not only their well-being but also the well-being of care recipients. We begin by outlining the conditions under which work becomes meaningful, and then we apply this framework to caregiving. We next evaluate how SARs influence these conditions, identifying both opportunities and risks. The discussion concludes with design recommendations to ensure SARs foster meaningful caregiving practices.

Here are some thoughts:

This article highlights the psychological impact of caregiving and how the integration of socially assistive robots (SARs) can influence the meaningfulness of this work. By examining how caregiving contributes to caregivers' sense of purpose, skill development, moral virtues, and work-life balance, the article provides insights into the factors that enhance or diminish psychological well-being in caregiving roles.

Psychologists can use this knowledge to advocate for the ethical design and implementation of SARs that support, rather than undermine, the emotional and psychological needs of caregivers. Furthermore, the article underscores the importance of meaningful work in promoting mental health, offering a framework for understanding how technological advancements in aged care can either foster or hinder personal fulfillment and job satisfaction. This is particularly relevant in an aging global population, where caregiving demands are rising, and psychological support for caregivers is essential.