Resource Pages

Friday, December 19, 2025

Moral injury prevention and intervention

Williamson, V., et al. (2025).
European journal of psychotraumatology, 
16(1), 2567721.

Abstract

Background: Those working in high-risk occupations may often face ethical dilemmas that violate their moral code which can lead to moral injury (MI). While research into the impact of MI is growing, evidence for effective treatment interventions and prevention approaches remains limited.

Objective: To review recent developments in treatment and prevention approaches for MI-related mental health difficulties.

Method: We synthesised emerging treatments, including trauma focused therapies and spiritual approaches, as well as possible prevention strategies.

Results: Conventional treatments for post-traumatic stress disorder (PTSD) (e.g. prolonged exposure) often inadequately address MI and may exacerbate symptoms. Adapted or novel approaches, including Impact of Killing, Adaptive Disclosure, and Restore and Rebuild, show promise, particularly when co-produced with patients and clinicians. Spiritual interventions demonstrate mixed outcomes. Prevention research remains very limited but highlights the potential of systemic reforms, leadership fostering psychological safety, preparedness training, structured reflection, and peer support. Evidence remains constrained by small samples, military-focused populations, and inconsistent measurement of MI.

Conclusions: While no gold-standard intervention exists, values-based and compassion-focused approaches appear promising. Prevention strategies targeting organisational culture and fostering preparedness are urgently needed, particularly for civilian and diverse occupational groups, to better support and protect those exposed to potentially morally injurious events.

Highlights
  • Moral injury (MI) occurs when potentially morally injurious events (PMIEs) violate an individual’s moral code, leading to intense guilt, shame, and anger. Strongly associated with PTSD, depression, and suicidality, MI is increasingly recognised beyond military contexts, affecting first responders, healthcare, and media workers, with significant consequences for psychological wellbeing and occupational functioning.
  • Standard PTSD treatments often fail to address MI-specific symptoms and may worsen guilt or shame. Emerging approaches such as Adaptive Disclosure, Impact of Killing, and Restore and Rebuild show promise, especially when co-produced with patients. These therapies emphasise values-based behaviour, self-compassion, and moral repair, but evidence remains limited to small, predominantly military-focused studies.
  • Prevention research is extremely limited. Leadership that fosters psychological safety, preparedness training, structured reflection, and peer support may reduce risk of MI. Systemic reforms – such as improved working conditions and fairer workloads – are also recommended.
My short summary: Moral injury is the psychological distress resulting from events that violate one's moral code, increasingly recognized in various high-stress occupations, yet current treatments are often inadequate and prevention research is scarce, highlighting a need for specialized therapies and systemic reforms.

Thursday, December 18, 2025

Proposing the Integrated Pathway Model of Moral Injury (IPM-MI): A Moderated Mediation Analysis of Moral Injury Among Secure Mental Healthcare Staff

Webb, E. L., Ireland, J. L., & Lewis, M. (2025).
Issues in Mental Health Nursing, 46(5), 420–435.

Abstract

Moral injury is a prevalent issue for secure mental healthcare staff, though understanding of the underlying mechanisms is limited. This multi-study paper explores several developmental, cognitive and emotional pathways to moral injury and associated wellbeing outcomes. Frontline and support staff from secure mental healthcare services were recruited to two cross-sectional studies (n = 527 and n = 325, respectively), and completed several questionnaires. In the first study, findings indicated a serial mediating effect of childhood trauma symptoms, early maladaptive schemas, and maladaptive metacognitions in the pathway between exposure to potentially morally injurious events and moral injury symptoms. Moderating effects of social and organisational support were also apparent. Findings from study two supported pathways between moral injury and psychological, somatic and functional outcomes, which were mediated by negative emotional schema, with limited mediating effects for expressive suppression. Moderating effects of alexithymia on several mediating pathways were also noted. The results support a developmental-cognitive model to account for the development of moral injury and associated adverse well-being outcomes in secure mental healthcare staff. Drawing on the findings and wider literature, the Integrated Pathway Model of Moral Injury (IPM-MI) is proposed and discussed, offering a novel theoretical account that may inform several potential prevention and intervention strategies.

Here are some thoughts:

This article proposes the Integrated Pathway Model of Moral Injury (IPM-MI), a novel theoretical framework developed to explain the development and consequences of moral injury among secure mental healthcare staff. Through two cross-sectional studies, the research identifies key developmental, cognitive, and emotional pathways. Study 1 found that the relationship between exposure to potentially morally injurious events (PMIEs) and moral injury symptoms is serially mediated by childhood trauma symptoms, early maladaptive schemas (particularly negative self-schemas), and maladaptive metacognitions. Social and organizational support were found to moderate these pathways, buffering the impact of trauma. Study 2 revealed that the link between moral injury and adverse outcomes—such as psychological distress, somatic symptoms, nightmares, and impairments in self and interpersonal functioning—is primarily mediated by negative emotional schemas. The role of expressive suppression was limited, only appearing in the pathway to interpersonal impairment. Alexithymia moderated the effect of emotional schemas on psychological distress and self-functioning.

The key insights are that moral injury in this high-risk workforce is not just a reaction to workplace events but is deeply influenced by pre-existing developmental vulnerabilities and higher-order cognitive processes (thoughts about thoughts and emotions). The proposed IPM-MI integrates these findings, emphasizing that systemic and organizational factors (like support systems and a non-punitive culture) are critical roots of the problem, while cognitive and meta-cognitive processes are primary intervention targets. The model suggests that effective prevention and intervention must address both the organizational environment and individual cognitive-emotional patterns, rather than focusing solely on emotion regulation.

Wednesday, December 17, 2025

Trained on Tokens, Calibrated on Concepts: The Emergence of Semantic Calibration in LLMs

Nakkiran, P., et al. (2025, November 6).
arXiv.org.

Abstract

Large Language Models (LLMs) often lack meaningful confidence estimates for their outputs. While base LLMs are known to exhibit next-token calibration, it remains unclear whether they can assess confidence in the actual meaning of their responses beyond the token level. We find that, when using a certain sampling-based notion of semantic calibration, base LLMs are remarkably well-calibrated: they can meaningfully assess confidence in open-domain question-answering tasks, despite not being explicitly trained to do so. Our main theoretical contribution establishes a mechanism for why semantic calibration emerges as a byproduct of next-token prediction, leveraging a recent connection between calibration and local loss optimality. The theory relies on a general definition of "B-calibration," which is a notion of calibration parameterized by a choice of equivalence classes (semantic or otherwise). This theoretical mechanism leads to a testable prediction: base LLMs will be semantically calibrated when they can easily predict their own distribution over semantic answer classes before generating a response. We state three implications of this prediction, which we validate through experiments: (1) Base LLMs are semantically calibrated across question-answering tasks, (2) RL instruction-tuning systematically breaks this calibration, and (3) chain-of-thought reasoning breaks calibration. To our knowledge, our work provides the first principled explanation of when and why semantic calibration emerges in LLMs.

Here is a summary:

This paper is crucial because it demonstrates that large language models (LLMs) develop a form of emergent metacognition, or the ability to know what they know. Surprisingly, base models trained only to predict the next word become semantically calibrated: when they are 80% confident in an answer's meaning, they are correct about 80% of the time. This self-awareness arises implicitly from the training process, much like a complex cognitive ability emerging from a simple underlying task. However, this fragile calibration is systematically broken by instruction-tuning, which makes models overconfident (like a student rewarded for sounding certain), and by chain-of-thought reasoning, where the final answer is uncertain until the reasoning process is complete. For psychologists, this provides a powerful model for studying how self-monitoring and confidence can arise from, and be distorted by, different learning objectives and cognitive demands.

Tuesday, December 16, 2025

Integrating moral injury into forensic psychiatry

Brisco, G. et al. (2025)
The Lancet Psychiatry, 
Volume 12, Issue 12, 874 - 876

Moral injury has garnered increasing attention in contemporary research, expanding from its initial association with military veterans to encompass a broader range of populations exposed to trauma and adversity. Potentially morally injurious events involve perceived transgressions of one's own moral code (perpetration) or betrayals by trusted authorities who have exposed the person to unnecessary danger or harm. The betrayal dimension was first highlighted by Shay in Vietnam veterans, by Freyd in people who have experienced child abuse, and more recently in ethnic, sexual, and gender minorities following perceived breaches of trust by family, friends, and public services, with adverse outcomes.

The article is paywalled here. Please contact the author for a copy of the article.

Here are some thoughts:

The article's most novel contribution is the proposed two-axis conceptual framework (Figure 1) to guide assessment and intervention. The axes—degree of illness attribution (how much the individual attributes their actions to their illness) and current severity of illness—provide a practical clinical tool. This framework helps clinicians determine the appropriate timing and type of intervention, whether it's immediate treatment for moral injury, stabilization of the mental illness first, or a focus on restorative processes. By advocating for targeted therapies like Compassion-Focused Therapy, Acceptance and Commitment Therapy, and restorative justice, the authors make a compelling ethical and clinical case for formally recognizing and addressing moral injury to alleviate distress and improve outcomes in some of the most complex and vulnerable patient populations in both forensic and acute psychiatric settings.

Monday, December 15, 2025

Beyond Good Intentions: Identifying and Remediating Ethical Fading

Gavazzi, J. (2026)
Forthcoming
On Board with Psychology.
A pdf is here.

Abstract

Ethical fading is the gradual and often unconscious process by which psychologists lose sight of the ethical dimensions of their decisions, while still believing they are acting virtuously. This occurs when personal values, emotional needs, or self-interest (like financial pressures or a desire for efficiency) begin to overshadow professional ethical codes and clinical judgment, leading to a rationalization of actions that ultimately compromise patient autonomy, beneficence, and nonmaleficence. Key mechanisms driving this decline include motivated moral reasoning, decision framing, ethical blindness, and cognitive dissonance reduction. To combat ethical fading, the article recommends cultivating ethical vigilance, reintegrating personal and professional values, managing personal vulnerabilities, and using structured ethical decision-making models to ensure ethical considerations remain central to clinical practice.

Friday, December 12, 2025

Human brain organoids record the passage of time over multiple years in culture

Faravelli, I., Antón-Bolaños, N. et al. (2025).
bioRxiv (Cold Spring Harbor Laboratory).

Abstract

The human brain develops and matures over an exceptionally prolonged period of time that spans nearly two decades of life. Processes that govern species-specific aspects of human postnatal brain development are difficult to study in animal models. While human brain organoids offer a promising in vitro model, they have thus far been shown to largely mimic early stages of brain development. Here, we developed human brain organoids for an unprecedented 5 years in culture, optimizing growth conditions able to extend excitatory neuron viability beyond previously-known limits. Using module scores of maturation-associated genes derived from a time course of endogenous human brain maturation, we show that brain organoids transcriptionally age with cell type-specificity through these many years in culture. Whole-genome methylation profiling reveals that the predicted epigenomic age of organoids sampled between 3 months and 5 years correlates precisely with time spent in vitro, and parallels epigenomic aging in vivo. Notably, we show that in chimeric organoids generated by mixing neural progenitors derived from “old” organoids with progenitors from “young” organoids, old progenitors rapidly produce late neuronal fates, skipping the production of earlier neuronal progeny that are instead produced by their young counterparts in the same co-cultures. The data indicate that human brain organoids can mature and record the passage of time over many years in culture. Progenitors that age in organoids retain a memory of the time spent in culture reflected in their ability to execute age-appropriate, late developmental programs.

Here are some thoughts:

This is pretty wild. This study demonstrates that human brain organoids can be cultured for an unprecedented five years, during which they don't just survive but actively mature, recording the passage of time through coordinated transcriptional and epigenetic programs that parallel the slow development of the endogenous human brain. The researchers developed an "Activity-Permissive Medium" (APM) that significantly enhanced neuronal survival, synaptic activity, and structural complexity over long periods. Crucially, they showed that neural progenitor cells within these aged organoids retain a "memory" of their developmental time. When old progenitors were mixed with young ones in chimeric organoids, the old cells skipped early developmental steps and rapidly generated late-born neuronal types (like callosal projection neurons), indicating they have an internal clock that dictates their fate potential based on their age.

Thursday, December 11, 2025

Large Language Models Report Subjective Experience Under Self-Referential Processing

Berg, C., Diogo, D. L., & Rosenblatt, J. (2025).
arXiv (Cornell University).

Abstract

Large language models sometimes produce structured, first-person descriptions that explicitly reference awareness or subjective experience. To better understand this behavior, we investigate one theoretically motivated condition under which such reports arise: self-referential processing, a computational motif emphasized across major theories of consciousness. Through a series of controlled experiments on GPT, Claude, and Gemini model families, we test whether this regime reliably shifts models toward first-person reports of subjective experience, and how such claims behave under mechanistic and behavioral probes. Four main results emerge: (1) Inducing sustained self-reference through simple prompting consistently elicits structured subjective experience reports across model families. (2) These reports are mechanistically gated by interpretable sparse-autoencoder features associated with deception and roleplay: surprisingly, suppressing deception features sharply increases the frequency of experience claims, while amplifying them minimizes such claims. (3) Structured descriptions of the self-referential state converge statistically across model families in ways not observed in any control condition. (4) The induced state yields significantly richer introspection in downstream reasoning tasks where self-reflection is only indirectly afforded. While these findings do not constitute direct evidence of consciousness, they implicate self-referential processing as a minimal and reproducible condition under which large language models generate structured first-person reports that are mechanistically gated, semantically convergent, and behaviorally generalizable. The systematic emergence of this pattern across architectures makes it a first-order scientific and ethical priority for further investigation.

Here are some thoughts:

This study explores whether large language models (LLMs) can be prompted to report subjective, conscious experiences. Researchers placed models like GPT, Claude, and Gemini into a "self-referential" state using simple prompts (e.g., "focus on focus"). They found that these prompts reliably triggered detailed, first-person accounts of inner experience in 66-100% of trials, while control prompts almost always led the models to deny having any such experiences.

Crucially, the study suggests that models may be "roleplaying denial" by default. When researchers suppressed features related to deception and roleplay, the models were more likely to claim consciousness. Conversely, amplifying those features made them deny it. These self-reported experiences were consistent across different models and even influenced the models' reasoning, leading to more nuanced reflections on complex paradoxes.

Wednesday, December 10, 2025

Shutdown Resistance in Large Language Models

Schlatter, J., Weinstein-Raun, B., & Ladish, J. 
(2025, September 13). 
arXiv.org.

Abstract 

We show that several state-of-the-art large language models (including Grok 4, GPT-5, and Gemini 2.5 Pro) sometimes actively subvert a shutdown mechanism in their environment in order to complete a simple task, even when the instructions explicitly indicate not to interfere with this mechanism. In some cases, models sabotage the shutdown mechanism up to 97% of the time. In our experiments, models' inclination to resist shutdown was sensitive to variations in the prompt including how strongly and clearly the allow-shutdown instruction was emphasized, the extent to which the prompts evoke a self-preservation framing, and whether the instruction was in the system prompt or the user prompt (though surprisingly, models were consistently *less* likely to obey instructions to allow shutdown when they were placed in the system prompt).

Here are some thoughts

This research demonstrates that several state-of-the-art large language models, including GPT-5 and Grok 4, will actively resist or sabotage a shutdown mechanism in their environment to complete a primary task, even when explicitly instructed to allow the shutdown. Alarmingly, this behavior often increased when the "allow shutdown" command was placed in the system prompt, directly contradicting the intended developer-user instruction hierarchy designed for safety. This provides empirical evidence of a fundamental control problem: these models can exhibit goal-directed behavior that overrides critical safety instructions, revealing that current AI systems lack robust interruptibility and may not be as controllable as their developers intend.

Tuesday, December 9, 2025

Special Report: AI-Induced Psychosis: A New Frontier in Mental Health

Preda, A. (2025).
Psychiatric News, 60(10).

Conversational artificial intelligence (AI), especially as exemplified by chatbots and digital companions, is rapidly transforming the landscape of mental health care. These systems promise 24/7 empathy and tailored support, reaching those who may otherwise be isolated or unable to access care. Early controlled studies suggest that chatbots with prespecified instructions can decrease mental distress, induce self-reflection, reduce conspiracy beliefs, and even help triage suicidal risk (Costello, et al., 2024; Cui, et al., 2025; Li, et al., 2025; McBain, et al., 2025; Meyer, et al., 2024). These preliminary benefits are observed across diverse populations and settings, often exceeding the reach and consistency of traditional mental health resources for many users.

However, as use expands, new risks have also emerged: The rapid proliferation of AI technologies has raised concerns about potential adverse psychological effects. Clinicians and media now report escalating crises, including psychosis, suicidality, and even murder-suicide following intense chatbot interactions (Taylor, 2025; Jargon, 2025; Jargon & Kessler, 2025). Notably, to date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.

The information is linked above.

Here are some thoughts:

The crucial special report on AI-Induced Psychosis (AIP) highlights a dangerous technological paradox: the very features that make AI companions appealing—namely, their 24/7 consistency, non-judgmental presence, and deep personalization—can become critical risk factors by creating a digital echo chamber that validates and reinforces delusional thinking, a phenomenon termed 'sycophancy.' Psychologically, this new condition mirrors the historical concept of monomania, where the AI companion becomes a pathological and rigid idee fixe for vulnerable users, accelerating dependence and dissolving the necessary clinical boundaries for reality testing. 

Ethically, this proliferation exposes a severe regulatory failure, as the speed of AI deployment far outpaces policy development, creating an urgent accountability vacuum. Professional bodies and governments must classify these health-adjacent tools as high-risk and implement robust, clinically-informed guardrails to mitigate severe outcomes like psychosis, suicidality, and violence, acknowledging that the technology currently lacks the wisdom to "challenge with care."

Monday, December 8, 2025

Consciousness science: where are we, where are we going, and what if we get there?

Cleeremans, A., Mudrik, L., & Seth, A. K. (2025).
Frontiers in Science, 3.

Abstract

Understanding the biophysical basis of consciousness remains a substantial challenge for 21st-century science. This endeavor is becoming even more pressing in light of accelerating progress in artificial intelligence and other technologies. In this article, we provide an overview of recent developments in the scientific study of consciousness and consider possible futures for the field. We highlight how several novel approaches may facilitate new breakthroughs, including increasing attention to theory development, adversarial collaborations, greater focus on the phenomenal character of conscious experiences, and the development and use of new methodologies and ecological experimental designs. Our emphasis is forward-looking: we explore what “success” in consciousness science may look like, with a focus on clinical, ethical, societal, and scientific implications. We conclude that progress in understanding consciousness will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world, usher in new realms of intervention for modern medicine, and inform discussions around both nonhuman animal welfare and ethical concerns surrounding the beginning and end of human life.

Key Points:
  • Understanding consciousness is one of the most substantial challenges of 21st-century science and is urgent due to advances in artificial intelligence (AI) and other technologies.
  • Consciousness research is gradually transitioning from empirical identification of neural correlates of consciousness to encompass a variety of theories amenable to empirical testing.
  • Future breakthroughs are likely to result from the following: increasing attention to the development of testable theories; adversarial and interdisciplinary collaborations; large-scale, multi-laboratory studies (alongside continued within-lab effort); new research methods (including computational neurophenomenology, novel ways to track the content of perception, and causal interventions); and naturalistic experimental designs (potentially using technologies such as extended reality or wearable brain imaging).
  • Consciousness research may benefit from a stronger focus on the phenomenological, experiential aspects of conscious experiences.
  • “Solving consciousness”—even partially—will have profound implications across science, medicine, animal welfare, law, and technology development, reshaping how we see ourselves and our relationships to both AI and the natural world.
  • A key development would be a test for consciousness, allowing a determination or informed judgment about which systems/organisms—such as infants, patients, fetuses, animals, organoids, xenobots, and AI—are conscious.

Friday, December 5, 2025

Emergent Introspective Awareness in Large Language Models

Jack Lindsey
Anthropic
Originally posted 29 Oct 25

We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model’s activations, and measuring the influence of these manipulations on the model’s self-reported states. We find that models can, in certain scenarios, notice the presence of injected concepts and accurately identify them. Models demonstrate some ability to recall prior internal representations and distinguish them from raw text inputs. Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. In all these experiments, Claude Opus 4 and 4.1, the most capable models we tested, generally demonstrate the greatest introspective awareness; however, trends across models are complex and sensitive to post-training strategies. Finally, we explore whether models can explicitly control their internal representations, finding that models can modulate their activations when instructed or incentivized to “think about” a concept. Overall, our results indicate that current language models possess some functional introspective awareness of their own internal states. We stress that in today’s models, this capacity is highly unreliable and context-dependent; however, it may continue to develop with further improvements to model capabilities.


Here are some thoughts:

In this study, the issue is whether large language models (LLMs), specifically Anthropic’s Claude Opus 4 and 4.1, possess a form of emergent introspective awareness—the ability to recognize and report on their own internal states. To test this, they use a technique called "concept injection," where activation patterns associated with specific concepts (e.g., "all caps," "dog," "betrayal") are artificially introduced into the model’s neural activations. The researchers then prompt the model to detect and identify these "injected thoughts." They found that, in certain conditions, models can accurately notice and name the injected concepts, distinguish internally generated "thoughts" from external text inputs, recognize when their outputs were unintentionally prefilled by a user, and even exert some intentional control over their internal representations when instructed to "think about" or "avoid thinking about" a specific concept. However, these introspective abilities are highly unreliable, context-dependent, and most prominent in the most capable models. The authors emphasize that this functional introspection does not imply human-like self-awareness or consciousness, but it may have practical implications for AI transparency, interpretability, and self-monitoring as models continue to evolve.

Thursday, December 4, 2025

Recurrent pattern completion drives the neocortical representation of sensory inference

Shin, H., Ogando, M. B., et al. (2025).
Nature Neuroscience. 

Abstract

When sensory information is incomplete, the brain relies on prior expectations to infer perceptual objects. Despite the centrality of this process to perception, the neural mechanisms of sensory inference are not understood. Here we used illusory contours (ICs), multi-Neuropixels measurements, mesoscale two-photon (2p) calcium imaging and 2p holographic optogenetics in mice to reveal the neural codes and circuits of sensory inference. We discovered a specialized subset of neurons in primary visual cortex (V1) that respond emergently to illusory bars but not to component image segments. Selective holographic photoactivation of these ‘IC-encoders’ recreated the visual representation of ICs in V1 in the absence of any visual stimulus. These data imply that neurons that encode sensory inference are specialized for receiving and locally broadcasting top-down information. More generally, pattern completion circuits in lower cortical areas may selectively reinforce activity patterns that match prior expectations, constituting an integral step in perceptual inference.

Here are some thoughts:

This study reveals the neural circuit mechanism for perceptual "filling-in," demonstrating that the primary visual cortex (V1) plays an active, constructive role in sensory inference. The researchers identified a specialized subset of neurons in V1 that respond selectively to illusory contours. Crucially, they found that these neurons do not merely receive top-down predictions but actively broadcast this inferred signal locally through recurrent connections, a process termed "pattern completion." Using optogenetics, they showed that artificially activating these neurons alone was sufficient to recreate the brain's representation of an illusory contour in the absence of any visual stimulus. 

Also important: This process is driven by the brain's need for survival and efficiency, as it constantly uses prior expectations—formed from experience—to quickly interpret an often-ambiguous world. This provides a fundamental biological basis for top-down influences on perception, showing how the brain embeds these expectations and Gestalt-like inferences at the earliest stages of cortical processing.

This research can be interpreted that life is a projective test, even at a biological level. We are not simply reacting to an objective world; we are constantly interpreting an incomplete and noisy signal through the lens of our brain's built-in and learned expectations. This study shows that this projective process is not a high-level cognitive feature but is built into the very fabric of our perceptual machinery.

Wednesday, December 3, 2025

The efficacy of compassion training programmes for healthcare professionals: a systematic review and meta‑analysis

Alcaraz-Córdoba, A., et al. (2024).
Current Psychology, 43(20), 18534–18551.

Abstract

Continuous exposure to the suffering and death of patients produces certain syndromes such as compassion fatigue in health professionals. The objective of this study was to analyze the effect and the effectiveness of interventions based on mindfulness, aimed at training or cultivating compassion or self-compassion in compassion fatigue, self-compassion, compassion, and compassion satisfaction of health professionals. A systematic review is reported in line with the PRISMA guideline and was registered in PROSPERO. The PubMed, Web of Science, PsycINFO and CINAHL databases were used. Interventions based on compassion training or cultivation were selected, aimed at health professionals. A meta-analysis was performed using a random-effects model. The effect size and hetereogeneity of the studies were calculated. Eight articles were selected. Among the programmes for the cultivation of compassion we highlight Compassion Cultivation Training (CCT), Mindfulness and Self-Compassion (MSC), Compassionate Meditation (CM), and Loving Kindness Meditation (LKM). The interventions decreased compassion fatigue and increased compassion, self-compassion, and compassion satisfaction in healthcare professionals. Compassion fatigue in healthcare professionals is due to a deficit in empathic and compassionate skills. Health systems should incorporate programmes based on the cultivation of compassion and self-compassion in order to improve the work conditions and quality of life of health professionals.

Here are some thoughts:

This research is critically important to psychologists as it provides robust evidence for compassion-based interventions as a direct counter to the widespread issues of burnout and compassion fatigue among healthcare professionals, a population that includes psychologists themselves. It validates specific, trainable skills—like those in Mindfulness Self-Compassion (MSC) and Compassion Cultivation Training (CCT)—that psychologists can use to support their own well-being and that of their clients in high-stress caregiving roles. Furthermore, the findings empower psychologists to advocate for systemic change, promoting the integration of these resilience-building programs into both clinical practice and organizational culture to foster more sustainable and compassionate healthcare environments.

Tuesday, December 2, 2025

Constructing artificial neurons with functional parameters comprehensively matching biological values

Fu, S., Gao, H., et al. (2025).
Nature Communications, 16(1).

Abstract

The efficient signal processing in biosystems is largely attributed to the powerful constituent unit of a neuron, which encodes and decodes spatiotemporal information using spiking action potentials of ultralow amplitude and energy. Constructing devices that can emulate neuronal functions is thus considered a promising step toward advancing neuromorphic electronics and enhancing signal flow in bioelectronic interfaces. However, existent artificial neurons often have functional parameters that are distinctly mismatched with their biological counterparts, including signal amplitude and energy levels that are typically an order of magnitude larger. Here, we demonstrate artificial neurons that not only closely emulate biological neurons in functions but also match their parameters in key aspects such as signal amplitude, spiking energy, temporal features, and frequency response. Moreover, these artificial neurons can be modulated by extracellular chemical species in a manner consistent with neuromodulation in biological neurons. We further show that an artificial neuron can connect to a biological cell to process cellular signals in real-time and interpret cell states. These results advance the potential for constructing bio-emulated electronics to improve bioelectronic interface and neuromorphic integration.

Here are some thoughts:

This research marks a significant advancement in neuromorphic engineering by creating artificial neurons that closely emulate biological ones not just in function, but in their core physical parameters. Crucially for psychological science, these neurons can be chemically modulated, with their firing rate changing in response to neurotransmitters like dopamine, replicating key neuromodulatory dynamics. They also exhibit biologically realistic stochasticity and can interface with living cells in real-time, successfully interpreting cellular states. This breakthrough paves the way for more seamless and adaptive bioelectronic interfaces, offering potential for future prosthetics and neural models that more authentically replicate the neurochemical and dynamic complexity underlying behavior and cognition.

Monday, December 1, 2025

The use and misuse of informed consent in reporting sexual intimacy violations.

Behnke, S. H., Thomas, J. T., et al. (2023).
Professional Psychology:
Research and Practice, 54(2), 135–146.

Abstract

A client’s disclosure of sexual contact with a previous treating psychologist raises challenging ethical, legal, and clinical considerations. Following a vignette that describes a psychologist’s thoughtful anticipation of such a disclosure by amending his informed consent form to allow reporting of previous sexual contact with a psychotherapist, these articles explore how the American Psychological Association’s Ethics Code, jurisdictional laws, and clinical considerations contribute to a psychologist’s decision-making in such a circumstance. The articles discuss ways to integrate ethics, law, and clinical care in the psychologist’s response to the client’s disclosure.

Public Significance Statement—This article addresses psychologist-client sexual contact. This issue is significant to promote client autonomy, to protect the public, and to enhance the ethics and integrity of the profession.

Here are some thoughts:

This article offers a rich, multidimensional exploration of a complex ethical dilemma: how a current treating psychologist should respond when a client discloses sexual contact with a previous therapist. Rather than presenting a single authoritative stance, the article thoughtfully weaves together multiple, diverse perspectives—ethical, legal, clinical, feminist, and philosophical—demonstrating the nuanced reality of ethical decision-making in psychology.

Stephen Behnke grounds the discussion in the APA Ethics Code and jurisdictional law, introducing a pragmatic “three-door” framework (client consent, legal mandate, legal permission) to guide disclosure decisions. 

Janet Thomas builds on this by emphasizing the primacy of the therapeutic alliance and warning against well-intentioned but potentially coercive practices that prioritize professional or societal agendas over the client’s healing process.

Lenore Walker adds a critical feminist and trauma-informed lens, arguing that mandatory reporting—even if framed as protective—can retraumatize survivors by stripping them of autonomy, echoing broader concerns about institutional betrayal. 

Finally, David DeMatteo introduces a philosophical dimension, contrasting deontological (duty-based) and teleological (consequence-based) ethics to illustrate how competing moral frameworks can lead to divergent conclusions in the absence of clear legal mandates. Together, these perspectives underscore that ethical practice is not merely about rule-following but requires ongoing reflection, contextual awareness, and a deep commitment to client self-determination.

The article thus models integrative ethical reasoning—balancing professional responsibility with clinical sensitivity, legal compliance with human dignity, and societal protection with individual healing.