Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, June 30, 2025

Neural Processes Linking Interoception to Moral Preferences Aligned with Group Consensus

Kim, J., & Kim, H. (2025).
Journal of Neuroscience, e1114242025.

Abstract

Aligning one’s decisions with the prevailing norms and expectations of those around us constitutes a fundamental facet of moral decision-making. When faced with conflicting moral values, one adaptive approach is to rely on intuitive moral preference. While there has been theoretical speculation about the connection between moral preference and an individual’s awareness of interoceptive signals, this connection has not been empirically examined. This study examines the relationships between individuals’ preferences in moral dilemmas and interoception, measured with self-report, a heartbeat detection task, and resting-state fMRI. Two independent experiments demonstrate that both male and female participants’ interoceptive awareness and accuracy are associated with their moral preferences aligned with group consensus. In addition, the fractional occupancies of the brain states involving the ventromedial prefrontal cortex and the precuneus during rest mediate the link between interoceptive awareness and the degree of moral preferences aligned to group consensus. These findings provide empirical evidence of the neural mechanism underlying the link between interoception and moral preferences aligned with group consensus.

Significance statement

We investigate the intricate link between interoceptive ability to perceive internal bodily signals and decision-making when faced with moral dilemmas. Our findings reveal a significant correlation between the accuracy and awareness of interoceptive signals and the degree of moral preferences aligned with group consensus. Additionally, brain states involving the ventromedial prefrontal cortex and precuneus during rest mediate the link between interoceptive awareness and moral preferences aligned with group consensus. These findings provide empirical evidence that internal bodily signals play a critical role in shaping our moral intuitions according to others’ expectations across various social contexts.

Here are some thoughts:

A recent study suggests that our moral decisions may be influenced by our body's internal signals, particularly our heartbeat. Researchers found that individuals who could accurately perceive their own heartbeats tended to make moral choices aligning with the majority, regardless of whether those choices were utilitarian or deontological. This implies that bodily awareness might unconsciously guide us toward socially accepted norms. Resting-state brain imaging supported this: the time spent in brain states involving the ventromedial prefrontal cortex and the precuneus mediated the link between interoceptive awareness and consensus-aligned choices. While the study's participants were exclusively Korean college students, limiting generalizability, the findings open up intriguing possibilities about the interplay between bodily awareness and moral decision-making.

Sunday, June 29, 2025

Whistle-blowers – morally courageous actors in health care?

Wiisak, J., Suhonen, R., & Leino-Kilpi, H. (2022).
Nursing Ethics, 29(6), 1415–1429.

Abstract
Background

Moral courage means the courage to act according to an individual’s own ethical values and principles despite the risk of negative consequences. Research about the moral courage of whistle-blowers in health care is scarce, although whistleblowing involves a significant risk for the whistle-blower.

Objective
To analyse the moral courage of potential whistle-blowers and its association with their background variables in health care.

Research design
A descriptive-correlational study was conducted using a questionnaire containing the Nurses’ Moral Courage Scale©, a video vignette of a wrongdoing situation with an open question about the vignette, and several background variables. Data were analysed statistically, and inductive content analysis was used for the narratives.

Participants and research context
Nurses as healthcare professionals (including registered nurses, public health nurses, midwives, and nurse paramedics) were recruited from the membership register of the Nurses’ Association via email in 2019. A total of 454 nurses responded. The research context was simulated using a vignette.

Ethical considerations
Good scientific inquiry guidelines were followed. Permission to use the Nurses’ Moral Courage Scale© was obtained from the copyright holder. The ethical approval and permission to conduct the study were obtained from the participating university and the Nurses’ Association.

Findings
The mean value of potential whistle-blowers’ moral courage on a Visual Analogue Scale (0–10) was 8.55, and the mean score was 4.34 on a 5-point Likert scale. Potential whistle-blowers’ moral courage was associated with socio-demographic, education, work, personality, and social responsibility-related background variables.

Discussion and conclusion
In health care, potential whistle-blowers seem to be quite morally courageous actors. The results offer opportunities for developing interventions, practices and education to support and encourage healthcare professionals in their whistleblowing. Research is needed for developing a theoretical construction to eventually increase whistleblowing and decrease and prevent wrongdoing.

Here are some thoughts:

This study investigates the moral courage of healthcare professionals in whistleblowing scenarios. Utilizing a descriptive-correlational design, the researchers surveyed 454 nurses—including registered nurses, public health nurses, midwives, and nurse paramedics—using the Nurses' Moral Courage Scale, a video vignette depicting a wrongdoing situation, and open-ended questions. Findings revealed a high level of moral courage among participants, with an average score of 8.55 on a 0–10 Visual Analogue Scale and 4.34 on a 5-point Likert scale. The study identified associations between moral courage and various background factors such as socio-demographics, education, work experience, personality traits, and social responsibility. The authors suggest that these insights can inform the development of interventions and educational programs to support and encourage whistleblowing in healthcare settings, ultimately aiming to reduce and prevent unethical practices.

Saturday, June 28, 2025

An Update on Psychotherapy for the Treatment of PTSD

Rothbaum, B. O., & Watkins, L. E. (2025).
American Journal of Psychiatry, 182(5), 424–437.

Abstract

Posttraumatic stress disorder (PTSD) symptoms are part of the normal response to trauma. Most trauma survivors will recover over time without intervention, but a significant minority will develop chronic PTSD, which is unlikely to remit without intervention. Currently, only two medications, sertraline and paroxetine, are approved by the U.S. Food and Drug Administration to treat PTSD, and the combination of brexpiprazole and sertraline and MDMA-assisted therapy have FDA applications pending. These medications, and the combination of pharmacotherapy and psychotherapy, are not recommended as first-line treatments in any published PTSD treatment guidelines. The only interventions recommended as first-line treatments are trauma-focused psychotherapies; the U.S. Department of Veterans Affairs/Department of Defense PTSD treatment guideline recommends prolonged exposure (PE), cognitive processing therapy (CPT), and eye movement desensitization and reprocessing, and the American Psychological Association PTSD treatment guideline recommends PE, CPT, cognitive therapy, and trauma-focused cognitive-behavioral therapy. Although published clinical trials of psychedelic-assisted psychotherapy have not incorporated evidence-based PTSD psychotherapies, they have achieved greater response rates than other trials of combination treatment, and there is some enthusiasm about combining psychedelic medications with evidence-based psychotherapies. The state-of-the-art PTSD psychotherapies are briefly reviewed here, including their effects on clinical and neurobiological measures.

The article is paywalled, unfortunately.

Here is a summary and some thoughts.

In the evolving landscape of PTSD treatment, Rothbaum and Watkins reaffirm a crucial truth: trauma-focused psychotherapies remain the first-line, evidence-based interventions for posttraumatic stress disorder (PTSD), outperforming pharmacological approaches in both efficacy and durability.

The State of PTSD Treatment
While most individuals naturally recover from trauma, a significant minority develop chronic PTSD, which typically requires intervention. Current FDA-approved medications for PTSD—sertraline and paroxetine—offer only modest relief, and recent psychedelic-assisted therapy trials, though promising, have not yet integrated evidence-based psychotherapy approaches. As such, expert guidelines consistently recommend trauma-focused psychotherapies as first-line treatments.

Evidence-Based Therapies at the Core
The VA/DoD and APA guidelines converge on recommending prolonged exposure (PE) and cognitive processing therapy (CPT), with eye movement desensitization and reprocessing (EMDR), cognitive therapy, and trauma-focused CBT also strongly supported.

PE helps patients systematically confront trauma memories and triggers to promote extinction learning. Its efficacy is unmatched, with robust support from meta-analyses and neurobiological studies.

CPT targets maladaptive beliefs that develop after trauma, helping patients reframe distorted thoughts through cognitive restructuring.

EMDR, though somewhat controversial, remains a guideline-supported approach and continues to show effectiveness in trials.

Neurobiological Insights
Modern neuroscience supports these therapies: PTSD involves hyperactivation of fear and salience networks (e.g., amygdala) and underactivation of emotion regulation circuits (e.g., prefrontal cortex). Successful treatment—especially exposure-based therapy—enhances extinction learning and improves functional connectivity in these circuits. Moreover, cortisol patterns, genetic markers, and cardiovascular reactivity are emerging as potential predictors of treatment response.

Innovations and Expansions
Therapists are increasingly utilizing massed formats (e.g., daily sessions over 2 weeks), virtual reality exposure therapy, and early interventions in emergency settings. These models show high completion rates and comparable outcomes to traditional weekly formats.

One particularly innovative direction involves MDMA-assisted psychotherapy. Although still investigational, trials show higher remission rates when MDMA is paired with psychotherapy. The METEMP protocol (MDMA-enhanced PE) offers a translational model that integrates the strengths of both approaches.

Addressing Clinical Challenges
High dropout rates (27–50%) remain a concern, largely due to avoidance—a core PTSD symptom. Massed therapy formats have demonstrated improved retention. Additionally, comorbid conditions (e.g., depression, TBI, substance use) generally do not impede response to trauma-focused care and can be concurrently treated using integrated protocols like COPE (Concurrent Treatment of PTSD and Substance Use Disorders Using PE).

Toward Greater Access and Remission
Despite strong evidence, access to high-quality trauma-focused therapy remains limited outside military and VA systems. Telehealth, stepped care models, and broader dissemination of evidence-based practices are key to closing this gap.

Finally, Rothbaum and Watkins argue that remission—not just symptom reduction—must be the treatment goal. With renewed scientific rigor and integrative innovations like MDMA augmentation, the field is inching closer to more effective and enduring treatments.

Friday, June 27, 2025

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Kosmyna, N. K. et al. (2025).

Abstract

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1–3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing and analyzed essays using NLP, as well as scoring them with help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.


Here are some thoughts:

This research is important for psychologists because it provides empirical evidence on how using large language models (LLMs) like ChatGPT, traditional search engines, or relying solely on one’s own cognition affects cognitive engagement, neural connectivity, and perceived ownership during essay writing tasks. The study used EEG to measure brain activity and found that participants who wrote essays unaided (Brain-only group) exhibited the highest neural connectivity and cognitive engagement, while those using LLMs showed the weakest. Notably, repeated LLM use led to reduced memory recall, lower perceived ownership of written work, and diminished ability to quote from their own essays, suggesting a measurable cognitive cost and potential decrease in learning skills. The findings highlight that while LLMs can provide immediate benefits, their use may undermine deeper learning and engagement, which has significant implications for educational practices and the integration of AI tools in learning environments.

Thursday, June 26, 2025

A Modular Spiking Neural Network-Based Neuro-Robotic System for Exploring Embodied Intelligence

Chen, Z., Sun, T., et al. (2024).
2022 International Conference on Advanced Robotics and Mechatronics (ICARM), 1093–1098.

Abstract

Bio-inspired construction of modular biological neural networks (BNNs) is gaining attention due to their innate stable inter-modular signal transmission ability, which is thought to underlie the emergence of biological intelligence. However, the complicated, laborious fabrication of BNNs with structural and functional connectivity of interest in vitro limits the further exploration of embodied intelligence. In this work, we propose a modular spiking neural network (SNN)-based neuro-robotic system by concurrently running SNN modeling and robot simulation. We show that the modeled mSNNs present complex calcium dynamics resembling mBNNs. In particular, spontaneous periodic network-wide bursts were observed in the mSNN, which could be further suppressed partially or completely with global chemical modulation. Moreover, we demonstrate that after complete suppression, intermodular signal transmission can still be evoked reliably via local stimulation. Therefore, the modeled mSNNs could either achieve reliable trans-modular signal transmission or add adjustable false-positive noise signals (spontaneous bursts). By interconnecting the modeled mSNNs with the simulated mobile robot, active obstacle avoidance and target tracking can be achieved. We further show that spontaneous noise impairs robot performance, which indicates the importance of suppressing spontaneous burst activities of modular networks for the reliable execution of robot tasks. The proposed neuro-robotic system embodies spiking neural networks with a mobile robot to interact with the external world, which paves the way for exploring the emergence of more complex biological intelligence.

Here are some thoughts:

This paper is pretty wild. The researchers set out to create an AI that simulates human brain activity, embodied in a simulated mobile robot. The model reproduces calcium spiking dynamics, and its modules communicate with one another; quieting spontaneous spiking made the simulated robotic system perform more reliably.

Cognitive neuroscience seeks to uncover how neural activity gives rise to perception, decision-making, and behavior, often by studying the dynamics of brain networks. This research contributes significantly to that goal by modeling modular spiking neural networks (mSNNs) that replicate key features of biological neural networks, including spontaneous network bursts and inter-modular communication. These modeled networks demonstrate how structured neural activity can support reliable signal transmission, a fundamental aspect of cognitive processing. Importantly, they also allow for controlled manipulation of network states—such as through global chemical modulation—which provides a way to study how noise or spontaneous activity affects information processing.
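To make the modulation idea concrete, here is a toy sketch. This is my own illustration, not the authors' mSNN (which models calcium dynamics across interconnected modules): a single Euler-integrated leaky integrate-and-fire neuron in which a hypothetical inhibitory conductance g_inh stands in for global chemical modulation. Raising it shunts the drive and silences otherwise sustained firing, a single-neuron analogue of the paper's partial-to-complete burst suppression.

```python
def lif_spike_count(i_drive, g_inh=0.0, t_ms=500.0, dt=0.1):
    """Euler-integrated leaky integrate-and-fire neuron.

    g_inh is a hypothetical stand-in for global chemical modulation:
    an inhibitory conductance (reversal at -70 mV) that shunts the drive.
    """
    tau, v_rest, v_th, v_reset, e_inh = 20.0, -65.0, -50.0, -65.0, -70.0
    v, spikes = v_rest, 0
    for _ in range(int(t_ms / dt)):
        i_eff = i_drive - g_inh * (v - e_inh)    # shunted input current
        v += dt * (-(v - v_rest) + i_eff) / tau  # membrane update
        if v >= v_th:                            # threshold crossing
            spikes += 1
            v = v_reset
    return spikes

print(lif_spike_count(20.0))              # sustained firing
print(lif_spike_count(20.0, g_inh=1.0))   # fully suppressed: 0
```

With i_drive = 20 the membrane equilibrium sits above the -50 mV threshold, so the neuron fires repeatedly; with g_inh = 1.0 the shunted equilibrium (-57.5 mV) sits below threshold, so firing stops entirely.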

From an ethical standpoint, this research presents a valuable alternative to invasive or in vitro biological experiments. Traditional studies involving living neural tissue raise ethical concerns regarding animal use and the potential for suffering. By offering a synthetic yet biologically plausible model, this work reduces reliance on such methods while still enabling detailed exploration of neural dynamics. Furthermore, it opens new avenues for non-invasive experimentation in cognitive and clinical domains, aligning with ethical principles that emphasize minimizing harm and maximizing scientific benefit.

Wednesday, June 25, 2025

Neuron–astrocyte associative memory

Kozachkov, L., Slotine, J., & Krotov, D. (2025).
Proceedings of the National Academy of Sciences, 122(21).

Abstract

Astrocytes, the most abundant type of glial cell, play a fundamental role in memory. Despite most hippocampal synapses being contacted by an astrocyte, there are no current theories that explain how neurons, synapses, and astrocytes might collectively contribute to memory function. We demonstrate that fundamental aspects of astrocyte morphology and physiology naturally lead to a dynamic, high-capacity associative memory system. The neuron–astrocyte networks generated by our framework are closely related to popular machine learning architectures known as Dense Associative Memories. Adjusting the connectivity pattern, the model developed here leads to a family of associative memory networks that includes a Dense Associative Memory and a Transformer as two limiting cases. In the known biological implementations of Dense Associative Memories, the ratio of stored memories to the number of neurons remains constant, despite the growth of the network size. Our work demonstrates that neuron–astrocyte networks follow a superior memory scaling law, outperforming known biological implementations of Dense Associative Memory. Our model suggests an exciting and previously unnoticed possibility that memories could be stored, at least in part, within the network of astrocyte processes rather than solely in the synaptic weights between neurons.

Significance

Recent experiments have challenged the belief that glial cells, which compose at least half of brain cells, are just passive support structures. Despite this, a clear understanding of how neurons and glia work together for brain function is missing. To close this gap, we present a theory of neuron–astrocyte networks for memory processing, using the Dense Associative Memory framework. Our findings suggest that astrocytes can serve as natural units for implementing this network in biological “hardware.” Astrocytes enhance the memory capacity of the network. This boost originates from storing memories in the network of astrocytic processes, not just in synapses, as commonly believed. These process-to-process communications likely occur in the brain and could help explain its impressive memory processing capabilities.
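For readers unfamiliar with the Dense Associative Memory framework the authors build on, a minimal sketch may help. This is my own illustration, not the paper's neuron–astrocyte model; the rectified-cubic separation function and the network sizes below are arbitrary demo choices. Patterns are stored implicitly in an energy function (minus the sum, over stored patterns, of F applied to the pattern-state overlap), and recall is greedy energy descent from a corrupted cue:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(patterns, state, n=3):
    # E(state) = -sum_mu F(pattern_mu . state), with F(x) = max(x, 0)**n
    return -np.sum(np.maximum(patterns @ state, 0) ** n)

def recall(patterns, cue, n=3, sweeps=5):
    # Greedy asynchronous descent: set each +/-1 unit to whichever
    # sign gives the lower energy, repeated for a few sweeps.
    state = cue.copy()
    for _ in range(sweeps):
        for i in range(state.size):
            for s in (1, -1):
                trial = state.copy()
                trial[i] = s
                if energy(patterns, trial, n) < energy(patterns, state, n):
                    state = trial
    return state

# Store 4 random +/-1 patterns over 24 units; recall from a corrupted cue.
patterns = rng.choice([-1, 1], size=(4, 24))
cue = patterns[0].copy()
cue[:5] *= -1                       # corrupt 5 of 24 bits
out = recall(patterns, cue)
print((out == patterns[0]).mean())  # fraction of bits recovered
```

Because F is steep, the stored pattern most aligned with the cue dominates the energy and pulls the state toward it; steeper F gives Dense Associative Memories their super-linear capacity, the scaling the paper improves further by storing memories in astrocytic processes.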

Here are some thoughts:

This research represents a paradigm shift in our understanding of memory formation and storage. The paper argues that astrocytes, the most abundant type of glial cell, play a fundamental role in memory, noting that most hippocampal synapses are contacted by an astrocyte.

For psychologists, this is revolutionary because it challenges the traditional neuron-centric view of memory. Previously, memory research focused almost exclusively on neuronal connections and synaptic plasticity. This study demonstrates that astrocytes, previously thought to be merely supportive cells, are active participants in memory processes. This has profound implications for:

Cognitive Psychology: It suggests memory formation involves a more complex cellular network than previously understood, potentially explaining individual differences in memory capacity and the mechanisms behind memory consolidation.

Learning Theory: The findings may require updating models of how associative learning occurs at the cellular level, moving beyond simple neuronal networks to include glial participation.

Memory Disorders: Understanding astrocyte involvement opens new avenues for researching conditions like Alzheimer's disease, where both neuronal and glial dysfunction occur.

Significance for Psychopharmacology

This research has transformative implications for drug development and treatment approaches:

Novel Drug Targets: If astrocytes are crucial for memory, pharmaceutical interventions could target astrocytic functions rather than focusing solely on neuronal receptors. This could lead to entirely new classes of cognitive enhancers or treatments for memory disorders.

Mechanism of Action: Many psychoactive drugs may work partially through astrocytic pathways that weren't previously recognized. This could explain why some medications have effects that aren't fully accounted for by their known neuronal targets.

Treatment Resistance: Some patients who don't respond to traditional neurotropic medications might benefit from drugs that target the astrocyte-neuron memory system.

Precision Medicine: Understanding the dual neuron-astrocyte system could help explain why individuals respond differently to the same medications, leading to more personalized treatment approaches.

This research fundamentally expands our understanding of the biological basis of memory beyond neurons to include the brain's most abundant cell type, potentially revolutionizing both theoretical frameworks in psychology and therapeutic approaches in psychopharmacology.

Tuesday, June 24, 2025

Why Do More Police Officers Die by Suicide Than in the Line of Duty?

Jaime Thompson
The New York Times
Originally published 8 May 25

Here is an excerpt:

American policing has paid much attention to the dangers faced in the line of duty, from shootouts to ambushes, but it has long neglected a greater threat to officers: themselves. More cops kill themselves every year than are killed by suspects. At least 184 public-safety officers die by suicide each year, according to First H.E.L.P., a nonprofit that has been collecting data on police suicide since 2016. An average of about 57 officers are killed by suspects every year, according to statistics from the Federal Bureau of Investigation. After analyzing data on death certificates, Dr. John Violanti, a research professor at the University at Buffalo, concluded that law-enforcement officers are 54 percent more likely to die by suicide than the average American worker. A lack of good data, however, has thwarted researchers, who have struggled to reach consensus on the problem’s scope. Recognizing the problem, Congress passed a law in 2020 requiring the F.B.I. to collect data on police suicide, but reporting remains voluntary.

“Suicide is something you just didn’t talk about in law enforcement,” says Chuck Wexler, the executive director of the Police Executive Research Forum (PERF). “It was shameful. It was weakness.” But a growing body of research has shown how chronic exposure to stress and trauma can impact the brain, causing impaired thinking, poor decision-making, a lack of empathy and difficulty distinguishing between real and perceived threats. Those were the very defects on display in the high-profile videos of police misconduct that looped across the country leading up to the killing of George Floyd by an officer in 2020. National outrage and widespread protests against the police were experienced as further stress by a force that already was, by many metrics, mentally and physically unwell. PERF now calls police suicide the “No. 1 officer-safety issue.”


Here are some thoughts:

Police officers unfortunately face a significantly elevated risk of suicide compared to the general population, a grim reality that tragically surpasses even the dangers they encounter in the line of duty. This heightened risk is often attributed to the cumulative impact of repeated exposure to traumatic events, which can lead to the development of mental health challenges such as post-traumatic stress disorder (PTSD), depression, and anxiety. Sadly, some officers may turn to substance abuse as a way to cope with these intense emotional burdens, which can further compound their difficulties. Research indicates that the rates of depression among law enforcement officers are nearly twice that of the general public, highlighting the profound psychological toll of their profession. Compounding this issue is the cultural environment within law enforcement, which can often discourage officers from seeking help for mental health concerns due to the prevailing stigma and fears of being perceived as weak or unfit for their duties. Consequently, there is a pressing need for the development and implementation of readily accessible and confidential mental health resources specifically designed to meet the unique needs of the law enforcement community. These resources should include peer support programs and trauma-informed care approaches to foster a culture of well-being and encourage officers to seek the support they deserve.

Monday, June 23, 2025

Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation

Tierney, A. A., et al. (2024).
NEJM Catalyst, 5(3).

Abstract

Clinical documentation in the electronic health record (EHR) has become increasingly burdensome for physicians and is a major driver of clinician burnout and dissatisfaction. Time dedicated to clerical activities and data entry during patient encounters also negatively affects the patient–physician relationship by hampering effective and empathetic communication and care. Ambient artificial intelligence (AI) scribes, which use machine learning applied to conversations to facilitate scribe-like capabilities in real time, have great potential to reduce documentation burden, enhance physician–patient encounters, and augment clinicians’ capabilities. The technology leverages a smartphone microphone to transcribe encounters as they occur but does not retain audio recordings. To address the urgent and growing burden of data entry, in October 2023, The Permanente Medical Group (TPMG) enabled ambient AI technology for 10,000 physicians and staff to augment their clinical capabilities across diverse settings and specialties. The implementation process leveraged TPMG’s extensive experience in large-scale technology instantiation and integration, incorporating multiple training formats, at-the-elbow peer support, patient-facing materials, rapid-cycle upgrades with the technology vendor, and ongoing monitoring. In 10 weeks since implementation, the ambient AI tool has been used by 3,442 TPMG physicians to assist in as many as 303,266 patient encounters across a wide array of medical specialties and locations. In total, 968 physicians have enabled ambient AI scribes in ≥100 patient encounters, with one physician having enabled it to assist in 1,210 encounters. The response from physicians who have used the ambient AI scribe service has been favorable; they cite the technology’s capability to facilitate more personal, meaningful, and effective patient interactions and to reduce the burden of after-hours clerical work.
In addition, early assessments of patient feedback have been positive, with some describing improved interaction with their physicians. Early evaluation metrics, based on an existing tool that evaluates the quality of human-generated scribe notes, find that ambient AI use produces high-quality clinical documentation for physicians’ editing. Further statistical analyses after AI scribe implementation also find that usage is linked with reduced time spent in documentation and in the EHR. Ongoing enhancements of the technology are needed and are focused on direct EHR integration, improved capabilities for incorporating medical interpretation, and enhanced workflow personalization options for individual users. Despite this technology’s early promise, careful and ongoing attention must be paid to ensure that the technology supports clinicians while also optimizing ambient AI scribe output for accuracy, relevance, and alignment in the physician–patient relationship.

Key Takeaways

• Ambient artificial intelligence (AI) scribes show early promise in reducing clinicians’ burden, with a regional pilot noting a reduction in the amount of time spent constructing notes among users.

• Ambient AI scribes were found to be acceptable among clinicians and patients, largely improving the experience of both parties, with some physicians noting the transformational nature of the technology on their care.

• Although a review of 35 AI-generated transcripts resulted in an average score of 48 of 50 in 10 key domains, AI scribes are not a replacement for clinicians. They can produce inconsistencies that require physicians’ review and editing to ensure that they remain aligned with the physician–patient relationship.

• Given the incredible pace of change, building a dynamic evaluation framework is essential to assess the performance of AI scribes across domains including engagement, effectiveness, quality, and safety.

Sunday, June 22, 2025

This article won’t change your mind. Here’s why

Lubrano, S. S. (2025, May 18).
The Guardian.

Here is an excerpt:

There are lots of reasons why debate (and indeed, information-giving and argumentation in general) tends to be ineffective at changing people’s political beliefs. Cognitive dissonance, a phenomenon I studied as part of my PhD research, is one. This is the often unconscious psychological discomfort we feel when faced with contradictions in our own beliefs or actions, and it has been well documented. We can see cognitive dissonance and its effects at work when people rapidly “reason” in ways that are really attempts to mitigate their discomfort with new information about strongly held beliefs. For example, before Trump was convicted of various charges in 2024, only 17% of Republican voters believed felons should be able to be president; directly after his conviction, that number rose to 58%. To reconcile two contradictory beliefs (that presidents shouldn’t do x, and that Trump should be president), an enormous number of Republican voters simply changed their mind about the former. In fact, Republican voters shifted their views on more or less all the things Trump had been convicted of: fewer felt it was immoral to have sex with a porn star, pay someone to stay silent about an affair, or falsify a business record. Nor is this effect limited to Trump voters: research suggests we all rationalise in this way, in order to hold on to the beliefs that let us keep operating as we have been. Or, ironically, to change some of our beliefs in response to new information, but often only in order to not have to sacrifice other strongly held beliefs.

But it’s not just psychological phenomena like cognitive dissonance that make debates and arguments relatively ineffective. As I lay out in my book, probably the most important reason words don’t change minds is that two other factors carry far more influence: our social relationships; and our own actions and experiences.

Here are some thoughts:

The article discusses how people often resist changing their minds, even when presented with strong evidence, due to the psychological and social costs involved. It explains that beliefs are deeply tied to personal identity and social relationships, making individuals reluctant to alter them to avoid feelings of inconsistency or social rejection. The psychological mechanism at play is cognitive dissonance, where holding contradictory beliefs causes discomfort, leading people to reject new information that conflicts with their existing views. Additionally, motivated reasoning drives individuals to interpret evidence in a way that aligns with their preexisting beliefs to maintain emotional and social harmony. The article suggests that fostering open, non-confrontational discussions and emphasizing shared values can help reduce resistance to changing one’s mind, as it lessens the perceived threat to identity and social bonds.

Persuading people is a lot like psychotherapy because both require creating a safe, non-judgmental space where individuals can explore conflicting beliefs without feeling defensive, allowing change to emerge from within rather than through forceful confrontation.

Saturday, June 21, 2025

A Framework for Language Technologies in Behavioral Research and Clinical Applications: Ethical Challenges, Implications, and Solutions

Diaz-Asper, C., Hauglid, M. K., et al. (2024).
American Psychologist, 79(1), 79–91.

Abstract

Technological advances in the assessment and understanding of speech and language within the domains of automatic speech recognition, natural language processing, and machine learning present a remarkable opportunity for psychologists to learn more about human thought and communication, evaluate a variety of clinical conditions, and predict cognitive and psychological states. These innovations can be leveraged to automate traditionally time-intensive assessment tasks (e.g., educational assessment), provide psychological information and care (e.g., chatbots), and when delivered remotely (e.g., by mobile phone or wearable sensors) promise underserved communities greater access to health care. Indeed, the automatic analysis of speech provides a wealth of information that can be used for patient care in a wide range of settings (e.g., mHealth applications) and for diverse purposes (e.g., behavioral and clinical research, medical tools that are implemented into practice) and patient types (e.g., numerous psychological disorders and in psychiatry and neurology). However, automation of speech analysis is a complex task that requires the integration of several different technologies within a large distributed process with numerous stakeholders. Many organizations have raised awareness about the need for robust systems for ensuring transparency, oversight, and regulation of technologies utilizing artificial intelligence. Since there is limited knowledge about the ethical and legal implications of these applications in psychological science, we provide a balanced view of both the optimism that is widely published on and also the challenges and risks of use, including discrimination and exacerbation of structural inequalities.

Public Significance Statement

Computational advances in the domains of automatic speech recognition, natural language processing, and machine learning allow for the rapid and accurate assessment of a person’s speech for numerous purposes. The widespread adoption of these technologies permits psychologists an opportunity to learn more about psychological function, interact in new ways with research participants and patients, and aid in the diagnosis and management of various cognitive and mental health conditions. However, we argue that the current scope of the APA’s Ethical Principles of Psychologists and Code of Conduct is insufficient to address the ethical issues surrounding the application of artificial intelligence. Such a gap in guidance results in the onus falling directly on psychologists to educate themselves about the ethical and legal implications of these emerging technologies, potentially exacerbating the risks of their use in both research and practice.

Friday, June 20, 2025

Artificial intelligence and free will: generative agents utilizing large language models have functional free will

Martela, F. (2025).
AI And Ethics.

Abstract

Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback. Do such generative LLM agents possess free will? Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions. Building on Dennett’s intentional stance and List’s theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior. Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.

Here are some thoughts:

This article explores whether advanced AI systems, particularly generative agents using large language models (LLMs), possess free will. The author argues that while these AI agents may not have “physical free will,” meaning the ability to alter physical causal chains, they do exhibit “functional free will”. Functional free will is defined as the capacity to display intentional agency, recognize genuine alternatives, and control actions based on internal intentions. The article uses examples like Voyager, an AI agent in Minecraft, and Spitenik, a hypothetical autonomous drone, to illustrate how these systems meet the criteria for functional free will.

This research is important for psychologists because it challenges traditional views on free will, which often center on human consciousness and metaphysical considerations. It compels psychologists to reconsider how we attribute agency and decision-making to various entities, including AI, and how this attribution shapes our understanding of behavior.

Thursday, June 19, 2025

Large Language Model (LLM) Algorithms in Reshaping Decision-Making and Cognitive Biases in the AI-Leading World: An Experimental Study.

Khatoon, H., Khan, M. L., & Irshad, A. (2025, January 22).
PsyArXiv.

Abstract

The rise of artificial intelligence (AI) has accelerated decision-making since AI algorithmic recommendation may help reduce human limitations while increasing decision accuracy and efficiency. Large language model (LLM) algorithms are designed to enhance human decision-making competencies and remove possible cognitive biases. However, these algorithms can be biased and lead to poor decision-making. Building on previously existing LLM algorithms (i.e., ChatGPT and Perplexity.ai), this study examines whether users who get AI assistance during task-based decision-making have greater decision-making abilities than their peers who employ their own cognitive processes to make decisions. By using domain-independent LLMs, incentives, and scenario-based task decisions, we find that the advice suggested by these AIs in decisive situations was biased and wrong, and that this resulted in poor decision outcomes. It has been observed that using public-access LLMs in crucial situations might result in both ineffective outcomes for the advisee and inadvertent consequences for third parties. Findings highlight the need for an ethical AI algorithm and the ability to accurately assess trust in order to effectively deploy these systems. This raises concerns regarding the use of AI in decision making with careful assistance.

Here are some thoughts:

This research is important to psychologists because it examines how collaboration with large language models (LLMs) like ChatGPT affects human decision-making, particularly in relation to cognitive biases. By using a modified Adult Decision-Making Competence battery, the study offers empirical data on whether AI assistance improves or impairs judgment. It highlights the psychological dynamics of trust in AI, the risk of overreliance, and the ethical implications of using AI in decisions that impact others. These findings are especially relevant for psychologists interested in cognitive bias, human-technology interaction, and the integration of AI into clinical, organizational, and educational settings.

Wednesday, June 18, 2025

The Role of Emotion Dysregulation in Understanding Suicide Risk: A Systematic Review of the Literature

Rogante, E., et al. (2024).
Healthcare, 12(2), 169.

Abstract

Suicide prevention represents a global imperative, and efforts to identify potential risk factors are intensifying. Among these, emotional regulation abilities represent a transdiagnostic component that may have an impactful influence on suicidal ideation and behavior. Therefore, the present systematic review aimed to investigate the association between emotion dysregulation and suicidal ideation and/or behavior in adult participants. The review followed PRISMA guidelines, and the research was performed through four major electronic databases (PubMed/MEDLINE, Scopus, PsycInfo, and Web of Science) for relevant titles/abstracts published from January 2013 to September 2023. The review included original studies published in peer-reviewed journals and in English that assessed the relationship between emotional regulation, as measured by the Difficulties in Emotional Regulation Scale (DERS), and suicidal ideation and/or behavior. In total, 44 studies were considered eligible, and the results mostly revealed significant positive associations between emotion dysregulation and suicidal ideation, while the findings on suicide attempts were more inconsistent. Furthermore, the findings also confirmed the role of emotion dysregulation as a mediator between suicide and other variables. Given these results, it is important to continue investigating these constructs and conduct accurate assessments to implement effective person-centered interventions.

Here are some thoughts. I used this research in a recent article.

This systematic review explores the role of emotion dysregulation in understanding suicide risk among adults, analyzing 44 studies that assess the association between emotional regulation difficulties—measured primarily by the Difficulties in Emotion Regulation Scale (DERS)—and suicidal ideation and behavior. The findings largely support a significant positive correlation between emotion dysregulation and suicidal ideation across both clinical and nonclinical populations. Specific dimensions of emotion dysregulation, such as impulsivity, lack of emotional clarity, and ineffective use of regulatory strategies, were particularly linked to increased suicidal thoughts. However, results regarding suicide attempts were more inconsistent, with some studies showing a strong link while others found no significant associations.

The review also highlights the mediating role of emotion dysregulation between various risk factors (e.g., childhood trauma, psychopathy, depression) and suicidal outcomes. Emotion dysregulation appears to amplify suicide risk by influencing how individuals cope with psychological pain and stress. Despite methodological limitations—including reliance on self-report measures, sample heterogeneity, and limited longitudinal data—the evidence suggests that improving emotional regulation could be a valuable target for suicide prevention strategies. The authors recommend further research using robust statistical methods and comprehensive assessments to better understand causal pathways and enhance intervention effectiveness.

Tuesday, June 17, 2025

Ethical implication of artificial intelligence (AI) adoption in financial decision making.

Owolabi, O. S., Uche, P. C., et al. (2024).
Computer and Information Science, 17(1), 49.

Abstract

The integration of artificial intelligence (AI) into the financial sector has raised ethical concerns that need to be addressed. This paper analyzes the ethical implications of using AI in financial decision-making and emphasizes the importance of an ethical framework to ensure its fair and trustworthy deployment. The study explores various ethical considerations, including the need to address algorithmic bias, promote transparency and explainability in AI systems, and adhere to regulations that protect equity, accountability, and public trust. By synthesizing research and empirical evidence, the paper highlights the complex relationship between AI innovation and ethical integrity in finance. To tackle this issue, the paper proposes a comprehensive and actionable ethical framework that advocates for clear guidelines, governance structures, regular audits, and collaboration among stakeholders. This framework aims to maximize the potential of AI while minimizing negative impacts and unintended consequences. The study serves as a valuable resource for policymakers, industry professionals, researchers, and other stakeholders, facilitating informed discussions, evidence-based decision-making, and the development of best practices for responsible AI integration in the financial sector. The ultimate goal is to ensure fairness, transparency, and accountability while reaping the benefits of AI for both the financial sector and society.

Here are some thoughts:

This paper explores the ethical implications of using artificial intelligence (AI) in financial decision-making. It emphasizes the necessity of an ethical framework to ensure AI is used fairly and responsibly. The study examines ethical concerns like algorithmic bias, the need for transparency and explainability in AI systems, and the importance of regulations that protect equity, accountability, and public trust. The paper also proposes a comprehensive ethical framework with guidelines, governance structures, regular audits, and stakeholder collaboration to maximize AI's potential while minimizing negative impacts.

These themes are similar to concerns in using AI in the practice of psychology. Also, psychologists may need to be aware of these issues for their own financial and wealth management.

Monday, June 16, 2025

The impact of AI errors in a human-in-the-loop process

Agudo, U., Liberal, K. G., et al. (2024).
Cognitive Research: Principles and Implications, 9(1).

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time in which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework (https://osf.io/b6p4z/). Experiment 2 was preregistered.

Here are some thoughts:


This study explores the impact of AI errors in human-in-the-loop processes, where humans and AI systems collaborate in decision-making. The research specifically investigates how the timing of AI support influences human judgment and decision accuracy. The findings indicate that human judgment is negatively affected by incorrect algorithmic support, particularly when provided before the human's own judgment, leading to decreased accuracy. This research highlights the complexities of human-computer interaction in automated decision-making contexts and emphasizes the need for a deeper understanding of how AI support systems can be effectively integrated to minimize errors and biases.

This is important for psychologists because it sheds light on the cognitive biases and decision-making processes involved when humans interact with AI systems, which is an increasingly relevant area of study in the field. Understanding these interactions can help psychologists develop interventions and strategies to mitigate negative impacts, such as automation bias, and improve the design of human-computer interfaces to optimize decision-making accuracy and reduce errors in various sectors, including public service, healthcare, and justice.

Sunday, June 15, 2025

Relationship between Personal Ethics and Burnout: The Unexpected Influence of Affective Commitment

Santiago-Torner, C., et al. (2024).
Administrative Sciences, 14(6), 123.

Abstract

Objective: Ethical climates and their influence on emotional health have been the subject of intense debates. However, Personal Ethics as a potential resource that can mitigate Burnout syndrome has gone unnoticed. Therefore, the main objective of this study is to examine the effect of Personal Ethics on the three dimensions that constitute Burnout, considering the moderating influence of Affective Commitment. 

Design/methodology: A model consisting of three simple moderations is used to solve this question. The sample includes 448 professionals from the Colombian electricity sector with university-qualified education. 

Findings: Personal Ethics mitigates Emotional Exhaustion and Depersonalization, but it is not related to Personal Realization. Affective Commitment, unexpectedly, has an inverse moderating effect. In other words, as this type of commitment intensifies, the positive impact of Personal Ethics on Burnout and Depersonalization decreases until it disappears. Furthermore, Affective Commitment does not influence the dynamic between Personal Ethics and self-realization. 

Research limitations/implications: A longitudinal study would strengthen the causal relationships established in this research. Practical implications: Alignment of values between the individual and the organization is crucial. In fact, integration between the organization and its personnel through organic, open and connected structures increases psychological well-being through values linked to benevolence and understanding. 

Social implications: Employees’ emotional health is transcendental beyond the organizational level, as it has a significant impact on personal and family interactions beyond the workplace.

Originality/value: The potential adverse repercussion of Affective Commitment has been barely examined. Additionally, Personal Ethics, when intensified by high Affective Commitment, can lead to extra-role behaviors that transform what is voluntary into a moral imperative. This situation could generate emotional fractures and a decrease in achievement. This perspective, compared to previous research, introduces an innovative element.

Here are some thoughts:

This study investigates the relationship between personal ethics and burnout, highlighting the unexpected moderating influence of affective commitment. While ethical climates have been extensively studied for their impact on emotional well-being, this research focuses on personal ethics as a potential resource for mitigating burnout across its three dimensions. The findings reveal that personal ethics mitigates emotional exhaustion and depersonalization, but that affective commitment unexpectedly weakens this protective effect: as commitment intensifies, the benefit of personal ethics shrinks until it disappears. This research contributes to the understanding of burnout by identifying personal ethics and affective commitment as significant, and sometimes competing, factors in employee well-being.
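The "simple moderation" design described in the abstract can be illustrated as a regression with an ethics × commitment interaction term. The sketch below is a simulation, not the authors' data: the variable names, effect sizes, and error structure are illustrative assumptions (only the n = 448 sample size comes from the study), and a positive interaction coefficient is used to encode the reported weakening of the protective effect.

```python
# Illustrative sketch of one "simple moderation" from the study:
# does Affective Commitment weaken the link between Personal Ethics
# and Emotional Exhaustion? Simulated data, not the authors' sample.
import numpy as np

rng = np.random.default_rng(0)
n = 448                                  # sample size reported in the study
ethics = rng.normal(size=n)              # Personal Ethics (standardized)
commitment = rng.normal(size=n)          # Affective Commitment (standardized)

# Assumed effects: ethics lowers exhaustion (-0.4), but a positive
# interaction (+0.3) makes that benefit fade as commitment rises.
exhaustion = -0.4 * ethics + 0.3 * ethics * commitment + rng.normal(size=n)

# Moderated regression: exhaustion ~ ethics + commitment + ethics*commitment
X = np.column_stack([np.ones(n), ethics, commitment, ethics * commitment])
beta, *_ = np.linalg.lstsq(X, exhaustion, rcond=None)
b_ethics, b_commitment, b_interaction = beta[1:]

# The simple slope of ethics at a given commitment level is
# b_ethics + b_interaction * commitment: negative (protective) at low
# commitment, shrinking toward zero as commitment increases.
print(b_ethics < 0, b_interaction > 0)
```

With these assumed coefficients, the simple slope of ethics reaches zero around 1.3 SD above mean commitment, mirroring the abstract's claim that the positive impact of personal ethics "decreases until it disappears" at high affective commitment.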

Saturday, June 14, 2025

Ethical decision-making models: a taxonomy of models and review of issues

Johnson, M. K., Weeks, S. N., et al. (2021).
Ethics & Behavior, 32(3), 195–209.

Abstract

A discussion of ethical decision-making literature is overdue. In this article, we summarize the current literature of ethical decision-making models used in mental health professions. Of 1,520 articles published between 2001 and 2020 that met initial search criteria, 38 articles were included. We report on the status of empirical evidence for the use of these models along with comparisons, limitations, and considerations. Ethical decision-making models were synthesized into eight core procedural components and presented based on the composition of steps present in each model. This taxonomy provides practitioners, trainers, students, and supervisors relevant information regarding ethical decision-making models.


Here are some thoughts:

This article reviews ethical decision-making models used in mental health professions and introduces a taxonomy of these models, defined by eight core procedural components. The study analyzed 38 articles published between 2001 and 2020 to identify these components. The eight core components are:   
  1. Framing the Dilemma: This involves identifying and describing the ethical dilemma.
  2. Considering Codes: This includes reviewing relevant ethical codes and legal standards.
  3. Consultation: Seeking advice from supervisors, colleagues, or ethics experts.
  4. Identifying Stakeholders: Recognizing all individuals and parties affected by the decision.
  5. Generating Alternatives: Developing various potential courses of action.
  6. Assessing Consequences: Evaluating the potential outcomes of each alternative.
  7. Making a Decision: Choosing the best course of action.
  8. Evaluating the Outcome: Reflecting on the decision-making process and its results.    
The paper discusses the empirical evidence for the use of these models, their limitations, and other important considerations for practitioners, trainers, students, and supervisors. 

Friday, June 13, 2025

AI Anxiety: a comprehensive analysis of psychological factors and interventions

Kim, J. J. H., Soh, J., et al. (2025).
AI And Ethics.

Abstract

The rapid advancement of artificial intelligence (AI) has raised significant concerns regarding its impact on human psychology, leading to a phenomenon termed AI Anxiety—feelings of apprehension or fear stemming from the accelerated development of AI technologies. Although AI Anxiety is a critical concern, the current literature lacks a comprehensive analysis addressing this issue. This paper aims to fill that gap by thoroughly examining the psychological factors underlying AI Anxiety and proposing effective solutions to tackle the problem. We begin by comparing AI Anxiety with Automation Anxiety, highlighting the distinct psychological impacts associated with AI-specific advancements. We delve into the primary contributor to AI Anxiety—the fear of replacement by AI—and explore secondary causes such as uncontrolled AI growth, privacy concerns, AI-generated misinformation, and AI biases. To address these challenges, we propose multidisciplinary solutions, offering insights into educational, technological, regulatory, and ethical guidelines. Understanding the root causes of AI Anxiety and implementing strategic interventions are critical steps for mitigating its rise as society enters the era of pervasive AI.


Here are some thoughts:

The rapid advancement of artificial intelligence (AI) has led to a growing concern termed "AI Anxiety," which is the apprehension or fear individuals experience due to the fast-paced development of AI technologies. This anxiety is multifaceted, encompassing fears about job security, privacy infringements, the loss of control over AI systems, and the potential for AI to generate misinformation and exhibit biases. While AI Anxiety shares similarities with Automation Anxiety, which arose during the Industrial Revolution with the introduction of machinery, it presents unique challenges. Unlike Automation Anxiety, which was primarily focused on the replacement of manual labor, AI Anxiety extends to the replacement of cognitive and creative skills across various sectors, including healthcare, finance, and education. The pervasive nature of AI, its integration into personal lives, and the ethical dilemmas it raises contribute to a deeper and more complex form of anxiety.

Thursday, June 12, 2025

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Shojaee, P., et al. (2025).
Apple.

Abstract

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. 
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.

The paper can be located here.

Here are some thoughts:

This paper is important to psychologists because it explores how Large Reasoning Models (LRMs) generate reasoning processes that appear human-like but may lack true understanding—an illusion that mirrors aspects of human cognition. By analyzing LRMs’ step-by-step reasoning traces, the study reveals striking parallels to human reasoning heuristics, biases, and limitations, such as inconsistent logic, computational failures under complexity, and a collapse in effort beyond a certain threshold. These findings offer psychologists a novel framework to compare AI and human reasoning, particularly in domains like problem-solving, metacognition, and cognitive overload. Additionally, the paper raises urgent questions about human-AI interaction: if people overtrust AI-generated reasoning (despite its flaws), this could influence reliance on AI in therapeutic, educational, or decision-making contexts. The study’s methods—using controlled puzzles to dissect reasoning—also provide psychologists with tools to test human cognition with similar precision. Ultimately, this work challenges assumptions about what constitutes "genuine" reasoning, bridging AI research and psychological theories of intelligence, bias, and the boundaries of human and artificial thought.

Wednesday, June 11, 2025

Communitarianism revisited

Etzioni, A. (2014).
Journal of Political Ideologies, 19(3), 241–260.

Abstract

This article provides a retrospective account and analysis of communitarianism. Drawing upon the author's involvement with the political branch of communitarianism, it attempts to summarize both the history of the school of thought as well as its most prominent ideas. These include the communitarian emphasis on the common good; the effort to find an acceptable balance between individual rights and social responsibilities; the basis of social order; and the need to engage in substantive moral dialogues. The article closes with a discussion of the tension between cultural relativism, according to which communities ought to be the ultimate arbiters of the good, and a universalistic position.


Here are some thoughts:

This article offers a comprehensive overview and critical reflection on the evolution of communitarian thought, particularly as it relates to political philosophy and public life. Etzioni traces the historical roots of communitarianism, highlighting its emphasis on the common good, the balance between individual rights and social responsibilities, and the necessity of substantive moral dialogue within communities. He notes that while communitarianism is a relatively small school in academic philosophy, its core ideas, such as prioritizing the welfare of the community alongside individual freedoms, are deeply embedded in various religious, political, and cultural traditions across the world.

The article explores the resurgence of communitarian ideas in the 1980s and 1990s as a response to the perceived excesses of individualism promoted by liberalism and laissez-faire conservatism. Etzioni discusses the tension between individual autonomy and communal obligations, arguing for a nuanced approach that seeks equilibrium between these often competing values, adapting as societal conditions change. He also addresses critiques of communitarianism, including concerns about its potential association with authoritarianism and the vagueness of the concept of "community."

For practicing psychologists, this article is significant because it underscores the importance of considering both individual and collective dimensions in understanding human behavior, ethical decision-making, and therapeutic practice. Recognizing the interplay between personal autonomy and social context can enhance psychologists’ ability to support clients in navigating moral dilemmas, fostering social connectedness, and promoting well-being within diverse communities.

Tuesday, June 10, 2025

Prejudiced patients: Ethical considerations for addressing patients’ prejudicial comments in psychotherapy.

Mbroh, H., Najjab, A., et al. (2020).
Professional Psychology: Research and Practice, 51(3), 284–290.

Abstract

Psychologists will often encounter patients who make prejudiced comments during psychotherapy. Some psychologists may argue that the obligations to social justice require them to address these comments. Others may argue that the obligation to promote the psychotherapeutic process requires them to ignore such comments. The authors present a decision-making strategy and an intervention based on principle-based ethics for thinking through such dilemmas.

Public Significance Statement

This article identifies ethical principles psychologists should consider when deciding whether to address their patients’ prejudicial comments in psychotherapy. It also provides an intervention strategy for addressing patients’ prejudicial comments.


Here are some thoughts:

The article explores how psychologists should ethically respond when clients express prejudicial views during therapy. The authors highlight a tension between two key obligations: the duty to promote the well-being of the patient (beneficence) and the broader responsibility to challenge social injustice (general beneficence). Using principle-based ethics, the article presents multiple real-life scenarios in which clients make discriminatory remarks—whether racist, ageist, sexist, or homophobic—and examines the ethical dilemmas that arise. In each case, psychologists must consider the context, potential harm, and therapeutic alliance before choosing whether or how to intervene. The authors emphasize that while tolerance for clients' values is important, it should not extend to condoning harmful biases. They propose a structured approach to addressing prejudice in session: show empathy, create cognitive dissonance by highlighting harm, and invite the client to explore the issue further. Recommendations include ongoing education, self-reflection, consultation, and thoughtful, non-punitive interventions. Ultimately, the article argues that addressing patient prejudice is ethically justifiable when done skillfully, and doing so can improve both individual therapy outcomes and societal well-being.

Monday, June 9, 2025

No Change? A Grounded Theory Analysis of Depressed Patients' Perspectives on Non-improvement in Psychotherapy

De Smet, M. M., et al. (2019).
Frontiers in Psychology, 10.

Aim: Understanding the effects of psychotherapy is a crucial concern for both research and clinical practice, especially when outcome tends to be negative. Yet, while outcome is predominantly evaluated by means of quantitative pre-post outcome questionnaires, it remains unclear what this actually means for patients in their daily lives. To explore this meaning, it is imperative to combine treatment evaluation with quantitative and qualitative outcome measures. This study investigates the phenomenon of non-improvement in psychotherapy, by complementing quantitative pre-post outcome scores that indicate no reliable change in depression symptoms with a qualitative inquiry of patients' perspectives.

Methods: The study took place in the context of a Randomised Controlled Trial evaluating time-limited psychodynamic and cognitive behavioral therapy for major depression. A mixed methods study was conducted including patients' pre-post outcome scores on the BDI-II-NL and post treatment Client Change Interviews. Nineteen patients whose data showed no reliable change in depression symptoms were selected. A grounded theory analysis was conducted on the transcripts of patients' interviews.

Findings: From the patients' perspective, non-improvement can be understood as being stuck between knowing versus doing, resulting in a stalemate. Positive changes (mental stability, personal strength, and insight) were stimulated by therapy offering moments of self-reflection and guidance, the benevolent therapist approach and the context as important motivations. Remaining issues (ambition to change but inability to do so) were attributed to the therapy hitting its limits, patients' resistance and impossibility and the context as a source of distress. “No change” in outcome scores therefore seems to involve a “partial change” when considering the patients' perspectives.

Conclusion: The study shows the value of integrating qualitative first-person analyses into standard quantitative outcome evaluation and particularly for understanding the phenomenon of non-improvement. It argues for more multi-method and multi-perspective research to gain a better understanding of (negative) outcome and treatment effects. Implications for both research and practice are discussed.

Here are some thoughts:

This study explores the perspectives of depressed patients who experienced no improvement in psychotherapy. While quantitative measures often assess therapy outcomes, the reasons behind a lack of progress from the patients' viewpoint remain unclear. Through a grounded theory analysis, the researchers aimed to understand this phenomenon. The study highlights the importance of considering the patient's subjective experience when evaluating the effectiveness of psychotherapy, particularly in cases where standard outcome measures might not capture the nuances of non-improvement.

Sunday, June 8, 2025

Promoting competent and flourishing life-long practice for psychologists: A communitarian perspective

Wise, E. H., & Reuman, L. (2019).
Professional Psychology: Research and Practice, 50(2), 129–135.

Abstract

Based on awareness of the challenges inherent in the practice of psychology there is a burgeoning interest in ensuring that psychologists who serve the public remain competent. These challenges include remaining current in our technical skills and maintaining sufficient personal wellness over the course of our careers. However, beyond merely maintaining competence, we encourage psychologists to envision flourishing lifelong practice that incorporates positive relationships, enhancement of meaning, and positive engagement. In this article we provide an overview of the foundational competencies related to professionalism including ethics, reflective practice, self-assessment, and self-care that underlie our ability to effectively apply technical skills in often complex and emotionally challenging relational contexts. Building on these foundational competencies that were initially defined and promulgated for academic training in health service psychology, we provide an initial framework for conceptualizing psychologist well-being and flourishing lifelong practice that incorporates tenets of applied positive psychology, values-based practice, and a communitarian-oriented approach into the following categories: fostering relationships, meaning making and value-based practice, and enhancing engagement. Finally, we propose broad strategies and specific examples intended to leverage current continuing education mandates into a broadly conceived vision of continuing professional development to support enhanced psychologist functioning for lifelong practice.

Here are some thoughts:

Wise and Reuman highlight the importance of lifelong learning for psychologists, emphasizing that competence involves maintaining both technical skills and personal wellness. The authors introduce a framework that integrates positive psychology, values-based practice, and a communitarian approach, focusing on fostering relationships, enhancing meaning, and promoting engagement. They stress the significance of foundational competencies such as ethics, reflective practice, self-assessment, and self-care, and advocate for leveraging continuing education mandates to support psychologists' ongoing development and well-being throughout their careers.

Saturday, June 7, 2025

Preventing Veteran Suicide: A Landscape Analysis of Existing Programs, Their Evidence, and What the Next Generation of Programs May Look Like

Ramchand, R. et al. (2025, April 16).
RAND.

Preventing veteran suicide is a national priority for government, veteran advocacy groups, and the private sector. This attention has led many individuals and organizations to leverage their expertise to create, expand, or promote activities that they hope will prevent future deaths. While the number and array of diverse approaches reflect a nation committed to a common goal, they also can create confusion. Advances in technology also generate questions about the future of veteran suicide prevention.

In this report, the authors analyze current and emerging activities to prevent veteran suicide. They introduce the RAND Suicide Prevention Activity Matrix, a framework that organizes current approaches, how they complement each other, how they might change, their evidence for preventing veteran suicide, and why they might (or might not) work. This framework places 26 categories of activities in a matrix based on whom the activity targets (the veteran directly, those who regularly interact with the veteran, or social influences) and what the activity is intended to accomplish (address social conditions, promote general well-being, address mental health symptoms, provide mental health supports, and prevent suicide crises). Entities committed to preventing veteran suicide and seeking to design evidence-informed, comprehensive suicide prevention strategies will benefit from the framework and evidence reviewed in this report, in addition to the recommendations the authors developed from these data.

Key Findings
  • The authors identified 307 suicide prevention programs, 156 of which were currently operating and 226 of which were proposed to expand existing services or initiate new programs.
  • These organizations' suicide prevention activities were categorized across 26 suicide prevention activity categories and organized into the RAND Suicide Prevention Activity Matrix.
  • Among the 156 current programs, there is a strong focus on those that aim to build social connections and those that offer case management or noncrisis psychological counseling.
  • Veterans are the primary focus of most current programs, but many programs are also offered to family members and friends — often in addition to serving veterans directly.
  • Nonprofit organizations operate most current programs, and just under half of the programs are accessed virtually or via a combination of in-person and virtual access.
  • Among the 226 proposed programs, the most common types are multifunctional digital health platforms (mobile health applications), suicide risk assessment tools, and real-time monitoring.
  • The following activity types have a robust evidence base for preventing suicide: community-based suicide prevention initiatives, suicide risk assessment, noncrisis psychological treatment, crisis psychological clinical services, and pharmacotherapy (for those with mental health conditions).
Recommendations
  • Organizations charged with developing, investing in, implementing, or evaluating comprehensive suicide prevention strategies should prioritize implementation of evidence-based prevention activities.
  • When implementing a suicide prevention activity, organizations should consider the context in which the activity is intended to be delivered.
  • Organizations should conduct a needs assessment to identify gaps in suicide prevention activities.
  • Organizations should apply different thresholds of evidence when considering different suicide prevention activities.
  • Organizations should invest strategically in research that can fill notable gaps in knowledge.