Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, July 18, 2025

Adversarial testing of global neuronal workspace and integrated information theories of consciousness

Ferrante, O., et al. (2025).
Nature.

Abstract

Different theories explain how subjective experience arises from brain activity. These theories have independently accrued evidence, but have not been directly compared. Here we present an open science adversarial collaboration directly juxtaposing integrated information theory (IIT) and global neuronal workspace theory (GNWT) via a theory-neutral consortium. The theory proponents and the consortium developed and preregistered the experimental design, divergent predictions, expected outcomes and interpretation thereof. Human participants (n = 256) viewed suprathreshold stimuli for variable durations while neural activity was measured with functional magnetic resonance imaging, magnetoencephalography and intracranial electroencephalography. We found information about conscious content in visual, ventrotemporal and inferior frontal cortex, with sustained responses in occipital and lateral temporal cortex reflecting stimulus duration, and content-specific synchronization between frontal and early visual areas. These results align with some predictions of IIT and GNWT, while substantially challenging key tenets of both theories. For IIT, a lack of sustained synchronization within the posterior cortex contradicts the claim that network connectivity specifies consciousness. GNWT is challenged by the general lack of ignition at stimulus offset and limited representation of certain conscious dimensions in the prefrontal cortex. These challenges extend to other theories of consciousness that share some of the predictions tested here. Beyond challenging the theories, we present an alternative approach to advance cognitive neuroscience through principled, theory-driven, collaborative research and highlight the need for a quantitative framework for systematic theory testing and building.

Here are some thoughts:

This research explores a major collaborative effort to empirically test two leading theories of consciousness: Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). These theories represent two of the most prominent perspectives among the more than 200 ideas currently proposed to explain how subjective experience arises from brain activity. GNWT suggests that consciousness occurs when information is globally broadcast across the brain, particularly involving the prefrontal cortex. In contrast, IIT posits that consciousness corresponds to the integration of information in the brain, especially within the posterior cortex.

To evaluate these theories, the Cogitate Consortium organized an “adversarial collaboration,” in which proponents of both theories, along with neutral researchers, agreed on specific, testable predictions derived from each model. IIT predicted that conscious experience should involve sustained synchronization of activity in the posterior cortex, while GNWT predicted that consciousness would involve a “neural ignition” process and that conscious content could be decoded from the prefrontal cortex. These hypotheses were tested across several labs using consistent experimental protocols.

The findings, however, were inconclusive. The data did not reveal the sustained posterior synchronization expected by IIT, nor did they consistently support GNWT’s predictions about prefrontal cortex activity and neural ignition. Although the results presented challenges for both theories, they did not decisively support or refute either one. Importantly, the study marked a significant step forward in the scientific investigation of consciousness. It demonstrated the value of collaborative, theory-neutral research and addressed a long-standing problem in consciousness science—namely, that most studies have been conducted by proponents of specific theories, often resulting in confirmation bias.

The project was also shaped by insights from psychologist Daniel Kahneman, who pioneered the idea of adversarial collaboration. He noted that scientists are rarely persuaded to abandon their theories even in the face of counter-evidence. While this kind of theoretical stubbornness might seem like a flaw, the article argues it can be productive when managed within a collaborative and self-correcting scientific culture. Ultimately, the study underscores how difficult it is to unravel the nature of consciousness and suggests that progress may require both improved experimental methods and potentially a conceptual revolution. Still, by embracing open collaboration, the scientific community has taken a crucial step toward better understanding one of the most complex problems in science.

Thursday, July 17, 2025

Cognitive bias and how to improve sustainable decision making

Korteling, J. E. H., Paradies, G. L., &
Sassen-van Meer, J. P. (2023). 
Frontiers in Psychology, 14, 1129835.

Abstract

The rapid advances of science and technology have provided a large part of the world with all conceivable needs and comfort. However, this welfare comes with serious threats to the planet and many of its inhabitants. An enormous amount of scientific evidence points at global warming, mass destruction of bio-diversity, scarce resources, health risks, and pollution all over the world. These facts are generally acknowledged nowadays, not only by scientists, but also by the majority of politicians and citizens. Nevertheless, this understanding has caused insufficient changes in our decision making and behavior to preserve our natural resources and to prevent upcoming (natural) disasters. In the present study, we try to explain how systematic tendencies or distortions in human judgment and decision-making, known as “cognitive biases,” contribute to this situation. A large body of literature shows how cognitive biases affect the outcome of our deliberations. In natural and primordial situations, they may lead to quick, practical, and satisfying decisions, but these decisions may be poor and risky in a broad range of modern, complex, and long-term challenges, like climate change or pandemic prevention. We first briefly present the social-psychological characteristics that are inherent to (or typical for) most sustainability issues. These are: experiential vagueness, long-term effects, complexity and uncertainty, threat of the status quo, threat of social status, personal vs. community interest, and group pressure. For each of these characteristics, we describe how this relates to cognitive biases, from a neuro-evolutionary point of view, and how these evolved biases may affect sustainable choices or behaviors of people. Finally, based on this knowledge, we describe influence techniques (interventions, nudges, incentives) to mitigate or capitalize on these biases in order to foster more sustainable choices and behaviors.

Here are some thoughts:

The article explores why, despite widespread scientific knowledge and public awareness of urgent sustainability issues such as climate change, biodiversity loss, and pollution, there is still insufficient behavioral and policy change to effectively address these problems. The authors argue that cognitive biases (systematic errors in human thinking) play a significant role in hindering sustainable decision-making. These biases evolved to help humans make quick decisions in immediate, simple contexts but are poorly suited for the complex, long-term, and abstract nature of sustainability challenges.

Sustainability issues have several psychological characteristics that make them particularly vulnerable to cognitive biases. These include experiential vagueness, where problems develop slowly and are difficult to perceive directly; long-term effects, where benefits of sustainable actions are delayed while costs are immediate; complexity and uncertainty; threats to the status quo and social standing; conflicts between personal and community interests; and social pressures that discourage sustainable behavior. The article highlights specific cognitive biases linked to these characteristics, such as hyperbolic discounting (the preference for immediate rewards over future benefits), normalcy bias (underestimating the likelihood and impact of disasters), and the tragedy of the commons (prioritizing personal gain over collective welfare), along with others like confirmation bias, the endowment effect, and sunk-cost fallacy, all of which skew judgment and impede sustainable choices.
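The pull of hyperbolic discounting is easy to make concrete with a little arithmetic. Below is a minimal Python sketch of the standard one-parameter hyperbolic form, V = A / (1 + kD); the discount rate k and the dollar amounts are illustrative assumptions, not values from the article:

```python
# Hyperbolic discounting: perceived value V = A / (1 + k * D), where A is the
# reward, D the delay, and k an individual discount rate (illustrative value).
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    return amount / (1 + k * delay_days)

# A large sustainability payoff five years out versus a small payoff today.
future_benefit = hyperbolic_value(100, delay_days=5 * 365)  # ~= 1.08
immediate_saving = hyperbolic_value(20, delay_days=0)       # 20.00

print(f"Perceived value of $100 in 5 years: ${future_benefit:.2f}")
print(f"Perceived value of $20 today:       ${immediate_saving:.2f}")
```

On this curve, even a large future benefit is dwarfed by a small immediate one, which is exactly the asymmetry the authors argue sustainability interventions must work around.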

To address these challenges, the authors recommend interventions that leverage or counteract these biases through environmental and contextual changes rather than solely relying on education or bias training. Techniques such as nudges, incentives, framing effects, and emphasizing benefits to family or in-groups can make sustainable choices easier and more appealing. The key takeaway is that understanding and addressing cognitive biases is essential for improving sustainable decision-making at both individual and policy levels. Policymakers and organizations should design interventions that account for human psychological tendencies to foster more sustainable behaviors effectively.

Wednesday, July 16, 2025

The moral blueprint is not necessary for STEM wisdom

Kachhiyapatel, N., & Grossmann, I. (2025, June 11).
PsyArXiv

Abstract

How can one bring wisdom into STEM education? One popular position holds that wise judgment follows from teaching morals and ethics in STEM. However, wisdom scholars debate the causal role of morality and whether cultivating a moral blueprint is a necessary condition for wisdom. Some philosophers and education scientists champion this view, whereas social psychologists and cognitive scientists argue that moral features like prosocial behavior are reinforcing factors or outcomes of wise judgment rather than prerequisites. This debate matters particularly for science and technology, where wisdom-demanding decisions typically involve incommensurable values and radical uncertainty. Here, we evaluate these competing positions through four lines of evidence. First, empirical research shows that heightened moralization aligns with foolish rejection of scientific claims, political polarization, and value extremism. Second, economic scholarship on folk theorems demonstrates that wisdom-related metacognition—perspective-integration, context-sensitivity, and balancing long- and short-term goals—can give rise to prosocial behavior without an a priori moral blueprint. Third, in real life moral values often compete, making metacognition indispensable to balance competing interests for the common good. Fourth, numerous scientific domains require wisdom yet operate beyond moral considerations. We address potential objections about immoral and Machiavellian applications of blueprint-free wisdom accounts. Finally, we explore implications for giftedness: what exceptional wisdom looks like in a STEM context, and how to train it. Our analysis suggests that STEM wisdom emerges not from prescribed moral codes but from metacognitive skills that enable navigation of complexity and uncertainty.

Here are some thoughts:

This article challenges the idea that wisdom in STEM and other complex domains requires a fixed moral blueprint. Instead, it highlights perspectival metacognition—skills like perspective-taking, intellectual humility, and balancing short- and long-term outcomes—as the core of wise judgment.

For psychologists, this suggests that strong moral convictions alone can sometimes impair wisdom by fostering rigidity or polarization. The findings support a shift in ethics training, supervision, and professional development toward cultivating reflective, context-sensitive thinking. Rather than relying on standardized assessments or fixed values, fostering metacognitive skills may better prepare psychologists and their clients to navigate complex, high-stakes decisions with wisdom and flexibility.

Tuesday, July 15, 2025

Medical AI and Clinician Surveillance — The Risk of Becoming Quantified Workers

Cohen, I. G., Ajunwa, I., & Parikh, R. B. (2025).
New England Journal of Medicine.
Advance online publication.

Here is an excerpt:

There are several ways in which AI-based monitoring tools designed to benefit patients and clinicians might be used for clinician surveillance. First, ambient AI scribe tools, which transcribe and interpret patient and clinician speech to generate a structured note, have been rapidly adopted with a goal of reducing the burden associated with documentation and improving documentation accuracy. But ambient dictation systems introduce new capabilities for monitoring clinicians. By analyzing speech patterns, sentiment, and content, health care systems could use AI scribes to assess how often clinicians’ recommendations deviate from institutional guidelines.

In addition, these systems could detect “efficiency outliers” — clinicians who spend more time conversing with patients than employers consider ideal, at the expense of conducting new-patient visits or more total visits. Ambient monitoring is especially worrisome, given cases of employers terminating the contracts of physicians who didn’t meet visit-time expectations. Akin to automated quality-improvement dashboards for tracking adherence to chronic-disease–management standards, AI models may generate performance scores on the basis of adherence to scripted protocols, average time spent with each patient, or degree of shared decision making, which could be inferred with the use of linguistic analysis. Even if these metrics are established to support quality-improvement goals, hospitals and health care systems could leverage them for evaluations of clinicians or performance-based reimbursement adjustments.

Here are some thoughts:

This article is important to psychologists as it explores the psychological and ethical ramifications of AI-driven surveillance in healthcare, which parallels concerns in mental health practice. The quantification of clinicians through tools like ambient scribes and communication analytics threatens professional autonomy, potentially leading to burnout, stress, and reduced job satisfaction—key areas of study in occupational and health psychology. Additionally, the tension between algorithmic conformity and individualized care mirrors challenges in therapeutic settings, where standardized protocols may conflict with personalized treatment approaches. Psychologists can contribute expertise in human behavior, workplace dynamics, and ethical frameworks to advocate for balanced AI integration that prioritizes clinician well-being and patient-centered care. The article also highlights equity issues, as surveillance may disproportionately affect marginalized clinicians, aligning with psychology’s focus on systemic inequities.

Monday, July 14, 2025

Promises and pitfalls of large language models in psychiatric diagnosis and knowledge tasks

Bang, C.-B., Jung, Y.-C., et al. (2025).
The British Journal of Psychiatry,
226(4), 243–244.

Abstract:

This study evaluates the performance of five large language models (LLMs), including GPT-4, in psychiatric diagnosis and knowledge tasks using a zero-shot approach. Compared to 11 psychiatry residents, GPT-4 demonstrated superior accuracy in diagnostic (F1 score: 63.41% vs. 47.43%) and knowledge tasks (85.05% vs. 62.01%). However, GPT-4 exhibited higher comorbidity error rates (30.48% vs. 0.87%), suggesting limitations in contextual understanding. When residents received GPT-4 guidance, their performance improved significantly without increasing critical errors. The findings highlight the potential of LLMs as clinical aids but underscore the need for careful integration to preserve human expertise and mitigate risks like over-reliance. Future research should compare LLMs with board-certified psychiatrists and explore multifaceted diagnostic frameworks.
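For readers less familiar with the F1 score cited above, it is the harmonic mean of precision and recall. Here is a quick sketch with invented confusion counts, chosen only so the result lands near GPT-4's reported 63.41%; these are not the study's data:

```python
# Hypothetical confusion counts for one batch of diagnostic predictions.
true_positives, false_positives, false_negatives = 52, 23, 37

precision = true_positives / (true_positives + false_positives)  # ~0.693
recall = true_positives / (true_positives + false_negatives)     # ~0.584
f1 = 2 * precision * recall / (precision + recall)               # harmonic mean

print(f"precision={precision:.2%}  recall={recall:.2%}  F1={f1:.2%}")  # F1 ~ 63.41%
```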

Here are some thoughts:

For psychologists, these findings underscore the importance of balancing AI-assisted efficiency with human judgment. While LLMs could serve as valuable training aids or supplemental tools, their limitations emphasize the irreplaceable role of psychologists in interpreting complex patient narratives, cultural factors, and individualized care. Additionally, the study raises ethical considerations about over-reliance on AI, urging psychologists to maintain rigorous critical thinking and therapeutic rapport. Ultimately, this research calls for a thoughtful, evidence-based approach to integrating AI into mental health practice—one that leverages technological advancements while preserving the human elements essential to effective psychological care.

Sunday, July 13, 2025

ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it

Anthony Cuthbertson
The Independent
Originally posted 6 July 25

Here is an excerpt:

“There have already been deaths from the use of commercially available bots,” they noted. “We argue that the stakes of LLMs-as-therapists outweigh their justification and call for precautionary restrictions.”

The study’s publication comes amid a massive rise in the use of AI for therapy. Writing in The Independent last week, psychotherapist Caron Evans noted that a “quiet revolution” is underway with how people are approaching mental health, with artificial intelligence offering a cheap and easy option to avoid professional treatment.

“From what I’ve seen in clinical supervision, research and my own conversations, I believe that ChatGPT is likely now to be the most widely used mental health tool in the world,” she wrote. “Not by design, but by demand.”

The Stanford study found that the dangers involved with using AI bots for this purpose arise from their tendency to agree with users, even if what they’re saying is wrong or potentially harmful. This sycophancy is an issue that OpenAI acknowledged in a May blog post, which detailed how the latest ChatGPT had become “overly supportive but disingenuous”, leading to the chatbot “validating doubts, fueling anger, urging impulsive decisions, or reinforcing negative emotions”.

While ChatGPT was not specifically designed to be used for this purpose, dozens of apps have appeared in recent months that claim to serve as an AI therapist. Some established organisations have even turned to the technology – sometimes with disastrous consequences. In 2023, the National Eating Disorders Association in the US was forced to shut down its AI chatbot Tessa after it began offering users weight loss advice.


Here are some thoughts:

The article warns that AI chatbots like ChatGPT are increasingly being used for mental health support, often with dangerous consequences. A Stanford study found that these chatbots can validate harmful thoughts, reinforce negative emotions, and provide unsafe information—escalating crises like suicidal ideation, mania, and psychosis. Real-world cases include a Florida man with schizophrenia who became obsessed with an AI-generated persona and later died in a police confrontation. Experts warn of a phenomenon called “chatbot psychosis,” where AI interactions intensify delusions in vulnerable individuals. Despite growing awareness, OpenAI has not adequately addressed the risks, and researchers call for urgent restrictions on using AI as a therapeutic tool. While companies like Meta see AI as the future of mental health care, critics stress that more data alone won't solve the problem, and current safeguards are insufficient.

Saturday, July 12, 2025

Brain-Inspired Affective Empathy Computational Model and Its Application on Altruistic Rescue Task

Feng, H., Zeng, Y., & Lu, E. (2022).
Frontiers in Computational Neuroscience,
16, 784967.

Abstract

Affective empathy is an indispensable ability for humans and other species' harmonious social lives, motivating altruistic behavior, such as consolation and aid-giving. How to build an affective empathy computational model has attracted extensive attention in recent years. Most affective empathy models focus on the recognition and simulation of facial expressions or emotional speech of humans, namely Affective Computing. However, these studies lack the guidance of neural mechanisms of affective empathy. From a neuroscience perspective, affective empathy is formed gradually during the individual development process: experiencing one's own emotions, forming the corresponding Mirror Neuron System (MNS), and understanding the emotions of others through the mirror mechanism. Inspired by this neural mechanism, we constructed a brain-inspired affective empathy computational model, which contains two submodels: (1) We designed an Artificial Pain Model inspired by the Free Energy Principle (FEP) to simulate the pain generation process in living organisms. (2) We built an affective empathy spiking neural network (AE-SNN) that simulates the mirror mechanism of the MNS and has self-other differentiation ability. We apply the brain-inspired affective empathy computational model to the pain empathy and altruistic rescue task to achieve the rescue of companions by intelligent agents. To the best of our knowledge, our study is the first to reproduce the emergence process of mirror neurons and anti-mirror neurons in the SNN field. Compared with traditional affective empathy computational models, our model is more biologically plausible, and it provides a new perspective for achieving artificial affective empathy, which has special potential for the social robots field in the future.

Here are some thoughts:

This article is significant because it highlights a growing effort to imbue machines with complex human-like experiences and behaviors, such as pain and altruism—traits that are deeply rooted in human psychology and evolution. By attempting to program pain, researchers are not merely simulating a sensory reaction but exploring how discomfort or negative feedback might influence learning, decision-making, and self-preservation in AI systems.

This has profound psychological implications, as it touches on how emotions and aversive experiences shape behavior and consciousness in humans. Similarly, programming altruism raises questions about the nature of empathy, cooperation, and moral reasoning—core areas of interest in social and cognitive psychology. Understanding how these traits can be modeled in AI helps psychologists explore the boundaries of machine autonomy, ethical behavior, and the potential consequences of creating entities that mimic human emotional and moral capacities. The broader implication is that this research challenges traditional psychological concepts of mind, consciousness, and ethics, while also prompting critical discussions about how such AI systems might interact with and influence human societies in the future.

Friday, July 11, 2025

Artificial intelligence in psychological practice: Applications, ethical considerations, and recommendations

Hutnyan, M., & Gottlieb, M. C. (2025).
Professional Psychology: Research and Practice.
Advance online publication.

Abstract

Artificial intelligence (AI) systems are increasingly relied upon in the delivery of health care services traditionally provided solely by humans, and the widespread use of AI in the routine practice of professional psychology is on the horizon. It is incumbent on practicing psychologists to be prepared to effectively implement AI technologies and engage in thoughtful discourse regarding the ethical and responsible development, implementation, and regulation of these technologies. This article provides a brief overview of what AI is and how it works, a description of its current and potential future applications in professional practice, and a discussion of the ethical implications of using AI systems in the delivery of psychological services. Applications of AI technologies in key areas of clinical practice are addressed, including assessment and intervention. Using the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017) as a framework, anticipated ethical challenges across five domains—harm and nonmaleficence, autonomy and informed consent, fidelity and responsibility, privacy and confidentiality, and bias, respect, and justice—are discussed. Based on these challenges, provisional recommendations for psychologists are provided.

Impact Statement

This article provides an overview of artificial intelligence (AI) and how it works, describes current and developing applications of AI in the practice of professional psychology, and explores the potential ethical challenges of using these technologies in the delivery of psychological services. The use of AI in professional psychology has many potential benefits, but it also has drawbacks; ethical psychologists are wise to carefully consider their use of AI in practice.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.


Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.
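Temporal binding is typically quantified as the compression between the actual and the judged interval separating an action from its outcome, with stronger compression read as a stronger sense of authorship. A toy sketch of that computation follows; the trial structure, interval values, and variable names are hypothetical rather than the paper's actual pipeline:

```python
import statistics

# Each trial: (actual action-outcome interval in ms, participant's judged interval in ms).
free_choice_trials = [(500, 380), (500, 410), (500, 395)]
coerced_trials = [(500, 470), (500, 455), (500, 480)]

def binding_effect(trials):
    # Binding = actual minus judged interval; larger compression implies stronger agency.
    return statistics.mean(actual - judged for actual, judged in trials)

print(f"free choice binding: {binding_effect(free_choice_trials):.0f} ms")  # 105 ms
print(f"coerced binding:     {binding_effect(coerced_trials):.0f} ms")      # 32 ms
```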

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.

Wednesday, July 9, 2025

Management of Suicidal Thoughts and Behaviors in Youth. Systematic Review

Sim, L., Wang, Z., et al. (2025).
Prepared by the Mayo Clinic Evidence-based 
Practice Center under

Abstract

Background: Suicide is a leading cause of death in young people and an escalating public health crisis. We aimed to assess the effectiveness and harms of available treatments for suicidal thoughts and behaviors in youths at heightened risk for suicide. We also aimed to examine how social determinants of health, racism, disparities, care delivery methods, and patient demographics affect outcomes.

Methods: We conducted a systematic review and searched several databases including MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and others from January 2000 to September 2024. We included randomized clinical trials (RCTs), comparative observational studies, and before-after studies of psychosocial interventions, pharmacological interventions, neurotherapeutics, emerging therapies, and combination therapies. Eligible patients were youths (aged 5 to 24 years) who had a heightened risk for suicide, including youths who have experienced suicidal ideation, prior attempts, hospital discharge for mental health treatment, or command hallucinations; were identified as high risk on validated questionnaires; or were from other at-risk groups. Pairs of independent reviewers selected and appraised studies. Findings were synthesized narratively.

Results: We included 65 studies reporting on 14,534 patients (33 RCTs, 13 comparative observational studies, and 19 before-after studies). Psychosocial interventions identified from the studies comprised psychotherapy interventions (33 studies, Cognitive Behavior Therapy, Dialectical Behavior Therapy, Collaborative Assessment and Management of Suicidality, Dynamic Deconstructive Psychotherapy, Attachment-Based Family Therapy, and Family-Focused Therapy), acute (i.e., 1 to 4 sessions/contacts) psychosocial interventions (19 studies, acute safety planning, family-based crisis management, motivational interviewing crisis interventions, continuity of care following crisis, and brief adjunctive treatments), and school/community-based psychosocial interventions (13 studies, social network interventions, school-based skills interventions, suicide awareness/gatekeeper programs, and community-based, culturally tailored adjunct programs). For most categories of psychotherapies (except DBT), acute interventions, or school/community-based interventions, there was insufficient strength of evidence and uncertainty about suicidal thoughts or attempts. None of the studies evaluated adverse events associated with the interventions. The evidence base on pharmacological treatment for suicidal youths was largely nonexistent at the present time. No eligible study evaluated neurotherapeutics or emerging therapies.

Conclusion: The current evidence on available interventions intended for youths at heightened risk of suicide is uncertain. Medication, neurotherapeutics, and emerging therapies remain unstudied in this population. Given that most treatments were adapted from adult protocols that may not fit the developmental and contextual experience of adolescents or younger children, this limited evidence base calls for the development of novel, developmentally and trauma-informed treatments, as well as multilevel interventions to address the rising suicide risk in youths.

Tuesday, July 8, 2025

Behavioral Ethics: Ethical Practice Is More Than Memorizing Compliance Codes

Cicero, F. R. (2021).
Behavior Analysis in Practice, 14(4),
1169–1178.

Abstract

Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, it is found that breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can trigger unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.

Here are some thoughts:

This article argues that ethical practice requires more than memorizing compliance codes, as professionals aware of such codes still commit ethical breaches. Behavioral ethics suggests that environmental and situational variables often evoke and maintain unethical decisions, conceptualizing these decisions as operant behavior. Thus, knowledge of ethical codes alone is insufficient to prevent unethical practice; an assessment of environmental influences is necessary. The paper translates behavioral ethics constructs like self-serving bias, incrementalism, framing, obedience to authority, conformity bias, and overconfidence bias into behavior-analytic principles such as reinforcement, shaping, motivating operations, and stimulus control. This perspective shifts the focus from blaming individuals towards analyzing environmental factors that prompt ethical breaches, advocating for proactive assessment to support ethical behavior.

Understanding these concepts is vital for psychologists because they too are subject to environmental pressures that can lead to unethical actions, despite ethical training. The article highlights that ethical knowledge does not always translate to ethical behavior, emphasizing that situational factors often play a more significant role. Psychologists must recognize subtle influences such as the gradual normalization of unethical actions (incrementalism), the impact of how situations are described (framing), pressures from authority figures, and conformity to group norms, as these can all compromise ethical judgment. An overconfidence in one's own ethical standing can further obscure these influences. By applying a behavior-analytic lens, psychologists can better identify and mitigate these environmental risks, fostering a culture of proactive ethical assessment within their practice and institutions to safeguard clients and the profession.

Monday, July 7, 2025

Subconscious Suggestion

Ferketic, M. (2025, Forthcoming)  

Abstract

Subconscious suggestion is a silent but pervasive force shaping perception, decision-making, and attentional structuring beneath awareness. Operating as internal impressive action, it passively introduces impulses, biases, and associative framings into consciousness, subtly guiding behavior without volitional approval. Like hypnotic suggestion, it does not dictate action; it attempts to compel through motivational pull, influencing perception and intent through saliency and potency gradients. Unlike previous theories that depict subconscious influence as abstract or deterministic, this work presents a novel structured, mechanistic, operational model of function, demonstrating from first principles how subconscious suggestion disperses influence into awareness, interacts with attentional deployment, and negotiates attentional sovereignty. Additionally, it frames free will not as exemption from subconscious force, but as mastery of its regulation, with autonomy emerging from the ability to recognize, refine, and command suggestive forces rather than be unconsciously governed by them.

Here are some thoughts:

Subconscious suggestion, as detailed in the article, is a fundamental cognitive mechanism that shapes perception, attention, and behavior beneath conscious awareness. It operates as internal impressive action—passively introducing impulses, biases, and associative framings into consciousness, subtly guiding decisions without direct volitional control. Unlike deterministic models of unconscious influence, this framework presents subconscious suggestion as a structured, mechanistic process that competes for attention through saliency and motivational potency gradients. It functions much like a silent internal hypnotist, not dictating action but attempting to compel through perceptual framing and emotional nudges.

For practicing psychologists, understanding this model is crucial—it provides insight into how automatic cognitive processes contribute to habit formation, emotional regulation, motivation, and decision-making. It reframes free will not as exemption from subconscious forces, but as mastery over them, emphasizing the importance of attentional sovereignty and volitional override in clinical interventions. This knowledge equips psychologists to better identify, assess, and guide clients in managing subconscious influences, enhancing therapeutic outcomes across conditions such as addiction, anxiety, compulsive behaviors, and maladaptive thought patterns.

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness—specifically, sharing similar moral judgments or decision-making patterns—is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.

Saturday, July 5, 2025

Bias Is Not Color Blind: Ignoring Gender and Race Leads to Suboptimal Selection Decisions.

Rabinovitch, H., et al. (2025, May 27).

Abstract

Blindfolding—selecting candidates based on objective selection tests while avoiding personal information about their race and gender—is commonly used to mitigate bias in selection. Selection tests, however, often benefit people of a certain race or gender. In such cases, selecting the best candidates requires incorporating, rather than ignoring, the biasing factor. We examined people's preference for avoiding candidates’ race and gender, even when fully aware that these factors bias the selection test. We put forward a novel prediction suggesting that paradoxically, due to their fear of appearing partial, people would choose not to reveal race and gender information, even when doing so means making suboptimal decisions. Across three experiments (N = 3,621), hiring professionals (and laypeople) were tasked with selecting the best candidate for a position when they could reveal the candidate’s race and gender or avoid it. We further measured how fear for their social image corresponds with their decision, as well as how job applicants perceive such actions. The results supported our predictions, showing that more than 50% did not reveal gender and race information, compared to only 30% who did not reveal situational biasing information, such as the time of day in which the interview was held. Those who did not reveal information expressed higher concerns for their social and self-image than those who decided to reveal. We conclude that decision-makers avoid personal biasing information to maintain a positive image, yet by doing so, they compromise fairness and accuracy alike.

Public significance statements

Blindfolding—ignoring one’s gender and race in selection processes—is a widespread strategy aimed at reducing bias and increasing diversity. Selection tests, however, often unjustly benefit members of certain groups, such as men and white people. In such cases, correcting the bias requires incorporating, rather than ignoring, information about the candidates’ gender and race. The current research shows that decision-makers are reluctant to reveal such information due to their fear of appearing partial. Paradoxically, decision-makers avoid such information, even when fully aware that doing so may perpetuate bias, in order to protect their social image as impartial, but miss out on the opportunity to advance fairness and choose the best candidates.
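The authors' point that a biased test calls for correction rather than blindness can be made concrete with a toy ranking. In this hedged sketch, the bias offset and candidate scores are invented for illustration; the paper documents the phenomenon but does not prescribe this particular adjustment:

```python
# Suppose the selection test is documented to under-score one group by ~5 points.
KNOWN_TEST_BIAS = 5.0  # illustrative magnitude, not an empirical estimate

candidates = [
    {"name": "A", "raw_score": 82, "affected_by_bias": False},
    {"name": "B", "raw_score": 79, "affected_by_bias": True},  # true ability ~84
]

def corrected_score(candidate):
    offset = KNOWN_TEST_BIAS if candidate["affected_by_bias"] else 0.0
    return candidate["raw_score"] + offset

blindfolded_pick = max(candidates, key=lambda c: c["raw_score"])
informed_pick = max(candidates, key=corrected_score)

print("Blindfolded pick:", blindfolded_pick["name"])  # A: 82 beats 79
print("Informed pick:   ", informed_pick["name"])     # B: 84 beats 82 after correction
```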

Here are some thoughts:

This research is critically important to practicing psychologists because it sheds light on the complex interplay between bias, decision-making, and social image concerns in hiring processes. The study demonstrates how well-intentioned practices like "blindfolding"—omitting race or gender information to reduce discrimination—can paradoxically perpetuate systemic biases when selection tools themselves are flawed. Practicing psychologists must understand that ignoring personal attributes does not eliminate bias but can instead obscure its effects, leading to suboptimal and unfair outcomes. By revealing how decision-makers avoid sensitive information out of fear of appearing partial, the research highlights the psychological mechanisms—such as social and self-image concerns—that drive this avoidance. This insight is crucial for psychologists involved in organizational consulting, personnel training, or policy development, as it underscores the need for more nuanced strategies that address bias directly rather than avoiding it.

Additionally, the findings inform interventions aimed at promoting diversity, equity, and inclusion by showing that transparency and informed adjustments based on demographic factors may be necessary to achieve fairer outcomes. Ultimately, the research challenges traditional assumptions about neutrality in selection decisions and urges psychologists to advocate for evidence-based approaches that actively correct for bias while considering the broader implications of perceived fairness and merit.

Friday, July 4, 2025

The Psychology of Moral Conviction

Skitka, L. J., et al. (2020).
Annual Review of Psychology, 72(1),
347–366.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

Here are some thoughts:

The article provides valuable insights into how individuals perceive and process attitudes grounded in fundamental beliefs about right and wrong. It distinguishes morally convicted attitudes from other constructs, such as strong but nonmoral attitudes or religious beliefs, by highlighting that moral convictions are viewed as universally and objectively true and are relatively resistant to authority or peer influence. These convictions often lead to significant social and political consequences, including intolerance of differing views, resistance to compromise, increased political engagement, and heightened emotional responses. The article also explores the processes of attitude moralization—how an issue becomes infused with moral significance—and demoralization, offering a domain theory of attitude moralization that suggests different pathways depending on whether the initial attitude is perceived as a preference, convention, or existing moral imperative.

This knowledge is critically important to practicing psychologists because it enhances their understanding of how moral convictions shape behavior, decision-making, and interpersonal dynamics. For instance, therapists working with clients on issues involving conflict resolution, values clarification, or behavioral change must consider the role of moral conviction in shaping resistance to persuasion or difficulty in compromising. Understanding moral conviction can also aid psychologists in navigating cultural differences, addressing polarization in group settings, and promoting tolerance by recognizing how individuals intuitively perceive certain issues as moral. Furthermore, as society grapples with increasingly divisive sociopolitical challenges—such as climate change, immigration, and public health crises—psychologists can use these insights to foster dialogue, reduce moral entrenchment, and encourage constructive engagement. Ultimately, integrating the psychology of moral conviction into practice allows for more nuanced, empathetic, and effective interventions across clinical, organizational, and community contexts.

Thursday, July 3, 2025

Mindfulness, moral reasoning and responsibility: towards virtue in Ethical Decision-Making.

Small, C., & Lew, C. (2019).
Journal of Business Ethics, 169(1),
103–117.

Abstract

Ethical decision-making is a multi-faceted phenomenon, and our understanding of ethics rests on diverse perspectives. While considering how leaders ought to act, scholars have created integrated models of moral reasoning processes that encompass diverse influences on ethical choice. With this, there has been a call to continually develop an understanding of the micro-level factors that determine moral decisions. Both rationalist factors, such as moral processing, and non-rationalist factors, such as virtue and humanity, shape ethical decision-making. Focusing on the role of moral judgement and moral intent in moral reasoning, this study asks what bearing a trait of mindfulness and a sense of moral responsibility may have on this process. A survey measuring mindfulness, moral responsibility and moral judgement completed by 171 respondents was used to test four hypotheses on moral judgement and intent in relation to moral responsibility and mindfulness. The results indicate that mindfulness predicts moral responsibility but not moral judgement. Moral responsibility does not predict moral judgement, but moral judgement predicts moral intent. The findings give further insight into the outcomes of mindfulness and expand insights into models of ethical decision-making. We offer suggestions for further research on the role of mindfulness and moral responsibility in ethical decision-making.

Here are some thoughts:

This research explores the interplay between mindfulness, moral reasoning, and moral responsibility in ethical decision-making. Drawing on Rest’s model of moral reasoning—which outlines four phases (awareness, judgment, intent, and behavior)—the study investigates how mindfulness as a virtue influences these stages, particularly moral judgment and intent, and how it relates to a sense of moral responsibility. Regression analyses revealed that while mindfulness did not directly predict moral judgment, it significantly predicted moral responsibility. Additionally, moral judgment was found to strongly predict moral intent.

For practicing psychologists, this study is important for several reasons. First, it highlights the potential role of mindfulness as a trait linked to moral responsibility, suggesting that cultivating mindfulness may enhance ethical decision-making by fostering a greater sense of accountability toward others. This has implications for ethics training and professional development in psychology, especially in fields where practitioners face complex moral dilemmas. Second, the findings underscore the importance of integrating non-rationalist factors—such as virtues and emotional awareness—into traditional models of moral reasoning, offering a more holistic understanding of ethical behavior. Third, the research supports the use of scenario-based approaches in training professionals to navigate real-world ethical challenges, emphasizing the contextual nature of moral reasoning. Finally, the paper contributes to the broader literature on mindfulness by linking it to prosocial behaviors and ethical outcomes, which can inform therapeutic practices aimed at enhancing clients’ moral self-awareness and responsible decision-making.

Wednesday, July 2, 2025

Realization of Empathy Capability for the Evolution of Artificial Intelligence Using an MXene(Ti3C2)-Based Memristor

Wang, Y., Zhang, Y., et al. (2024).
Electronics, 13(9), 1632.

Abstract

Empathy is the emotional capacity to feel and understand the emotions experienced by other human beings from within their frame of reference. As a unique psychological faculty, empathy is an important source of motivation to behave altruistically and cooperatively. Although human-like emotion should be a critical component in the construction of artificial intelligence (AI), the discovery of emotional elements such as empathy is subject to complexity and uncertainty. In this work, we demonstrated an interesting electrical device (i.e., an MXene (Ti3C2) memristor) and successfully exploited the device to emulate a psychological model of “empathic blame”. To emulate this affective reaction, MXene was introduced into memristive devices because of its interesting structure and ionic capacity. Additionally, across several rehearsal repetitions, the self-adaptive characteristic of the memristive weights corresponded to different levels of empathy. Moreover, an artificial neural system was designed to analogously realize a moral judgment with empathy. This work may indicate a breakthrough in making cool machines manifest real voltage-motivated feelings at the level of the hardware rather than the algorithm.

Here are some thoughts:

This research represents a critical step toward endowing machines with human-like emotional capabilities, particularly empathy. Traditionally, AI has been limited to algorithmic decision-making and pattern recognition, lacking the nuanced ability to understand or simulate human emotions. By using an MXene-based memristor to emulate "empathic blame," researchers have demonstrated a hardware-level mechanism that mimics how humans adjust their moral judgments based on repeated exposure to similar situations—an essential component of empathetic reasoning. This breakthrough suggests that future AI systems could be designed not just to recognize emotions but to adaptively respond to them in real time, potentially leading to more socially intelligent machines.

For psychologists, this research raises profound questions about the nature of empathy, its role in moral judgment, and whether artificially created systems can truly embody these traits or merely imitate them. The ability to program empathy into AI could change how we conceptualize machine sentience and emotional intelligence, blurring the lines between biological and artificial cognition. Furthermore, as AI becomes more integrated into social, therapeutic, and even judicial contexts, understanding how machines might "feel" or interpret human suffering becomes increasingly relevant. The study also opens up new interdisciplinary dialogues between neuroscience, ethics, and AI development, emphasizing the importance of considering psychological principles in the design of emotionally responsive technologies. Ultimately, this work signals a shift from purely functional AI toward systems capable of engaging with humans on a deeper, more emotionally resonant level.

Tuesday, July 1, 2025

The Advantages of Human Evolution in Psychotherapy: Adaptation, Empathy, and Complexity

Gavazzi, J. (2025, May 24).
On Board with Professional Psychology.
American Board of Professional Psychology.
Issue 5.

Abstract

The rapid advancement of artificial intelligence, particularly Large Language Models (LLMs), has generated significant concern among psychologists regarding potential impacts on therapeutic practice. 

This paper examines the evolutionary advantages that position human psychologists as irreplaceable in psychotherapy, despite technological advances. Human evolution has produced sophisticated capacities for genuine empathy, social connection, and adaptive flexibility that are fundamental to effective therapeutic relationships. These evolutionarily-derived abilities include biologically-rooted emotional understanding, authentic empathetic responses, and the capacity for nuanced, context-dependent decision-making. In contrast, LLMs lack consciousness, genuine emotional experience, and the evolutionary framework necessary for deep therapeutic insight. While LLMs can simulate empathetic responses through linguistic patterns, they operate as statistical models without true emotional comprehension or theory of mind. The therapeutic alliance, cornerstone of successful psychotherapy, depends on authentic human connection and shared experiential understanding that transcends algorithmic processes. Human psychologists demonstrate adaptive complexity in understanding attachment styles, trauma responses, and individual patient needs that current AI cannot replicate.

The paper concludes that while LLMs serve valuable supportive roles in documentation, treatment planning, and professional reflection, they cannot replace the uniquely human relational and interpretive aspects essential to psychotherapy. Psychologists should integrate these technologies as resources while maintaining focus on the evolutionarily-grounded human capacities that define effective therapeutic practice.

Monday, June 30, 2025

Neural Processes Linking Interoception to Moral Preferences Aligned with Group Consensus

Kim, J., & Kim, H. (2025).
Journal of Neuroscience, e1114242025.

Abstract

Aligning one’s decisions with the prevailing norms and expectations of those around us constitutes a fundamental facet of moral decision-making. When faced with conflicting moral values, one adaptive approach is to rely on intuitive moral preference. While there has been theoretical speculation about the connection between moral preference and an individual’s awareness of introspective interoceptive signals, it has not been empirically examined. This study examines the relationships between individuals’ preferences in moral dilemmas and interoception, measured with self-report, heartbeat detection task, and resting-state fMRI. Two independent experiments demonstrate that both male and female participants’ interoceptive awareness and accuracy are associated with their moral preferences aligned with group consensus. In addition, the fractional occupancies of the brain states involving the ventromedial prefrontal cortex and the precuneus during rest mediate the link between interoceptive awareness and the degree of moral preferences aligned to group consensus. These findings provide empirical evidence of the neural mechanism underlying the link between interoception and moral preferences aligned with group consensus.

Significance statement

We investigate the intricate link between interoceptive ability to perceive internal bodily signals and decision-making when faced with moral dilemmas. Our findings reveal a significant correlation between the accuracy and awareness of interoceptive signals and the degree of moral preferences aligned with group consensus. Additionally, brain states involving the ventromedial prefrontal cortex and precuneus during rest mediate the link between interoceptive awareness and moral preferences aligned with group consensus. These findings provide empirical evidence that internal bodily signals play a critical role in shaping our moral intuitions according to others’ expectations across various social contexts.

Here are some thoughts:

A recent study highlighted that our moral decisions may be influenced by our body's internal signals, particularly our heartbeat. Researchers found that individuals who could accurately perceive their own heartbeats tended to make moral choices aligning with the majority, regardless of whether those choices were utilitarian or deontological. This implies that bodily awareness might unconsciously guide us toward socially accepted norms. Brain imaging supported this: resting brain states involving regions associated with evaluation and self-referential judgment, namely the ventromedial prefrontal cortex and precuneus, mediated the link in those more attuned to their internal signals. While the study's participants were exclusively Korean college students, limiting generalizability, the findings open up intriguing possibilities about the interplay between bodily awareness and moral decision-making.
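
For readers curious how "accurately perceiving one's own heartbeat" is usually quantified: heartbeat counting tasks score accuracy by comparing the beats a participant silently counts against the beats actually recorded. The Python sketch below assumes the standard Schandry-style formula; the study's exact task and scoring may differ.

import numpy as np

def interoceptive_accuracy(recorded, counted):
    # Standard heartbeat-counting score: 1 minus the mean normalized
    # counting error across intervals (1.0 = perfect perception).
    recorded = np.asarray(recorded, dtype=float)
    counted = np.asarray(counted, dtype=float)
    return np.mean(1.0 - np.abs(recorded - counted) / recorded)

# Illustrative data for three counting intervals (e.g., 25 s, 35 s, 45 s).
acc = interoceptive_accuracy(recorded=[28, 39, 51], counted=[25, 36, 44])
print(f"interoceptive accuracy = {acc:.2f}")

Scores like this, alongside self-reported interoceptive awareness, are what the authors related to how closely participants' dilemma choices tracked the group consensus.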

Sunday, June 29, 2025

Whistle-blowers – morally courageous actors in health care?

Wiisak, J., Suhonen, R., & Leino-Kilpi, H. (2022).
Nursing Ethics, 29(6), 1415–1429.

Abstract
Background

Moral courage means the courage to act according to one's own ethical values and principles despite the risk of negative consequences for oneself. Research on the moral courage of whistle-blowers in health care is scarce, although whistleblowing involves significant risk for the whistle-blower.

Objective
To analyse the moral courage of potential whistle-blowers and its association with their background variables in health care.

Research design
A descriptive-correlational study was conducted using a questionnaire containing the Nurses' Moral Courage Scale©, a video vignette of a wrongdoing situation with an open question about the vignette, and several background variables. Data were analysed statistically, and inductive content analysis was used for the narratives.

Participants and research context
Nurses as healthcare professionals (including registered nurses, public health nurses, midwives, and nurse paramedics) were recruited from the membership register of the Nurses’ Association via email in 2019. A total of 454 nurses responded. The research context was simulated using a vignette.

Ethical considerations
Good scientific inquiry guidelines were followed. Permission to use the Nurses’ Moral Courage Scale© was obtained from the copyright holder. The ethical approval and permission to conduct the study were obtained from the participating university and the Nurses’ Association.

Findings
The mean value of potential whistle-blowers’ moral courage on a Visual Analogue Scale (0–10) was 8.55 and the mean score was 4.34 on a 5-point Likert scale. Potential whistle-blowers’ moral courage was associated with their socio-demographics, education, work, personality and social responsibility related background variables.

Discussion and conclusion
In health care, potential whistle-blowers seem to be quite morally courageous actors. The results offer opportunities for developing interventions, practices and education to support and encourage healthcare professionals in their whistleblowing. Research is needed for developing a theoretical construction to eventually increase whistleblowing and decrease and prevent wrongdoing.

Here are some thoughts:

This study investigates the moral courage of healthcare professionals in whistleblowing scenarios. Utilizing a descriptive-correlational design, the researchers surveyed 454 nurses—including registered nurses, public health nurses, midwives, and nurse paramedics—using the Nurses' Moral Courage Scale, a video vignette depicting a wrongdoing situation, and open-ended questions. Findings revealed a high level of moral courage among participants, with an average score of 8.55 on a 0–10 Visual Analogue Scale and 4.34 on a 5-point Likert scale. The study identified associations between moral courage and various background factors such as socio-demographics, education, work experience, personality traits, and social responsibility. The authors suggest that these insights can inform the development of interventions and educational programs to support and encourage whistleblowing in healthcare settings, ultimately aiming to reduce and prevent unethical practices.

Saturday, June 28, 2025

An Update on Psychotherapy for the Treatment of PTSD

Rothbaum, B. O., & Watkins, L. E. (2025).
American Journal of Psychiatry, 182(5), 424–437.

Abstract

Posttraumatic stress disorder (PTSD) symptoms are part of the normal response to trauma. Most trauma survivors will recover over time without intervention, but a significant minority will develop chronic PTSD, which is unlikely to remit without intervention. Currently, only two medications, sertraline and paroxetine, are approved by the U.S. Food and Drug Administration to treat PTSD, and the combination of brexpiprazole and sertraline and MDMA-assisted therapy have FDA applications pending. These medications, and the combination of pharmacotherapy and psychotherapy, are not recommended as first-line treatments in any published PTSD treatment guidelines. The only interventions recommended as first-line treatments are trauma-focused psychotherapies; the U.S. Department of Veterans Affairs/Department of Defense PTSD treatment guideline recommends prolonged exposure (PE), cognitive processing therapy (CPT), and eye movement desensitization and reprocessing, and the American Psychological Association PTSD treatment guideline recommends PE, CPT, cognitive therapy, and trauma-focused cognitive-behavioral therapy. Although published clinical trials of psychedelic-assisted psychotherapy have not incorporated evidence-based PTSD psychotherapies, they have achieved greater response rates than other trials of combination treatment, and there is some enthusiasm about combining psychedelic medications with evidence-based psychotherapies. The state-of-the-art PTSD psychotherapies are briefly reviewed here, including their effects on clinical and neurobiological measures.

The article is paywalled, unfortunately.

Here is a summary and some thoughts.

In the evolving landscape of PTSD treatment, Rothbaum and Watkins reaffirm a crucial truth: trauma-focused psychotherapies remain the first-line, evidence-based interventions for posttraumatic stress disorder (PTSD), outperforming pharmacological approaches in both efficacy and durability.

The State of PTSD Treatment
While most individuals naturally recover from trauma, a significant minority develop chronic PTSD, which typically requires intervention. Current FDA-approved medications for PTSD—sertraline and paroxetine—offer only modest relief, and recent psychedelic-assisted therapy trials, though promising, have not yet integrated evidence-based psychotherapy approaches. As such, expert guidelines consistently recommend trauma-focused psychotherapies as first-line treatments.

Evidence-Based Therapies at the Core
The VA/DoD and APA guidelines converge on recommending prolonged exposure (PE) and cognitive processing therapy (CPT), with eye movement desensitization and reprocessing (EMDR), cognitive therapy, and trauma-focused CBT also strongly supported.

PE helps patients systematically confront trauma memories and triggers to promote extinction learning. Its efficacy is unmatched, with robust support from meta-analyses and neurobiological studies.

CPT targets maladaptive beliefs that develop after trauma, helping patients reframe distorted thoughts through cognitive restructuring.

EMDR, though somewhat controversial, remains a guideline-supported approach and continues to show effectiveness in trials.

Neurobiological Insights
Modern neuroscience supports these therapies: PTSD involves hyperactivation of fear and salience networks (e.g., amygdala) and underactivation of emotion regulation circuits (e.g., prefrontal cortex). Successful treatment—especially exposure-based therapy—enhances extinction learning and improves functional connectivity in these circuits. Moreover, cortisol patterns, genetic markers, and cardiovascular reactivity are emerging as potential predictors of treatment response.

Innovations and Expansions
Therapists are increasingly utilizing massed formats (e.g., daily sessions over 2 weeks), virtual reality exposure therapy, and early interventions in emergency settings. These models show high completion rates and comparable outcomes to traditional weekly formats.

One particularly innovative direction involves MDMA-assisted psychotherapy. Although still investigational, trials show higher remission rates when MDMA is paired with psychotherapy. The METEMP protocol (MDMA-enhanced PE) offers a translational model that integrates the strengths of both approaches.

Addressing Clinical Challenges
High dropout rates (27–50%) remain a concern, largely due to avoidance—a core PTSD symptom. Massed therapy formats have demonstrated improved retention. Additionally, comorbid conditions (e.g., depression, TBI, substance use) generally do not impede response to trauma-focused care and can be concurrently treated using integrated protocols like COPE (Concurrent Treatment of PTSD and Substance Use Disorders Using PE).

Toward Greater Access and Remission
Despite strong evidence, access to high-quality trauma-focused therapy remains limited outside military and VA systems. Telehealth, stepped care models, and broader dissemination of evidence-based practices are key to closing this gap.

Finally, Rothbaum and Watkins argue that remission—not just symptom reduction—must be the treatment goal. With renewed scientific rigor and integrative innovations like MDMA augmentation, the field is inching closer to more effective and enduring treatments.

Friday, June 27, 2025

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Kosmyna, N. K. et al. (2025).

Abstract

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.


Here are some thoughts:

This research is important for psychologists because it provides empirical evidence on how using large language models (LLMs) like ChatGPT, traditional search engines, or relying solely on one’s own cognition affects cognitive engagement, neural connectivity, and perceived ownership during essay writing tasks. The study used EEG to measure brain activity and found that participants who wrote essays unaided (Brain-only group) exhibited the highest neural connectivity and cognitive engagement, while those using LLMs showed the weakest. Notably, repeated LLM use led to reduced memory recall, lower perceived ownership of written work, and diminished ability to quote from their own essays, suggesting a measurable cognitive cost and potential decrease in learning skills. The findings highlight that while LLMs can provide immediate benefits, their use may undermine deeper learning and engagement, which has significant implications for educational practices and the integration of AI tools in learning environments.
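
As a rough illustration of what "alpha and beta connectivity" means operationally: connectivity analyses quantify how strongly two channels' signals covary within a frequency band. The Python sketch below computes magnitude-squared coherence on synthetic data and averages it over the alpha band; it is a generic stand-in for this kind of measure, not the study's actual pipeline, and the sampling rate and band edges are assumptions.

import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 s of synthetic signal

# Two synthetic "channels" sharing a 10 Hz (alpha) component plus noise.
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + 0.8 * rng.standard_normal(t.size)
ch2 = shared + 0.8 * rng.standard_normal(t.size)

# Magnitude-squared coherence, averaged within the alpha band (8-12 Hz).
f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
alpha = (f >= 8) & (f <= 12)
print(f"mean alpha-band coherence = {cxy[alpha].mean():.2f}")

Lower values of a band-limited measure like this, aggregated across many channel pairs, are the kind of evidence behind the claim that LLM users showed the weakest connectivity.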

Thursday, June 26, 2025

A Modular Spiking Neural Network-Based Neuro-Robotic System for Exploring Embodied Intelligence

Chen, Z., Sun, T., et al. (2024).
2022 International Conference on Advanced Robotics and Mechatronics (ICARM), 1093–1098.

Abstract

Bio-inspired construction of modular biological neural networks (BNNs) is gaining attention due to their innate stable inter-modular signal transmission ability, which is thought to underlie the emergence of biological intelligence. However, the complicated, laborious fabrication of BNNs with structural and functional connectivity of interest in vitro limits the further exploration of embodied intelligence. In this work, we propose a modular spiking neural network (SNN)-based neuro-robotic system by concurrently running SNN modeling and robot simulation. We show that the modeled mSNNs present complex calcium dynamics resembling mBNNs. In particular, spontaneous periodic network-wide bursts were observed in the mSNN, which could be further suppressed partially or completely with global chemical modulation. Moreover, we demonstrate that after complete suppression, intermodular signal transmission can still be evoked reliably via local stimulation. Therefore, the modeled mSNNs could either achieve reliable trans-modular signal transmission or add adjustable false-positive noise signals (spontaneous bursts). By interconnecting the modeled mSNNs with the simulated mobile robot, active obstacle avoidance and target tracking can be achieved. We further show that spontaneous noise impairs robot performance, which indicates the importance of suppressing spontaneous burst activities of modular networks for the reliable execution of robot tasks. The proposed neuro-robotic system embodies spiking neural networks with a mobile robot to interact with the external world, which paves the way for exploring the emergence of more complex biological intelligence.

Here are some thoughts:

This paper is pretty wild. The researchers built a spiking neural network modeled on human brain activity and embodied it in a simulated mobile robot. The network exhibits calcium-like spiking dynamics, and its modules communicate with one another. Suppressing the spontaneous bursting made the simulated robotic system perform its tasks more reliably.

Cognitive neuroscience seeks to uncover how neural activity gives rise to perception, decision-making, and behavior, often by studying the dynamics of brain networks. This research contributes significantly to that goal by modeling modular spiking neural networks (mSNNs) that replicate key features of biological neural networks, including spontaneous network bursts and inter-modular communication. These modeled networks demonstrate how structured neural activity can support reliable signal transmission, a fundamental aspect of cognitive processing. Importantly, they also allow for controlled manipulation of network states—such as through global chemical modulation—which provides a way to study how noise or spontaneous activity affects information processing.
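
To give a flavor of the modeling approach, here is a heavily simplified two-module leaky integrate-and-fire (LIF) sketch in Python. The modulation parameter, which damps spontaneous drive, is a loose software analogue of the paper's global chemical suppression; every name and number here is invented for illustration and none of it reproduces the authors' mSNN.

import numpy as np

def run_two_modules(steps=1000, dt=1.0, tau=20.0, modulation=1.0, seed=0):
    # Two LIF populations; module A projects to module B. modulation < 1
    # damps spontaneous noise, loosely mimicking chemical suppression.
    rng = np.random.default_rng(seed)
    n, v_thresh, w_ab = 50, 1.0, 0.2
    v_a = np.zeros(n)
    v_b = np.zeros(n)
    spikes_b = np.zeros(steps)
    for t in range(steps):
        i_a = modulation * 2.5 * rng.random(n)   # spontaneous drive to A
        if 400 <= t < 450:                       # brief local stimulation of A
            i_a += 3.0
        v_a += (-v_a + i_a) * dt / tau           # leaky integration
        fired_a = v_a >= v_thresh
        v_a[fired_a] = 0.0                       # threshold-and-reset
        # B is driven by A's spikes plus its own damped background noise.
        i_b = w_ab * fired_a.sum() + modulation * 2.0 * rng.random(n)
        v_b += (-v_b + i_b) * dt / tau
        fired_b = v_b >= v_thresh
        v_b[fired_b] = 0.0
        spikes_b[t] = fired_b.sum()
    stim = spikes_b[400:450].sum()               # spikes during stimulation
    rest = spikes_b[:400].sum() + spikes_b[450:].sum()
    return stim, rest

print("noisy      (stim, rest):", run_two_modules(modulation=1.0))
print("suppressed (stim, rest):", run_two_modules(modulation=0.2))

With full noise, the downstream module fires throughout the run, so the evoked signal is buried in false positives; with the drive damped, it fires almost exclusively during stimulation. That is the qualitative result the paper reports: suppressing spontaneous bursts makes trans-modular transmission reliable enough to drive robot behavior.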

From an ethical standpoint, this research presents a valuable alternative to invasive or in vitro biological experiments. Traditional studies involving living neural tissue raise ethical concerns regarding animal use and the potential for suffering. By offering a synthetic yet biologically plausible model, this work reduces reliance on such methods while still enabling detailed exploration of neural dynamics. Furthermore, it opens new avenues for non-invasive experimentation in cognitive and clinical domains, aligning with ethical principles that emphasize minimizing harm and maximizing scientific benefit.