Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, July 11, 2025

Artificial intelligence in psychological practice: Applications, ethical considerations, and recommendations

Hutnyan, M., & Gottlieb, M. C. (2025).
Professional Psychology: Research and Practice.
Advance online publication.

Abstract

Artificial intelligence (AI) systems are increasingly relied upon in the delivery of health care services traditionally provided solely by humans, and the widespread use of AI in the routine practice of professional psychology is on the horizon. It is incumbent on practicing psychologists to be prepared to effectively implement AI technologies and engage in thoughtful discourse regarding the ethical and responsible development, implementation, and regulation of these technologies. This article provides a brief overview of what AI is and how it works, a description of its current and potential future applications in professional practice, and a discussion of the ethical implications of using AI systems in the delivery of psychological services. Applications of AI technologies in key areas of clinical practice are addressed, including assessment and intervention. Using the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017) as a framework, anticipated ethical challenges across five domains—harm and nonmaleficence, autonomy and informed consent, fidelity and responsibility, privacy and confidentiality, and bias, respect, and justice—are discussed. Based on these challenges, provisional recommendations for psychologists are provided.

Impact Statement

This article provides an overview of artificial intelligence (AI) and how it works, describes current and developing applications of AI in the practice of professional psychology, and explores the potential ethical challenges of using these technologies in the delivery of psychological services. The use of AI in professional psychology has many potential benefits, but it also has drawbacks; ethical psychologists are wise to carefully consider their use of AI in practice.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.


Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.
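
To make the measure concrete: temporal binding is typically quantified as a compression of the judged delay between an action and its outcome, with stronger compression read as a stronger sense of agency. Here is a toy sketch of one common scoring approach (an interval-estimation variant with made-up numbers; the authors' exact paradigm and scoring may differ):

```python
# Toy temporal binding index (interval-estimation variant, hypothetical data).
# Participants judge the delay (ms) between a keypress and a tone; stronger
# binding = judged delays compressed relative to the real delay.
import statistics

def binding_index(judged_ms, actual_ms=200.0):
    """Mean judgment error; more negative = stronger binding (stronger agency)."""
    return statistics.mean(j - actual_ms for j in judged_ms)

free_choice = [140, 155, 150, 160, 145]  # judged delays, free-choice condition
coerced = [185, 190, 180, 195, 188]      # judged delays, ordered condition

print(binding_index(free_choice))  # -50.0 -> strong compression, high agency
print(binding_index(coerced))      # -12.4 -> weak compression, reduced agency
```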

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.

Wednesday, July 9, 2025

Management of Suicidal Thoughts and Behaviors in Youth: Systematic Review

Sim, L., Wang, Z., et al. (2025).
Prepared by the Mayo Clinic Evidence-based
Practice Center.

Abstract

Background: Suicide is a leading cause of death in young people and an escalating public health crisis. We aimed to assess the effectiveness and harms of available treatments for suicidal thoughts and behaviors in youths at heightened risk for suicide. We also aimed to examine how social determinants of health, racism, disparities, care delivery methods, and patient demographics affect outcomes.

Methods: We conducted a systematic review and searched several databases including MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and others from January 2000 to September 2024. We included randomized clinical trials (RCTs), comparative observational studies, and before-after studies of psychosocial interventions, pharmacological interventions, neurotherapeutics, emerging therapies, and combination therapies. Eligible patients were youths (aged 5 to 24 years) who had a heightened risk for suicide, including youths who had experienced suicidal ideation, prior attempts, hospital discharge for mental health treatment, or command hallucinations; were identified as high risk on validated questionnaires; or were from other at-risk groups. Pairs of independent reviewers selected and appraised studies. Findings were synthesized narratively.

Results: We included 65 studies reporting on 14,534 patients (33 RCTs, 13 comparative observational studies, and 19 before-after studies). Psychosocial interventions identified from the studies comprised psychotherapy interventions (33 studies; Cognitive Behavior Therapy, Dialectical Behavior Therapy (DBT), Collaborative Assessment and Management of Suicidality, Dynamic Deconstructive Psychotherapy, Attachment-Based Family Therapy, and Family-Focused Therapy), acute (i.e., 1 to 4 sessions/contacts) psychosocial interventions (19 studies; acute safety planning, family-based crisis management, motivational interviewing crisis interventions, continuity of care following crisis, and brief adjunctive treatments), and school/community-based psychosocial interventions (13 studies; social network interventions, school-based skills interventions, suicide awareness/gatekeeper programs, and community-based, culturally tailored adjunct programs). For most categories of psychotherapies (except DBT), acute interventions, and school/community-based interventions, the strength of evidence was insufficient, leaving uncertainty about effects on suicidal thoughts or attempts. None of the studies evaluated adverse events associated with the interventions. The evidence base on pharmacological treatment for suicidal youths was largely nonexistent. No eligible study evaluated neurotherapeutics or emerging therapies.

Conclusion: The current evidence on available interventions intended for youths at heightened risk of suicide is uncertain. Medication, neurotherapeutics, and emerging therapies remain unstudied in this population. Given that most treatments were adapted from adult protocols that may not fit the developmental and contextual experience of adolescents or younger children, this limited evidence base calls for the development of novel, developmentally and trauma-informed treatments, as well as multilevel interventions to address the rising suicide risk in youths.

Tuesday, July 8, 2025

Behavioral Ethics: Ethical Practice Is More Than Memorizing Compliance Codes

Cicero, F. R. (2021).
Behavior Analysis in Practice, 14(4),
1169–1178.

Abstract

Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can trigger unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.

Here are some thoughts:

This article argues that ethical practice requires more than memorizing compliance codes, as professionals aware of such codes still commit ethical breaches. Behavioral ethics suggests that environmental and situational variables often evoke and maintain unethical decisions, conceptualizing these decisions as operant behavior. Thus, knowledge of ethical codes alone is insufficient to prevent unethical practice; an assessment of environmental influences is necessary. The paper translates behavioral ethics constructs like self-serving bias, incrementalism, framing, obedience to authority, conformity bias, and overconfidence bias into behavior-analytic principles such as reinforcement, shaping, motivating operations, and stimulus control. This perspective shifts the focus from blaming individuals towards analyzing environmental factors that prompt ethical breaches, advocating for proactive assessment to support ethical behavior.

Understanding these concepts is vital for psychologists because they too are subject to environmental pressures that can lead to unethical actions, despite ethical training. The article highlights that ethical knowledge does not always translate to ethical behavior, emphasizing that situational factors often play a more significant role. Psychologists must recognize subtle influences such as the gradual normalization of unethical actions (incrementalism), the impact of how situations are described (framing), pressures from authority figures, and conformity to group norms, as these can all compromise ethical judgment. An overconfidence in one's own ethical standing can further obscure these influences. By applying a behavior-analytic lens, psychologists can better identify and mitigate these environmental risks, fostering a culture of proactive ethical assessment within their practice and institutions to safeguard clients and the profession.

Monday, July 7, 2025

Subconscious Suggestion

Ferketic, M. (2025, Forthcoming)  

Abstract

Subconscious suggestion is a silent but pervasive force shaping perception, decision-making, and attentional structuring beneath awareness. Operating as internal impressive action, it passively introduces impulses, biases, and associative framings into consciousness, subtly guiding behavior without volitional approval. Like hypnotic suggestion, it does not dictate action; it attempts to compel through motivational pull, influencing perception and intent through saliency and potency gradients. Unlike previous theories that depict subconscious influence as abstract or deterministic, this work presents a novel structured, mechanistic, operational model of function, demonstrating from first principles how subconscious suggestion disperses influence into awareness, interacts with attentional deployment, and negotiates attentional sovereignty. Additionally, it frames free will not as exemption from subconscious force, but as mastery of its regulation, with autonomy emerging from the ability to recognize, refine, and command suggestive forces rather than be unconsciously governed by them.

Here are some thoughts:

Subconscious suggestion, as detailed in the article, is a fundamental cognitive mechanism that shapes perception, attention, and behavior beneath conscious awareness. It operates as internal impressive action—passively introducing impulses, biases, and associative framings into consciousness, subtly guiding decisions without direct volitional control. Unlike deterministic models of unconscious influence, this framework presents subconscious suggestion as a structured, mechanistic process that competes for attention through saliency and motivational potency gradients. It functions much like a silent internal hypnotist, not dictating action but attempting to compel through perceptual framing and emotional nudges.

For practicing psychologists, understanding this model is crucial—it provides insight into how automatic cognitive processes contribute to habit formation, emotional regulation, motivation, and decision-making. It reframes free will not as exemption from subconscious forces, but as mastery over them, emphasizing the importance of attentional sovereignty and volitional override in clinical interventions. This knowledge equips psychologists to better identify, assess, and guide clients in managing subconscious influences, enhancing therapeutic outcomes across conditions such as addiction, anxiety, compulsive behaviors, and maladaptive thought patterns.

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness—specifically, sharing similar moral judgments or decision-making patterns—is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.

Saturday, July 5, 2025

Bias Is Not Color Blind: Ignoring Gender and Race Leads to Suboptimal Selection Decisions.

Rabinovitch, H. et al. (2025, May 27).

Abstract

Blindfolding—selecting candidates based on objective selection tests while avoiding personal information about their race and gender—is commonly used to mitigate bias in selection. Selection tests, however, often benefit people of a certain race or gender. In such cases, selecting the best candidates requires incorporating, rather than ignoring, the biasing factor. We examined people's preference for avoiding candidates’ race and gender, even when fully aware that these factors bias the selection test. We put forward a novel prediction suggesting that paradoxically, due to their fear of appearing partial, people would choose not to reveal race and gender information, even when doing so means making suboptimal decisions. Across three experiments (N = 3,621), hiring professionals (and laypeople) were tasked with selecting the best candidate for a position when they could reveal the candidate’s race and gender or avoid it. We further measured how fear for their social image corresponds with their decision, as well as how job applicants perceive such actions. The results supported our predictions, showing that more than 50% did not reveal gender and race information, compared to only 30% who did not reveal situational biasing information, such as the time of day in which the interview was held. Those who did not reveal information expressed higher concerns for their social and self-image than those who decided to reveal. We conclude that decision-makers avoid personal biasing information to maintain a positive image, yet by doing so, they compromise fairness and accuracy alike.

Public significance statements

Blindfolding—ignoring one’s gender and race in selection processes—is a widespread strategy aimed at reducing bias and increasing diversity. Selection tests, however, often unjustly benefit members of certain groups, such as men and white people. In such cases, correcting the bias requires incorporating, rather than ignoring, information about the candidates’ gender and race. The current research shows that decision-makers are reluctant to reveal such information due to their fear of appearing partial. Paradoxically, decision-makers avoid such information, even when fully aware that doing so may perpetuate bias, in order to protect their social image as impartial, but miss out on the opportunity to advance fairness and choose the best candidates.

Here are some thoughts:

This research is critically important to practicing psychologists because it sheds light on the complex interplay between bias, decision-making, and social image concerns in hiring processes. The study demonstrates how well-intentioned practices like "blindfolding"—omitting race or gender information to reduce discrimination—can paradoxically perpetuate systemic biases when selection tools themselves are flawed. Practicing psychologists must understand that ignoring personal attributes does not eliminate bias but can instead obscure its effects, leading to suboptimal and unfair outcomes. By revealing how decision-makers avoid sensitive information out of fear of appearing partial, the research highlights the psychological mechanisms—such as social and self-image concerns—that drive this avoidance. This insight is crucial for psychologists involved in organizational consulting, personnel training, or policy development, as it underscores the need for more nuanced strategies that address bias directly rather than avoiding it.

Additionally, the findings inform interventions aimed at promoting diversity, equity, and inclusion by showing that transparency and informed adjustments based on demographic factors may be necessary to achieve fairer outcomes. Ultimately, the research challenges traditional assumptions about neutrality in selection decisions and urges psychologists to advocate for evidence-based approaches that actively correct for bias while considering the broader implications of perceived fairness and merit.
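
One way to make the paper's core logic concrete is a toy simulation (entirely hypothetical numbers, not the study's data): when a selection test is known to overscore one group by a fixed offset, ranking candidates on raw scores while blind to group membership tends to select weaker candidates than ranking on bias-corrected scores, and the correction is only possible if group membership is revealed.

```python
# Toy model of the "blindfolding" trade-off: a test overscores group A by a
# known offset. Ignoring group membership ranks on the biased raw score;
# revealing it allows a corrected, more accurate ranking. Hypothetical numbers.
import random

random.seed(1)
BIAS = 10.0  # assumed known test bias favoring group "A"

candidates = []
for _ in range(200):
    group = random.choice("AB")
    ability = random.gauss(100, 15)  # true ability (unobservable in practice)
    score = ability + (BIAS if group == "A" else 0.0) + random.gauss(0, 5)
    candidates.append({"group": group, "ability": ability, "score": score})

def mean_true_ability(pool, key, k=10):
    """Average true ability of the top-k candidates under a given ranking."""
    top = sorted(pool, key=key, reverse=True)[:k]
    return sum(c["ability"] for c in top) / k

blind = mean_true_ability(candidates, key=lambda c: c["score"])
corrected = mean_true_ability(
    candidates, key=lambda c: c["score"] - (BIAS if c["group"] == "A" else 0.0))

print(f"top 10 mean true ability, blindfolded ranking: {blind:.1f}")
print(f"top 10 mean true ability, bias-corrected ranking: {corrected:.1f}")
```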

Friday, July 4, 2025

The Psychology of Moral Conviction

Skitka, L. J., et al. (2020).
Annual Review of Psychology, 72(1),
347–366.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

Here are some thoughts:

The article provides valuable insights into how individuals perceive and process attitudes grounded in fundamental beliefs about right and wrong. It distinguishes morally convicted attitudes from other constructs, such as strong but nonmoral attitudes or religious beliefs, by highlighting that moral convictions are viewed as universally and objectively true and are relatively resistant to authority or peer influence. These convictions often lead to significant social and political consequences, including intolerance of differing views, resistance to compromise, increased political engagement, and heightened emotional responses. The article also explores the processes of attitude moralization—how an issue becomes infused with moral significance—and demoralization, offering a domain theory of attitude moralization that suggests different pathways depending on whether the initial attitude is perceived as a preference, convention, or existing moral imperative.

This knowledge is critically important to practicing psychologists because it enhances their understanding of how moral convictions shape behavior, decision-making, and interpersonal dynamics. For instance, therapists working with clients on issues involving conflict resolution, values clarification, or behavioral change must consider the role of moral conviction in shaping resistance to persuasion or difficulty in compromising. Understanding moral conviction can also aid psychologists in navigating cultural differences, addressing polarization in group settings, and promoting tolerance by recognizing how individuals intuitively perceive certain issues as moral. Furthermore, as society grapples with increasingly divisive sociopolitical challenges—such as climate change, immigration, and public health crises—psychologists can use these insights to foster dialogue, reduce moral entrenchment, and encourage constructive engagement. Ultimately, integrating the psychology of moral conviction into practice allows for more nuanced, empathetic, and effective interventions across clinical, organizational, and community contexts.

Thursday, July 3, 2025

Mindfulness, Moral Reasoning and Responsibility: Towards Virtue in Ethical Decision-Making

Small, C., & Lew, C. (2019).
Journal of Business Ethics, 169(1),
103–117.

Abstract

Ethical decision-making is a multi-faceted phenomenon, and our understanding of ethics rests on diverse perspectives. While considering how leaders ought to act, scholars have created integrated models of moral reasoning processes that encompass diverse influences on ethical choice. With this, there has been a call to continually develop an understanding of the micro-level factors that determine moral decisions. Both rationalist factors, such as moral processing, and non-rationalist factors, such as virtue and humanity, shape ethical decision-making. Focusing on the role of moral judgement and moral intent in moral reasoning, this study asks what bearing a trait of mindfulness and a sense of moral responsibility may have on this process. A survey measuring mindfulness, moral responsibility and moral judgement completed by 171 respondents was used to test four hypotheses on moral judgement and intent in relation to moral responsibility and mindfulness. The results indicate that mindfulness predicts moral responsibility but not moral judgement. Moral responsibility does not predict moral judgement, but moral judgement predicts moral intent. The findings give further insight into the outcomes of mindfulness and expand models of ethical decision-making. We offer suggestions for further research on the role of mindfulness and moral responsibility in ethical decision-making.

Here are some thoughts:

This research explores the interplay between mindfulness, moral reasoning, and moral responsibility in ethical decision-making. Drawing on Rest’s model of moral reasoning—which outlines four phases (awareness, judgment, intent, and behavior)—the study investigates how mindfulness as a virtue influences these stages, particularly moral judgment and intent, and how it relates to a sense of moral responsibility. Regression analyses revealed that while mindfulness did not directly predict moral judgment, it significantly predicted moral responsibility. Additionally, moral judgment was found to strongly predict moral intent.

For practicing psychologists, this study is important for several reasons. First, it highlights the potential role of mindfulness as a trait linked to moral responsibility, suggesting that cultivating mindfulness may enhance ethical decision-making by fostering a greater sense of accountability toward others. This has implications for ethics training and professional development in psychology, especially in fields where practitioners face complex moral dilemmas. Second, the findings underscore the importance of integrating non-rationalist factors—such as virtues and emotional awareness—into traditional models of moral reasoning, offering a more holistic understanding of ethical behavior. Third, the research supports the use of scenario-based approaches in training professionals to navigate real-world ethical challenges, emphasizing the contextual nature of moral reasoning. Finally, the paper contributes to the broader literature on mindfulness by linking it to prosocial behaviors and ethical outcomes, which can inform therapeutic practices aimed at enhancing clients’ moral self-awareness and responsible decision-making.

Wednesday, July 2, 2025

Realization of Empathy Capability for the Evolution of Artificial Intelligence Using an MXene(Ti3C2)-Based Memristor

Wang, Y., Zhang, Y., et al. (2024).
Electronics, 13(9), 1632.

Abstract

Empathy is the emotional capacity to feel and understand the emotions experienced by other human beings from within their frame of reference. As a unique psychological faculty, empathy is an important source of motivation to behave altruistically and cooperatively. Although human-like emotion should be a critical component in the construction of artificial intelligence (AI), the discovery of emotional elements such as empathy is subject to complexity and uncertainty. In this work, we demonstrated an interesting electrical device (i.e., an MXene (Ti3C2) memristor) and successfully exploited the device to emulate a psychological model of “empathic blame”. To emulate this affective reaction, MXene was introduced into memristive devices because of its interesting structure and ionic capacity. Additionally, over several rehearsal repetitions, the self-adaptive characteristics of the memristive weights corresponded to different levels of empathy. Moreover, an artificial neural system was designed to analogously realize a moral judgment with empathy. This work may indicate a breakthrough in making cool machines manifest real voltage-motivated feelings at the level of the hardware rather than the algorithm.

Here are some thoughts:

This research represents a critical step toward endowing machines with human-like emotional capabilities, particularly empathy. Traditionally, AI has been limited to algorithmic decision-making and pattern recognition, lacking the nuanced ability to understand or simulate human emotions. By using an MXene-based memristor to emulate "empathic blame," researchers have demonstrated a hardware-level mechanism that mimics how humans adjust their moral judgments based on repeated exposure to similar situations—an essential component of empathetic reasoning. This breakthrough suggests that future AI systems could be designed not just to recognize emotions but to adaptively respond to them in real time, potentially leading to more socially intelligent machines.

For psychologists, this research raises profound questions about the nature of empathy, its role in moral judgment, and whether artificially created systems can truly embody these traits or merely imitate them. The ability to program empathy into AI could change how we conceptualize machine sentience and emotional intelligence, blurring the lines between biological and artificial cognition. Furthermore, as AI becomes more integrated into social, therapeutic, and even judicial contexts, understanding how machines might "feel" or interpret human suffering becomes increasingly relevant. The study also opens up new interdisciplinary dialogues between neuroscience, ethics, and AI development, emphasizing the importance of considering psychological principles in the design of emotionally responsive technologies. Ultimately, this work signals a shift from purely functional AI toward systems capable of engaging with humans on a deeper, more emotionally resonant level.
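
As a very loose software analogy (my own sketch, not the MXene device physics reported in the paper), the "rehearsal repetitions" mechanism can be pictured as a weight that potentiates toward saturation with each repeated stimulation and partially relaxes between pulses, with the weight level read out as a graded empathy level:

```python
# Loose software analogy (not the MXene device model): a memristor-like weight
# that potentiates with each "rehearsal" pulse and partially relaxes, with the
# resulting weight mapped onto a graded "empathy" level. Thresholds are arbitrary.

def rehearse(weight, pulses, gain=0.3, decay=0.05, w_max=1.0):
    """Apply rehearsal pulses; each pulse moves the weight toward w_max."""
    for _ in range(pulses):
        weight += gain * (w_max - weight)  # diminishing-returns potentiation
        weight -= decay * weight           # passive relaxation
    return weight

def empathy_level(weight):
    return "low" if weight < 0.3 else "moderate" if weight < 0.7 else "high"

w = 0.05
for n in (1, 3, 10):
    w = rehearse(w, n)
    print(f"after {n} more pulses: weight={w:.2f} -> empathy {empathy_level(w)}")
```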

Tuesday, July 1, 2025

The Advantages of Human Evolution in Psychotherapy: Adaptation, Empathy, and Complexity

Gavazzi, J. (2025, May 24).
On Board with Professional Psychology.
American Board of Professional Psychology.
Issue 5.

Abstract

The rapid advancement of artificial intelligence, particularly Large Language Models (LLMs), has generated significant concern among psychologists regarding potential impacts on therapeutic practice. 

This paper examines the evolutionary advantages that position human psychologists as irreplaceable in psychotherapy, despite technological advances. Human evolution has produced sophisticated capacities for genuine empathy, social connection, and adaptive flexibility that are fundamental to effective therapeutic relationships. These evolutionarily-derived abilities include biologically-rooted emotional understanding, authentic empathetic responses, and the capacity for nuanced, context-dependent decision-making. In contrast, LLMs lack consciousness, genuine emotional experience, and the evolutionary framework necessary for deep therapeutic insight. While LLMs can simulate empathetic responses through linguistic patterns, they operate as statistical models without true emotional comprehension or theory of mind. The therapeutic alliance, cornerstone of successful psychotherapy, depends on authentic human connection and shared experiential understanding that transcends algorithmic processes. Human psychologists demonstrate adaptive complexity in understanding attachment styles, trauma responses, and individual patient needs that current AI cannot replicate.

The paper concludes that while LLMs serve valuable supportive roles in documentation, treatment planning, and professional reflection, they cannot replace the uniquely human relational and interpretive aspects essential to psychotherapy. Psychologists should integrate these technologies as resources while maintaining focus on the evolutionarily-grounded human capacities that define effective therapeutic practice.

Monday, June 30, 2025

Neural Processes Linking Interoception to Moral Preferences Aligned with Group Consensus

Kim, J., & Kim, H. (2025).
Journal of Neuroscience, e1114242025.

Abstract

Aligning one’s decisions with the prevailing norms and expectations of those around us constitutes a fundamental facet of moral decision-making. When faced with conflicting moral values, one adaptive approach is to rely on intuitive moral preference. While there has been theoretical speculation about the connection between moral preference and an individual’s awareness of introspective interoceptive signals, it has not been empirically examined. This study examines the relationships between individuals’ preferences in moral dilemmas and interoception, measured with self-report, a heartbeat detection task, and resting-state fMRI. Two independent experiments demonstrate that both male and female participants’ interoceptive awareness and accuracy are associated with their moral preferences aligned with group consensus. In addition, the fractional occupancies of the brain states involving the ventromedial prefrontal cortex and the precuneus during rest mediate the link between interoceptive awareness and the degree of moral preferences aligned to group consensus. These findings provide empirical evidence of the neural mechanism underlying the link between interoception and moral preferences aligned with group consensus.

Significance statement

We investigate the intricate link between interoceptive ability to perceive internal bodily signals and decision-making when faced with moral dilemmas. Our findings reveal a significant correlation between the accuracy and awareness of interoceptive signals and the degree of moral preferences aligned with group consensus. Additionally, brain states involving the ventromedial prefrontal cortex and precuneus during rest mediate the link between interoceptive awareness and moral preferences aligned with group consensus. These findings provide empirical evidence that internal bodily signals play a critical role in shaping our moral intuitions according to others’ expectations across various social contexts.

Here are some thoughts:

A recent study highlighted that our moral decisions may be influenced by our body's internal signals, particularly our heartbeat. Researchers found that individuals who could accurately perceive their own heartbeats tended to make moral choices aligning with the majority, regardless of whether those choices were utilitarian or deontological. This implies that bodily awareness might unconsciously guide us toward socially accepted norms. Brain scans supported this, showing increased activity in areas associated with evaluation and judgment, like the medial prefrontal cortex, in those more attuned to their internal signals. While the study's participants were exclusively Korean college students, limiting generalizability, the findings open up intriguing possibilities about the interplay between bodily awareness and moral decision-making.
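
For readers unfamiliar with how such heartbeat tasks are scored, the classic heartbeat-counting accuracy formula looks like this (a standard Schandry-style score; the detection task used in this study may differ in detail):

```python
# Standard heartbeat-counting accuracy (Schandry-style): 1 minus the normalized
# error between recorded and reported beats, averaged over trials. Scores near
# 1 indicate high interoceptive accuracy. Data below are hypothetical.
def interoceptive_accuracy(trials):
    """trials: list of (recorded_beats, counted_beats) pairs."""
    scores = [1 - abs(real - counted) / real for real, counted in trials]
    return sum(scores) / len(scores)

good_perceiver = [(32, 30), (45, 44), (60, 57)]
poor_perceiver = [(32, 18), (45, 25), (60, 35)]

print(f"{interoceptive_accuracy(good_perceiver):.2f}")  # ~0.96
print(f"{interoceptive_accuracy(poor_perceiver):.2f}")  # ~0.57
```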

Sunday, June 29, 2025

Whistle-blowers – morally courageous actors in health care?

Wiisak, J., Suhonen, R., & Leino-Kilpi, H. (2022).
Nursing Ethics, 29(6), 1415–1429.

Abstract
Background

Moral courage means the courage to act according to one's own ethical values and principles despite the risk of negative consequences. Research about the moral courage of whistle-blowers in health care is scarce, although whistleblowing involves a significant risk for the whistle-blower.

Objective
To analyse the moral courage of potential whistle-blowers and its association with their background variables in health care.

Research design
A descriptive-correlational study was conducted using a questionnaire containing the Nurses Moral Courage Scale©, a video vignette of a wrongdoing situation with an open question about the vignette, and several background variables. Data were analysed statistically, and inductive content analysis was used for the narratives.

Participants and research context
Nurses as healthcare professionals (including registered nurses, public health nurses, midwives, and nurse paramedics) were recruited from the membership register of the Nurses’ Association via email in 2019. A total of 454 nurses responded. The research context was simulated using a vignette.

Ethical considerations
Good scientific inquiry guidelines were followed. Permission to use the Nurses’ Moral Courage Scale© was obtained from the copyright holder. The ethical approval and permission to conduct the study were obtained from the participating university and the Nurses’ Association.

Findings
The mean value of potential whistle-blowers’ moral courage on a Visual Analogue Scale (0–10) was 8.55 and the mean score was 4.34 on a 5-point Likert scale. Potential whistle-blowers’ moral courage was associated with their socio-demographic, education, work, personality, and social responsibility-related background variables.

Discussion and conclusion
In health care, potential whistle-blowers seem to be quite morally courageous actors. The results offer opportunities for developing interventions, practices and education to support and encourage healthcare professionals in their whistleblowing. Research is needed to develop a theoretical construct that could eventually increase whistleblowing and reduce or prevent wrongdoing.

Here are some thoughts:

This study investigates the moral courage of healthcare professionals in whistleblowing scenarios. Utilizing a descriptive-correlational design, the researchers surveyed 454 nurses—including registered nurses, public health nurses, midwives, and nurse paramedics—using the Nurses' Moral Courage Scale, a video vignette depicting a wrongdoing situation, and open-ended questions. Findings revealed a high level of moral courage among participants, with an average score of 8.55 on a 0–10 Visual Analogue Scale and 4.34 on a 5-point Likert scale. The study identified associations between moral courage and various background factors such as socio-demographics, education, work experience, personality traits, and social responsibility. The authors suggest that these insights can inform the development of interventions and educational programs to support and encourage whistleblowing in healthcare settings, ultimately aiming to reduce and prevent unethical practices.

Saturday, June 28, 2025

An Update on Psychotherapy for the Treatment of PTSD

Rothbaum, B. O., & Watkins, L. E. (2025).
American Journal of Psychiatry, 182(5), 424–437.

Abstract

Posttraumatic stress disorder (PTSD) symptoms are part of the normal response to trauma. Most trauma survivors will recover over time without intervention, but a significant minority will develop chronic PTSD, which is unlikely to remit without intervention. Currently, only two medications, sertraline and paroxetine, are approved by the U.S. Food and Drug Administration to treat PTSD, and the combination of brexpiprazole and sertraline and MDMA-assisted therapy have FDA applications pending. These medications, and the combination of pharmacotherapy and psychotherapy, are not recommended as first-line treatments in any published PTSD treatment guidelines. The only interventions recommended as first-line treatments are trauma-focused psychotherapies; the U.S. Department of Veterans Affairs/Department of Defense PTSD treatment guideline recommends prolonged exposure (PE), cognitive processing therapy (CPT), and eye movement desensitization and reprocessing, and the American Psychological Association PTSD treatment guideline recommends PE, CPT, cognitive therapy, and trauma-focused cognitive-behavioral therapy. Although published clinical trials of psychedelic-assisted psychotherapy have not incorporated evidence-based PTSD psychotherapies, they have achieved greater response rates than other trials of combination treatment, and there is some enthusiasm about combining psychedelic medications with evidence-based psychotherapies. The state-of-the-art PTSD psychotherapies are briefly reviewed here, including their effects on clinical and neurobiological measures.

The article is paywalled, unfortunately.

Here is a summary and some thoughts.

In the evolving landscape of PTSD treatment, Rothbaum and Watkins reaffirm a crucial truth: trauma-focused psychotherapies remain the first-line, evidence-based interventions for posttraumatic stress disorder (PTSD), outperforming pharmacological approaches in both efficacy and durability.

The State of PTSD Treatment
While most individuals naturally recover from trauma, a significant minority develop chronic PTSD, which typically requires intervention. Current FDA-approved medications for PTSD—sertraline and paroxetine—offer only modest relief, and recent psychedelic-assisted therapy trials, though promising, have not yet integrated evidence-based psychotherapy approaches. As such, expert guidelines consistently recommend trauma-focused psychotherapies as first-line treatments.

Evidence-Based Therapies at the Core
The VA/DoD and APA guidelines converge on recommending prolonged exposure (PE) and cognitive processing therapy (CPT), with eye movement desensitization and reprocessing (EMDR), cognitive therapy, and trauma-focused CBT also strongly supported.

PE helps patients systematically confront trauma memories and triggers to promote extinction learning. Its efficacy is unmatched, with robust support from meta-analyses and neurobiological studies.

CPT targets maladaptive beliefs that develop after trauma, helping patients reframe distorted thoughts through cognitive restructuring.

EMDR, though somewhat controversial, remains a guideline-supported approach and continues to show effectiveness in trials.

Neurobiological Insights
Modern neuroscience supports these therapies: PTSD involves hyperactivation of fear and salience networks (e.g., amygdala) and underactivation of emotion regulation circuits (e.g., prefrontal cortex). Successful treatment—especially exposure-based therapy—enhances extinction learning and improves functional connectivity in these circuits. Moreover, cortisol patterns, genetic markers, and cardiovascular reactivity are emerging as potential predictors of treatment response.

Innovations and Expansions
Therapists are increasingly utilizing massed formats (e.g., daily sessions over 2 weeks), virtual reality exposure therapy, and early interventions in emergency settings. These models show high completion rates and comparable outcomes to traditional weekly formats.

One particularly innovative direction involves MDMA-assisted psychotherapy. Although still investigational, trials show higher remission rates when MDMA is paired with psychotherapy. The METEMP protocol (MDMA-enhanced PE) offers a translational model that integrates the strengths of both approaches.

Addressing Clinical Challenges
High dropout rates (27–50%) remain a concern, largely due to avoidance—a core PTSD symptom. Massed therapy formats have demonstrated improved retention. Additionally, comorbid conditions (e.g., depression, TBI, substance use) generally do not impede response to trauma-focused care and can be concurrently treated using integrated protocols like COPE (Concurrent Treatment of PTSD and Substance Use Disorders Using PE).

Toward Greater Access and Remission
Despite strong evidence, access to high-quality trauma-focused therapy remains limited outside military and VA systems. Telehealth, stepped care models, and broader dissemination of evidence-based practices are key to closing this gap.

Finally, Rothbaum and Watkins argue that remission—not just symptom reduction—must be the treatment goal. With renewed scientific rigor and integrative innovations like MDMA augmentation, the field is inching closer to more effective and enduring treatments.

Friday, June 27, 2025

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Kosmyna, N. K. et al. (2025).

Abstract

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.


Here are some thoughts:

This research is important for psychologists because it provides empirical evidence on how using large language models (LLMs) like ChatGPT, traditional search engines, or relying solely on one’s own cognition affects cognitive engagement, neural connectivity, and perceived ownership during essay writing tasks. The study used EEG to measure brain activity and found that participants who wrote essays unaided (Brain-only group) exhibited the highest neural connectivity and cognitive engagement, while those using LLMs showed the weakest. Notably, repeated LLM use led to reduced memory recall, lower perceived ownership of written work, and diminished ability to quote from their own essays, suggesting a measurable cognitive cost and potential decrease in learning skills. The findings highlight that while LLMs can provide immediate benefits, their use may undermine deeper learning and engagement, which has significant implications for educational practices and the integration of AI tools in learning environments.
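
For context on what "alpha and beta connectivity" means in practice, here is a generic sketch (not the authors' actual pipeline) of estimating alpha-band connectivity between two EEG channels using magnitude-squared coherence:

```python
# Generic sketch: alpha-band (8-13 Hz) connectivity between two EEG channels
# via magnitude-squared coherence, on synthetic data. Not the study's pipeline.
import numpy as np
from scipy.signal import coherence

fs = 256  # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Two synthetic channels sharing a 10 Hz component plus independent noise.
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + rng.normal(0, 1.0, t.size)
ch2 = 0.8 * shared + rng.normal(0, 1.0, t.size)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=fs * 2)
alpha = (f >= 8) & (f <= 13)
print(f"mean alpha-band coherence: {Cxy[alpha].mean():.2f}")
```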

Thursday, June 26, 2025

A Modular Spiking Neural Network-Based Neuro-Robotic System for Exploring Embodied Intelligence

Chen, Z., Sun, T., et al. (2024). 
2024 International Conference on
Advanced Robotics and Mechatronics (ICARM),
1093–1098.

Abstract

Bio-inspired construction of modular biological neural networks (BNNs) is gaining attention due to their innate stable inter-modular signal transmission ability, which is thought to underlie the emergence of biological intelligence. However, the complicated, laborious fabrication of BNNs with structural and functional connectivity of interest in vitro limits the further exploration of embodied intelligence. In this work, we propose a modular spiking neural network (SNN)-based neuro-robotic system by concurrently running SNN modeling and robot simulation. We show that the modeled mSNNs present complex calcium dynamics resembling mBNNs. In particular, spontaneous periodic network-wide bursts were observed in the mSNN, which could be further suppressed partially or completely with global chemical modulation. Moreover, we demonstrate that after complete suppression, intermodular signal transmission can still be evoked reliably via local stimulation. Therefore, the modeled mSNNs could either achieve reliable trans-modular signal transmission or add adjustable false-positive noise signals (spontaneous bursts). By interconnecting the modeled mSNNs with the simulated mobile robot, active obstacle avoidance and target tracking can be achieved. We further show that spontaneous noise impairs robot performance, which indicates the importance of suppressing spontaneous burst activities of modular networks for the reliable execution of robot tasks. The proposed neuro-robotic system embodies spiking neural networks with a mobile robot to interact with the external world, which paves the way for exploring the emergence of more complex biological intelligence.

Here are some thoughts:

This paper is pretty wild. These researchers wanted to create an AI that simulates human brain activity, embodied within a simulated mobile robot. The AI simulates calcium spiking in the brain, and the AI modules communicate with each other. Quieting the spontaneous spiking made the simulated robotic system perform its tasks more reliably.

Cognitive neuroscience seeks to uncover how neural activity gives rise to perception, decision-making, and behavior, often by studying the dynamics of brain networks. This research contributes significantly to that goal by modeling modular spiking neural networks (mSNNs) that replicate key features of biological neural networks, including spontaneous network bursts and inter-modular communication. These modeled networks demonstrate how structured neural activity can support reliable signal transmission, a fundamental aspect of cognitive processing. Importantly, they also allow for controlled manipulation of network states—such as through global chemical modulation—which provides a way to study how noise or spontaneous activity affects information processing.
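
For intuition about what modular signal transmission looks like computationally, here is a minimal toy sketch (a crude leaky integrate-and-fire model of my own, far simpler than the paper's calcium-based mSNN): a brief stimulus to module 1 evokes spiking that propagates to module 2 through sparse cross-connections.

```python
# Toy two-module leaky integrate-and-fire network (illustration only; the
# paper's mSNN models calcium dynamics and is far more detailed). A pulse to
# module 1 propagates to module 2 through sparse inter-modular links.
import numpy as np

rng = np.random.default_rng(42)
N = 50
W11 = (rng.random((N, N)) < 0.10).astype(float)  # recurrent links, module 1
W12 = (rng.random((N, N)) < 0.05).astype(float)  # sparse module-1 -> module-2 links

v1, v2 = np.zeros(N), np.zeros(N)
tau, thresh, syn = 0.9, 1.0, 0.15  # leak factor, spike threshold, synaptic gain

for step in range(50):
    stim = 1.2 if step < 5 else 0.0          # brief local stimulus to module 1
    s1 = (v1 >= thresh).astype(float)        # module 1 spike vector
    s2 = (v2 >= thresh).astype(float)
    v1[s1 == 1] = 0.0                        # reset fired neurons
    v2[s2 == 1] = 0.0
    v1 = tau * v1 + stim + syn * (W11 @ s1)  # leak + input + recurrence
    v2 = tau * v2 + syn * (W12 @ s1)         # module 2 driven only via module 1
    if step % 10 == 0:
        print(f"t={step:2d}  m1 spikes={int(s1.sum()):2d}  m2 spikes={int(s2.sum()):2d}")
```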

From an ethical standpoint, this research presents a valuable alternative to invasive or in vitro biological experiments. Traditional studies involving living neural tissue raise ethical concerns regarding animal use and the potential for suffering. By offering a synthetic yet biologically plausible model, this work reduces reliance on such methods while still enabling detailed exploration of neural dynamics. Furthermore, it opens new avenues for non-invasive experimentation in cognitive and clinical domains, aligning with ethical principles that emphasize minimizing harm and maximizing scientific benefit.

Wednesday, June 25, 2025

Neuron–astrocyte associative memory

Kozachkov, L., Slotine, J., & Krotov, D. (2025).
Proceedings of the National Academy of Sciences, 
122(21).

Abstract

Astrocytes, the most abundant type of glial cell, play a fundamental role in memory. Despite most hippocampal synapses being contacted by an astrocyte, there are no current theories that explain how neurons, synapses, and astrocytes might collectively contribute to memory function. We demonstrate that fundamental aspects of astrocyte morphology and physiology naturally lead to a dynamic, high-capacity associative memory system. The neuron–astrocyte networks generated by our framework are closely related to popular machine learning architectures known as Dense Associative Memories. Adjusting the connectivity pattern, the model developed here leads to a family of associative memory networks that includes a Dense Associative Memory and a Transformer as two limiting cases. In the known biological implementations of Dense Associative Memories, the ratio of stored memories to the number of neurons remains constant, despite the growth of the network size. Our work demonstrates that neuron–astrocyte networks follow a superior memory scaling law, outperforming known biological implementations of Dense Associative Memory. Our model suggests an exciting and previously unnoticed possibility that memories could be stored, at least in part, within the network of astrocyte processes rather than solely in the synaptic weights between neurons.

Significance

Recent experiments have challenged the belief that glial cells, which compose at least half of brain cells, are just passive support structures. Despite this, a clear understanding of how neurons and glia work together for brain function is missing. To close this gap, we present a theory of neuron–astrocyte networks for memory processing, using the Dense Associative Memory framework. Our findings suggest that astrocytes can serve as natural units for implementing this network in biological “hardware.” Astrocytes enhance the memory capacity of the network. This boost originates from storing memories in the network of astrocytic processes, not just in synapses, as commonly believed. These process-to-process communications likely occur in the brain and could help explain its impressive memory processing capabilities.
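
For readers unfamiliar with the Dense Associative Memory framework the paper builds on, here is a minimal retrieval sketch (a generic polynomial-energy formulation from the machine learning literature, not the authors' neuron–astrocyte model):

```python
# Minimal Dense Associative Memory retrieval (generic formulation, not the
# paper's neuron-astrocyte model). Stored patterns xi interact with the state
# through a steep separation function F(x) = x**n; larger n stores far more
# memories than a classical Hopfield network with pairwise synapses.
import numpy as np

rng = np.random.default_rng(0)
D, K, n = 64, 20, 3                    # neurons, stored patterns, polynomial degree
xi = rng.choice([-1, 1], size=(K, D))  # random binary memories

def update(state, steps=5):
    """Asynchronous updates that lower the energy E = -sum_mu (xi_mu . state)**n."""
    for _ in range(steps):
        for i in range(D):
            rest = xi @ state - xi[:, i] * state[i]   # overlaps excluding neuron i
            e_plus = np.sum((rest + xi[:, i]) ** n)   # candidate s_i = +1
            e_minus = np.sum((rest - xi[:, i]) ** n)  # candidate s_i = -1
            state[i] = 1 if e_plus >= e_minus else -1
    return state

probe = xi[0].copy()
flip = rng.choice(D, size=15, replace=False)
probe[flip] *= -1                      # corrupt 15 of 64 bits
recalled = update(probe)
print("overlap with stored memory:", int(recalled @ xi[0]), "/", D)
```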

Here are some thoughts:

This research represents a paradigm shift in our understanding of memory formation and storage. The paper examines how "astrocytes, the most abundant type of glial cell, play a fundamental role in memory" and notes that "most hippocampal synapses being contacted by an astrocyte."

For psychologists, this is revolutionary because it challenges the traditional neuron-centric view of memory. Previously, memory research focused almost exclusively on neuronal connections and synaptic plasticity. This study demonstrates that astrocytes - previously thought to be merely supportive cells - are active participants in memory processes. This has profound implications for:

Cognitive Psychology: It suggests memory formation involves a more complex cellular network than previously understood, potentially explaining individual differences in memory capacity and the mechanisms behind memory consolidation.

Learning Theory: The findings may require updating models of how associative learning occurs at the cellular level, moving beyond simple neuronal networks to include glial participation.

Memory Disorders: Understanding astrocyte involvement opens new avenues for researching conditions like Alzheimer's disease, where both neuronal and glial dysfunction occur.

Significance for Psychopharmacology

This research has transformative implications for drug development and treatment approaches:

Novel Drug Targets: If astrocytes are crucial for memory, pharmaceutical interventions could target astrocytic functions rather than focusing solely on neuronal receptors. This could lead to entirely new classes of cognitive enhancers or treatments for memory disorders.

Mechanism of Action: Many psychoactive drugs may work partially through astrocytic pathways that weren't previously recognized. This could explain why some medications have effects that aren't fully accounted for by their known neuronal targets.

Treatment Resistance: Some patients who don't respond to traditional neurotropic medications might benefit from drugs that target the astrocyte-neuron memory system.

Precision Medicine: Understanding the dual neuron-astrocyte system could help explain why individuals respond differently to the same medications, leading to more personalized treatment approaches.

This research fundamentally expands our understanding of the biological basis of memory beyond neurons to include the brain's most abundant cell type, potentially revolutionizing both theoretical frameworks in psychology and therapeutic approaches in psychopharmacology.

Tuesday, June 24, 2025

Why Do More Police Officers Die by Suicide Than in the Line of Duty?

Jaime Thompson
The New York Times
Originally published 8 May 25

Here is an excerpt:

American policing has paid much attention to the dangers faced in the line of duty, from shootouts to ambushes, but it has long neglected a greater threat to officers: themselves. More cops kill themselves every year than are killed by suspects. At least 184 public-safety officers die by suicide each year, according to First H.E.L.P., a nonprofit that has been collecting data on police suicide since 2016. An average of about 57 officers are killed by suspects every year, according to statistics from the Federal Bureau of Investigation. After analyzing data on death certificates, Dr. John Violanti, a research professor at the University at Buffalo, concluded that law-enforcement officers are 54 percent more likely to die by suicide than the average American worker. A lack of good data, however, has thwarted researchers, who have struggled to reach consensus on the problem’s scope. Recognizing the problem, Congress passed a law in 2020 requiring the F.B.I. to collect data on police suicide, but reporting remains voluntary.

“Suicide is something you just didn’t talk about in law enforcement,” says Chuck Wexler, the executive director of the Police Executive Research Forum (PERF). “It was shameful. It was weakness.” But a growing body of research has shown how chronic exposure to stress and trauma can impact the brain, causing impaired thinking, poor decision-making, a lack of empathy and difficulty distinguishing between real and perceived threats. Those were the very defects on display in the high-profile videos of police misconduct that looped across the country leading up to the killing of George Floyd by an officer in 2020. National outrage and widespread protests against the police were experienced as further stress by a force that already was, by many metrics, mentally and physically unwell. PERF now calls police suicide the “No. 1 officer-safety issue.”


Here are some thoughts:

Police officers face a significantly elevated risk of suicide compared with the general population, a reality that surpasses even the dangers they encounter in the line of duty. This heightened risk is often attributed to the cumulative impact of repeated exposure to traumatic events, which can lead to post-traumatic stress disorder (PTSD), depression, and anxiety. Some officers turn to substance abuse to cope with these emotional burdens, further compounding their difficulties. Research indicates that rates of depression among law enforcement officers are nearly twice those of the general public, underscoring the profound psychological toll of the profession. Compounding the problem, the culture within law enforcement often discourages officers from seeking help for mental health concerns because of stigma and fears of being perceived as weak or unfit for duty. There is therefore a pressing need for readily accessible, confidential mental health resources designed for the law enforcement community, including peer support programs and trauma-informed care approaches, to foster a culture of well-being and encourage officers to seek the support they deserve.

Monday, June 23, 2025

Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation

Tierney, A. A., et al. (2024).
NEJM Catalyst, 5(3).

Abstract

Clinical documentation in the electronic health record (EHR) has become increasingly burdensome for physicians and is a major driver of clinician burnout and dissatisfaction. Time dedicated to clerical activities and data entry during patient encounters also negatively affects the patient–physician relationship by hampering effective and empathetic communication and care. Ambient artificial intelligence (AI) scribes, which use machine learning applied to conversations to facilitate scribe-like capabilities in real time, have great potential to reduce documentation burden, enhance physician–patient encounters, and augment clinicians’ capabilities. The technology leverages a smartphone microphone to transcribe encounters as they occur but does not retain audio recordings. To address the urgent and growing burden of data entry, in October 2023, The Permanente Medical Group (TPMG) enabled ambient AI technology for 10,000 physicians and staff to augment their clinical capabilities across diverse settings and specialties. The implementation process leveraged TPMG’s extensive experience in large-scale technology instantiation and integration, incorporating multiple training formats, at-the-elbow peer support, patient-facing materials, rapid-cycle upgrades with the technology vendor, and ongoing monitoring. In the 10 weeks since implementation, the ambient AI tool has been used by 3,442 TPMG physicians to assist in as many as 303,266 patient encounters across a wide array of medical specialties and locations. In total, 968 physicians have enabled ambient AI scribes in ≥100 patient encounters, with one physician having enabled it to assist in 1,210 encounters. The response from physicians who have used the ambient AI scribe service has been favorable; they cite the technology’s capability to facilitate more personal, meaningful, and effective patient interactions and to reduce the burden of after-hours clerical work. In addition, early assessments of patient feedback have been positive, with some describing improved interaction with their physicians. Early evaluation metrics, based on an existing tool that evaluates the quality of human-generated scribe notes, find that ambient AI use produces high-quality clinical documentation for physicians’ editing. Further statistical analyses after AI scribe implementation also find that usage is linked with reduced time spent in documentation and in the EHR. Ongoing enhancements of the technology are needed and are focused on direct EHR integration, improved capabilities for incorporating medical interpretation, and enhanced workflow personalization options for individual users. Despite this technology’s early promise, careful and ongoing attention must be paid to ensure that the technology supports clinicians while also optimizing ambient AI scribe output for accuracy, relevance, and alignment in the physician–patient relationship.
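
As a rough illustration of the workflow the abstract describes (transcribe in real time, retain no audio, produce a draft note for physician review), here is a hypothetical sketch. The `asr_model` and `summarizer` objects, their methods, and the `EncounterNote` type are invented placeholders, not TPMG's or the vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class EncounterNote:
    draft_note: str                 # AI-drafted clinical documentation
    physician_approved: bool = False

def ambient_scribe(audio_stream, asr_model, summarizer) -> EncounterNote:
    # Transcribe the encounter as it occurs; only the transcript,
    # not the audio itself, is kept downstream.
    transcript = asr_model.transcribe(audio_stream)
    del audio_stream  # symbolic: no audio recording is retained
    # Draft a clinical note from the transcript for the physician to edit.
    draft = summarizer.to_clinical_note(transcript)
    return EncounterNote(draft_note=draft)

# The key design constraint from the article: the output is a draft.
# A physician must review and edit before the note is filed, e.g.:
#   note = ambient_scribe(stream, asr, summarizer)
#   note.physician_approved = physician_review(note)  # hypothetical step
```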

Key Takeaways

• Ambient artificial intelligence (AI) scribes show early promise in reducing clinicians’ burden, with a regional pilot noting a reduction in the amount of time spent constructing notes among users.

• Ambient AI scribes were found to be acceptable among clinicians and patients, largely improving the experience of both parties, with some physicians noting the transformational nature of the technology on their care.

• Although a review of 35 AI-generated transcripts resulted in an average score of 48 out of 50 across 10 key domains, AI scribes are not a replacement for clinicians. They can produce inconsistencies that require physicians’ review and editing to ensure that the notes remain aligned with the physician–patient relationship.

• Given the incredible pace of change, building a dynamic evaluation framework is essential to assess the performance of AI scribes across domains including engagement, effectiveness, quality, and safety.

Sunday, June 22, 2025

This article won’t change your mind. Here’s why

Lubrano, S. S. (2025, May 18).
The Guardian.

Here is an excerpt:

There are lots of reasons why debate (and indeed, information-giving and argumentation in general) tends to be ineffective at changing people’s political beliefs. Cognitive dissonance, a phenomenon I studied as part of my PhD research, is one. This is the often unconscious psychological discomfort we feel when faced with contradictions in our own beliefs or actions, and it has been well documented. We can see cognitive dissonance and its effects at work when people rapidly “reason” in ways that are really attempts to mitigate their discomfort with new information about strongly held beliefs. For example, before Trump was convicted of various charges in 2024, only 17% of Republican voters believed felons should be able to be president; directly after his conviction, that number rose to 58%. To reconcile two contradictory beliefs (that presidents shouldn’t do x, and that Trump should be president), an enormous number of Republican voters simply changed their mind about the former. In fact, Republican voters shifted their views on more or less all the things Trump had been convicted of: fewer felt it was immoral to have sex with a porn star, pay someone to stay silent about an affair, or falsify a business record. Nor is this effect limited to Trump voters: research suggests we all rationalise in this way, in order to hold on to the beliefs that let us keep operating as we have been. Or, ironically, to change some of our beliefs in response to new information, but often only in order to not have to sacrifice other strongly held beliefs.

But it’s not just psychological phenomena like cognitive dissonance that make debates and arguments relatively ineffective. As I lay out in my book, probably the most important reason words don’t change minds is that two other factors carry far more influence: our social relationships; and our own actions and experiences.

Here are some thoughts:

The article discusses how people often resist changing their minds, even when presented with strong evidence, due to the psychological and social costs involved. It explains that beliefs are deeply tied to personal identity and social relationships, making individuals reluctant to alter them to avoid feelings of inconsistency or social rejection. The psychological mechanism at play is cognitive dissonance, where holding contradictory beliefs causes discomfort, leading people to reject new information that conflicts with their existing views. Additionally, motivated reasoning drives individuals to interpret evidence in a way that aligns with their preexisting beliefs to maintain emotional and social harmony. The article suggests that fostering open, non-confrontational discussions and emphasizing shared values can help reduce resistance to changing one’s mind, as it lessens the perceived threat to identity and social bonds.

Persuading people is a lot like psychotherapy because both require creating a safe, non-judgmental space where individuals can explore conflicting beliefs without feeling defensive, allowing change to emerge from within rather than through forceful confrontation.

Saturday, June 21, 2025

A Framework for Language Technologies in Behavioral Research and Clinical Applications: Ethical Challenges, Implications, and Solutions

Diaz-Asper, C., Hauglid, M. K., et al. (2024).
American Psychologist, 79(1), 79–91.

Abstract

Technological advances in the assessment and understanding of speech and language within the domains of automatic speech recognition, natural language processing, and machine learning present a remarkable opportunity for psychologists to learn more about human thought and communication, evaluate a variety of clinical conditions, and predict cognitive and psychological states. These innovations can be leveraged to automate traditionally time-intensive assessment tasks (e.g., educational assessment), provide psychological information and care (e.g., chatbots), and when delivered remotely (e.g., by mobile phone or wearable sensors) promise underserved communities greater access to health care. Indeed, the automatic analysis of speech provides a wealth of information that can be used for patient care in a wide range of settings (e.g., mHealth applications) and for diverse purposes (e.g., behavioral and clinical research, medical tools that are implemented into practice) and patient types (e.g., numerous psychological disorders and in psychiatry and neurology). However, automation of speech analysis is a complex task that requires the integration of several different technologies within a large distributed process with numerous stakeholders. Many organizations have raised awareness about the need for robust systems for ensuring transparency, oversight, and regulation of technologies utilizing artificial intelligence. Since there is limited knowledge about the ethical and legal implications of these applications in psychological science, we provide a balanced view of both the optimism that is widely published on and also the challenges and risks of use, including discrimination and exacerbation of structural inequalities.
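
As a concrete, deliberately simplified example of what "automatic analysis of speech" can mean in such a pipeline, the sketch below computes a few lexical features of the kind NLP systems derive from ASR transcripts for cognitive and clinical screening. The feature set is an illustrative assumption, not the authors' method; real pipelines draw on far richer acoustic and linguistic measures.

```python
import re

def basic_speech_features(transcript: str, duration_s: float) -> dict:
    """Toy lexical features from an ASR transcript."""
    tokens = re.findall(r"[a-zA-Z']+", transcript.lower())
    n = max(len(tokens), 1)  # guard against empty transcripts
    return {
        "words_per_minute": 60.0 * len(tokens) / duration_s,  # speech rate
        "type_token_ratio": len(set(tokens)) / n,             # lexical diversity
        "mean_word_length": sum(map(len, tokens)) / n,
    }

print(basic_speech_features("the cat sat on the mat", duration_s=3.0))
# {'words_per_minute': 120.0, 'type_token_ratio': 0.833..., 'mean_word_length': 2.833...}
```

Even this toy example surfaces the ethical issues the authors raise: speech rate and lexical diversity vary with dialect, education, and second-language status, so features like these can encode structural inequalities if used naively for clinical prediction.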

Public Significance Statement

Computational advances in the domains of automatic speech recognition, natural language processing, and machine learning allow for the rapid and accurate assessment of a person’s speech for numerous purposes. The widespread adoption of these technologies permits psychologists an opportunity to learn more about psychological function, interact in new ways with research participants and patients, and aid in the diagnosis and management of various cognitive and mental health conditions. However, we argue that the current scope of the APA’s Ethical Principles of Psychologists and Code of Conduct is insufficient to address the ethical issues surrounding the application of artificial intelligence. Such a gap in guidance results in the onus falling directly on psychologists to educate themselves about the ethical and legal implications of these emerging technologies, potentially exacerbating the risks of their use in both research and practice.

Friday, June 20, 2025

Artificial intelligence and free will: generative agents utilizing large language models have functional free will

Martela, F. (2025).
AI and Ethics.

Abstract

Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback. Do such generative LLM agents possess free will? Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions. Building on Dennett’s intentional stance and List’s theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior. Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.
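
To make the "memory, planning, and execution units" architecture concrete, here is a hypothetical sketch of a Voyager-style agent loop. The `llm`, `memory`, and `environment` objects and all of their methods are invented for illustration and do not reproduce Voyager's actual implementation.

```python
def agent_loop(llm, memory, environment, max_iterations: int = 10):
    """Goal -> plan -> act -> feedback cycle for a generative agent."""
    goal = llm.propose_goal(context=memory.summary())     # self-generated goal
    plan = llm.make_plan(goal, context=memory.summary())  # concrete steps
    for _ in range(max_iterations):
        if not plan:
            break
        step = plan.pop(0)
        feedback = environment.execute(step)              # sensory feedback
        memory.store(step, feedback)
        if not feedback.success:
            # Refine tactics in light of what just happened.
            plan = llm.revise_plan(goal, context=memory.summary())
    return memory
```

The loop is the point of contact with the free-will argument: to predict what such an agent does next, an observer must posit its goal, the alternatives it weighs when revising its plan, and the control its intentions exert over execution, which is precisely the intentional-stance reasoning Martela invokes.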

Here are some thoughts:

This article explores whether advanced AI systems, particularly generative agents using large language models (LLMs), possess free will. The author argues that while these AI agents may not have “physical free will,” meaning the ability to alter physical causal chains, they do exhibit “functional free will”. Functional free will is defined as the capacity to display intentional agency, recognize genuine alternatives, and control actions based on internal intentions. The article uses examples like Voyager, an AI agent in Minecraft, and Spitenik, a hypothetical autonomous drone, to illustrate how these systems meet the criteria for functional free will.

This research is important for psychologists because it challenges traditional views on free will, which often center on human consciousness and metaphysical considerations. It compels psychologists to reconsider how we attribute agency and decision-making to various entities, including AI, and how this attribution shapes our understanding of behavior.

Thursday, June 19, 2025

Large Language Model (LLM) Algorithms in Reshaping Decision-Making and Cognitive Biases in the AI-Leading World: An Experimental Study.

Khatoon, H., Khan, M. L., & Irshad, A. (2025, January 22).
PsyArXiv.

Abstract

The rise of artificial intelligence (AI) has accelerated decision-making, since AI algorithmic recommendations may help reduce human limitations while increasing decision accuracy and efficiency. Large language model (LLM) algorithms are designed to enhance human decision-making competencies and remove possible cognitive biases. However, these algorithms can themselves be biased and lead to poor decision-making. Building on existing LLM tools (i.e., ChatGPT and Perplexity.ai), this study examines whether users who receive AI assistance during task-based decision-making show greater decision-making ability than peers who rely on their own cognitive processes. Using domain-independent LLMs, incentives, and scenario-based decision tasks, we find that the advice these AIs offered in decisive situations was biased and wrong, resulting in poor decision outcomes. Using public-access LLMs in crucial situations can produce ineffective outcomes for the advisee and inadvertent consequences for third parties. The findings highlight the need for ethical AI algorithms and for accurate assessments of trust in order to deploy these systems effectively. Together, these results raise concerns about relying on AI assistance in consequential decision-making without careful oversight.

Here are some thoughts:

This research is important to psychologists because it examines how collaboration with large language models (LLMs) like ChatGPT affects human decision-making, particularly in relation to cognitive biases. By using a modified Adult Decision-Making Competence battery, the study offers empirical data on whether AI assistance improves or impairs judgment. It highlights the psychological dynamics of trust in AI, the risk of overreliance, and the ethical implications of using AI in decisions that impact others. These findings are especially relevant for psychologists interested in cognitive bias, human-technology interaction, and the integration of AI into clinical, organizational, and educational settings.