Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, July 29, 2025

Moral learning and decision-making across the lifespan

Lockwood, P. L., Van Den Bos, W., & Dreher, J. (2024).
Annual Review of Psychology.

Abstract

Moral learning and decision-making are crucial throughout our lives, from infancy to old age. Emerging evidence suggests that there are important differences in learning and decision-making in moral situations across the lifespan, and these are underpinned by co-occurring changes in the use of model-based values and theory of mind. Here, we review the decision neuroscience literature on moral choices and moral learning considering four key concepts. We show how in the earliest years, a sense of self/other distinction is foundational. Sensitivity to intention versus outcome is crucial for several moral concepts and is most similar in our earliest and oldest years. Across all ages, basic shifts in the influence of theory of mind and model-free and model-based learning support moral decision-making. Moving forward, a computational approach to key concepts of morality can help provide a mechanistic account and generate new hypotheses to test across the whole lifespan.
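
The abstract's distinction between model-free and model-based learning is a standard one in decision neuroscience. As a purely illustrative sketch, not anything from the paper, the toy Python below contrasts a model-free agent that caches action values from reward history with a model-based agent that computes values from an explicit model of the task; all probabilities and parameters are invented.

```python
import numpy as np

# Toy two-action choice task, purely illustrative: action 0 usually leads to a
# good outcome for another person, action 1 usually does not. All numbers invented.
rng = np.random.default_rng(0)
p_good = np.array([0.8, 0.2])      # assumed probability each action ends well
reward = {True: 1.0, False: -1.0}  # subjective value of good vs. bad outcomes

# Model-free learner: caches action values from experienced rewards (TD-style).
q = np.zeros(2)
alpha = 0.1  # learning rate (arbitrary)
for _ in range(500):
    a = int(rng.integers(2))                 # sample actions at random
    r = reward[bool(rng.random() < p_good[a])]
    q[a] += alpha * (r - q[a])               # incremental, habit-like update

# Model-based learner: computes values directly from an explicit task model.
v_mb = p_good * reward[True] + (1 - p_good) * reward[False]

print("model-free cached values:  ", q.round(2))     # approaches [0.6, -0.6]
print("model-based planned values:", v_mb.round(2))  # exactly [0.6, -0.6]
```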

Here are some thoughts:

The article highlights that moral learning and decision-making evolve dynamically throughout the lifespan, with distinct patterns emerging at different developmental stages. From early childhood to old age, individuals shift from rule-based moral reasoning toward more complex evaluations that integrate intentions, outcomes, and social context.

Understanding these developmental trajectories is essential for psychologists, as it informs age-appropriate interventions and expectations regarding moral behavior. Neuroscientific findings reveal that key brain regions such as the ventromedial prefrontal cortex (vmPFC), temporoparietal junction (TPJ), and striatum play critical roles in processing empathy, fairness, guilt, and social norms. These insights help explain how neurological impairments or developmental changes can affect moral judgment, particularly useful in clinical and neuropsychological settings.

Social influence also plays a significant role, especially during adolescence, where peer pressure and reputational concerns strongly shape moral decisions. This has practical implications for therapists working with youth, including strategies to build resilience against antisocial influences and promote prosocial behaviors.

The research further explores how deficits in moral learning are linked to antisocial behaviors, psychopathy, and conduct disorders, offering valuable perspectives for forensic psychology and clinical intervention planning.

Lastly, the article emphasizes the importance of cultural sensitivity, noting that moral norms vary across societies and change over time. For practicing psychologists, this underscores the need to adopt culturally informed approaches when assessing and treating clients from diverse backgrounds.

Monday, July 28, 2025

The Law Meets Psychological Expertise: Eight Best Practices to Improve Forensic Psychological Assessment

Neal, T. M., Martire, K. A., et al. (2022).
Annual Review of Law and Social Science, 
18(1), 169–192.


Abstract
 
We review the state of forensic mental health assessment. The field is in much better shape than in the past; however, significant problems of quality remain, with much room for improvement. We provide an overview of forensic psychology's history and discuss its possible future, with multiple audiences in mind. We distill decades of scholarship from and about fundamental basic science and forensic science, clinical and forensic psychology, and the law of expert evidence into eight best practices for the validity of a forensic psychological assessment. We argue these best practices should apply when a psychological assessment relies on the norms, values, and esteem of science to inform legal processes. The eight key considerations include (a) foundational validity of the assessment; (b) validity of the assessment as applied; (c) management and mitigation of bias; (d) attention to quality assurance; (e) appropriate communication of data, results, and opinions; (f) explicit consideration of limitations and assumptions; (g) weighing of alternative views or disagreements; and (h) adherence with ethical obligations, professional guidelines, codes of conduct, and rules of evidence.

Here are some thoughts:

This article outlines eight best practices designed to enhance the quality and validity of forensic psychological assessments. It provides a historical context for forensic psychology, discussing its evolution and future directions. Drawing on extensive research from basic science, forensic science, clinical and forensic psychology, and the law of expert evidence, the authors present key considerations for psychologists conducting assessments in legal settings. These practices include ensuring foundational and applied validity, managing biases, implementing quality assurance, communicating data and opinions appropriately, explicitly considering limitations, weighing alternative perspectives, and adhering to ethical guidelines. The article underscores the importance of these best practices to improve the reliability and scientific rigor of psychological expertise within the legal system.

Sunday, July 27, 2025

Meta-analysis of risk factors for suicide after psychiatric discharge and meta-regression of the duration of follow-up

Tai, A., Pincham, H., Basu, A., & Large, M. (2025).
The Australian and New Zealand Journal of Psychiatry,
48674251348372. Advance online publication.

Abstract

Background: Rates of suicide following discharge from psychiatric hospitals are extraordinarily high in the first week post-discharge and then decline steeply over time. The aim of this meta-analysis is to evaluate the strength of risk factors for suicide after psychiatric discharge and to investigate the association between the strength of risk factors and duration of study follow-up.

Methods: A PROSPERO-registered meta-analysis of observational studies was performed in accordance with PRISMA guidelines. Post-discharge suicide risk factors reported five or more times were synthesised using a random-effects model. Mixed-effects meta-regression was used to examine whether the strength of suicide risk factors could be explained by duration of study follow-up.

Results: Searches located 83 primary studies. From this, 63 risk estimates were meta-analysed. The strongest risk factors were previous self-harm (odds ratio = 2.75, 95% confidence interval = [2.37, 3.19]), suicidal ideation (odds ratio = 2.15, 95% confidence interval = [1.73, 2.68]), depressive symptoms (odds ratio = 1.84, 95% confidence interval = [1.48, 2.30]), and high-risk categorisation (odds ratio = 7.65, 95% confidence interval = [5.48, 10.67]). Significantly protective factors included age ⩽30, age ⩾65, post-traumatic stress disorder, and dementia. The effect sizes for the strongest post-discharge suicide risk factors did not decline over longer periods of follow-up.

Conclusion: The effect sizes of post-discharge suicide risk factors were generally modest, suggesting that clinical risk factors may have limited value in distinguishing between high-risk and low-risk groups. The highly elevated rates of suicide immediately after discharge and their subsequent decline remain unexplained.
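
For readers who want to see how pooled estimates like the odds ratios above are produced, here is a minimal sketch of one standard random-effects method (DerSimonian-Laird pooling on the log-odds scale). The study-level inputs are invented; this is not the authors' code or data.

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs (invented, not the paper's data).
or_est = np.array([2.1, 3.4, 2.6, 1.9, 3.0])
ci_low = np.array([1.2, 1.8, 1.5, 0.9, 1.6])
ci_high = np.array([3.7, 6.4, 4.5, 4.0, 5.6])

# Pool on the log scale; recover each study's standard error from its CI width.
y = np.log(or_est)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2  # inverse-variance (fixed-effect) weights

# DerSimonian-Laird estimate of between-study variance tau^2.
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects weights give the pooled odds ratio and its 95% CI.
w_re = 1 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled OR = {np.exp(y_re):.2f}, 95% CI "
      f"[{np.exp(y_re - 1.96 * se_re):.2f}, {np.exp(y_re + 1.96 * se_re):.2f}]")
```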

Saturday, July 26, 2025

Reimagining "Multiple Relationships" in Psychotherapy: Decolonial/Liberation Psychologies and Communal Selfhood

Lacerda-Vandenborn, E., et al. (2025).
American Psychologist, 80(4), 522–534.

Abstract

Promoting decolonial and liberation psychologies (DLPs) requires psychologists to critically interrogate taken-for-granted assumptions pertaining to psychotherapy relationships. One fruitful area of interrogation surrounds conceptualizations and practices concerning multiple relationships (MRs), wherein a psychologist and client share another form of relationship outside of the psychotherapy context. The prevention or minimization of MRs is widely viewed as an ethical imperative, codified within professional ethics codes and further encouraged through insurance and liability practices. From the standpoint of DLPs, the profession has not adequately grasped the extent to which psychotherapy relationships reflect individualistic selves that facilitate psychologists’ serving, however unwittingly, as “handmaidens of the status quo.” We present three practitioner testimonios from among our authors—Indigenous, Muslim, and lesbian, gay, bisexual, transgender, queer, questioning, and other sexual/gender minorities—to concretely demonstrate how the professional and ethical framing around this ubiquitous practice within psychology has served to flatten human relationships within a colonizing frame. We then discuss three problematic assumptions concerning MRs that are reflected in the American Psychological Association’s Ethics Code. We offer communal selfhood, a theoretical framework that aligns with DLPs, as a potential space for understanding and reframing MRs. We conclude with general recommendations for conceptualizing therapeutic relationships without recourse to a problematic conceptualization of MRs.

Public Significance Statement

Decolonial and liberation psychologies challenge conventional thinking concerning “multiple relationships” in psychotherapy. Discouragement of multiple relationships reflects an individualistic ideology and risk-aversive managerialism, protecting the profession more than promoting public welfare. Professional and ethical reforms, in line with a “communal selfhood” framework, would reinforce the profession’s commitments toward antiracism and anticolonialism.

Here are some thoughts:

The paper critically examines the traditional ethical stance on "multiple relationships" (MRs) in psychotherapy, arguing that the prevailing individualistic, risk-averse approach is often unsuitable for diverse communities. The article uniquely applies decolonial and liberation psychologies (DLPs) to challenge these Western-centric norms, advocating for a "communal selfhood" framework. It stands out by featuring compelling practitioner testimonios from Indigenous, Muslim, and LGBTQ+ psychologists, illustrating how rigid MR prohibitions can be detrimental in community-oriented contexts where interconnected relationships are vital for trust and healing. The article not only critiques existing guidelines but also offers recommendations for systemic reform, aiming to foster antiracism and anticolonialism within the psychology profession.

Friday, July 25, 2025

Crossing the Line: Daubert, Dual Roles, and the Admissibility of Forensic Mental Health Testimony

Gordon, S. G. (2016).
SSRN Electronic Journal.
Scholarly Works. 969.

Abstract

Psychiatrists and other mental health professionals often testify as forensic experts in civil commitment and criminal competency proceedings. When an individual clinician assumes both a treatment and a forensic role in the context of a single case, however, that clinician forms a dual relationship with the patient—a practice that creates a conflict of interest and violates professional ethical guidelines. The court, the parties, and the patient are all affected by this conflict and the biased testimony that may result from dual relationships. When providing forensic testimony, the mental health professional’s primary duty is to the court, not to the patient, and she has an obligation to give objective and truthful testimony. But this testimony can result in the patient’s detention or punishment, a legal outcome that implicates the mental health professional’s corresponding obligation to “do no harm” to the patient. Moreover, the conflict of interest created by a dual relationship can affect the objectivity and reliability of forensic testimony.

A dual clinical and forensic relationship with a single patient is contrary to quality patient care, and existing clinical and forensic ethical guidelines strongly discourage the practice. Notwithstanding the mental health community’s general consensus about the impropriety of the practice, many courts do not question the mental health professional’s ability to provide forensic testimony for a patient with whom she has a simultaneous clinical relationship. Moreover, some state statutes require or encourage clinicians at state-run facilities to engage in these multiple roles. This Article argues that the inherent conflict created by these dual roles does not provide a reliable basis for forensic mental health testimony under Federal Rule of Evidence 702 and should not be admitted as reliable expert testimony by courts. Because dual relationships are often initiated due to provider shortages and the unavailability of neutral forensic examiners, this Article will also discuss the use of telemedicine as a way to provide forensic evaluations in underserved areas, especially those where provider shortages have prompted mental health professionals to engage in dual clinical and forensic roles. Finally, this Article argues that courts should exercise their powers more broadly under Federal Rule of Evidence 706 to appoint neutral and independent mental health experts to conduct forensic evaluations in civil commitment and criminal competency proceedings.

Here are some thoughts:

The article explores the ethical and legal complexities surrounding mental health professionals who serve in dual roles—both as clinicians and forensic evaluators. The article highlights how these dual relationships can compromise objectivity and reliability in forensic testimony, a concern widely recognized within the psychiatric and psychological communities. Despite professional ethical codes discouraging such practices, courts often fail to exclude testimony from clinicians offering forensic opinions about their own patients. This inconsistency is particularly problematic under the Daubert standard, which mandates that trial judges act as gatekeepers to ensure expert testimony is both relevant and reliable. The piece argues that violating professional ethical norms—such as those against dual relationships—should be considered when evaluating the admissibility of forensic mental health testimony, especially since these violations are seen as markers of unreliability by the relevant scientific community. Additionally, the article touches on the practical implications of these dual role dilemmas, including the impact on patient care, legal outcomes, and the integrity of the judicial process. It concludes with a call for courts to take professional ethics more seriously when assessing the admissibility of expert testimony in forensic mental health cases.

Thursday, July 24, 2025

The uselessness of AI ethics

Munn, L. (2022).
AI and Ethics, 3(3), 869–877.

Abstract

As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.

Here are some thoughts:

This paper is important for multiple reasons. First, it critically examines how artificial intelligence—increasingly embedded in areas like healthcare, education, law enforcement, and social services—can perpetuate racial, gendered, and socioeconomic biases, often under the guise of neutrality and objectivity. These systems can influence or even determine outcomes in mental health diagnostics, hiring practices, criminal justice risk assessments, and educational tracking, all of which have profound psychological implications for individuals and communities. Psychologists, particularly those working in clinical, organizational, or forensic fields, must understand how these technologies shape behavior, identity, and access to resources.

Second, the article highlights how ethical principles guiding AI development are often vague, inconsistently applied, and disconnected from real-world impacts. This raises concerns about the psychological effects of deploying systems that claim to promote fairness or well-being but may actually deepen inequalities or erode trust in institutions. For psychologists involved in policy-making or advocacy, this underscores the need to push for more robust, evidence-based frameworks that consider human behavior, cultural context, and systemic oppression.

Finally, the piece calls attention to the broader sociopolitical systems in which AI operates, urging a shift from abstract ethical statements to concrete actions that address structural inequities. This aligns with growing interest in community psychology and critical approaches that emphasize social justice and the importance of centering marginalized voices. Ultimately, understanding the limitations and risks of current AI ethics frameworks allows psychologists to better advocate for humane, equitable, and psychologically informed technological practices.

Wednesday, July 23, 2025

Pharmacotherapy for post-traumatic stress disorder: systematic review and meta-analysis

Jia, Y., Ye, Z., et al. (2025).
Therapeutic Advances in Psychopharmacology,
15, 20451253251342628.

Abstract

Background: Post-traumatic stress disorder (PTSD) is a prevalent mental illness with a high disability rate. The neurobiological abnormalities in PTSD suggest that drug therapy may have certain therapeutic effects. According to the recommendations of clinical guidelines for PTSD, the current clinical preference is for selective serotonin reuptake inhibitors (SSRIs) or serotonin and norepinephrine reuptake inhibitors (SNRIs). Nevertheless, the efficacy of other types of drugs remains uncertain, which impacts the selection of personalized treatment for patients.

Objectives: The aim of this meta-analysis was to assess the efficacy and acceptability of drugs with different pharmacological mechanisms in alleviating PTSD symptoms by comparing the response rates and dropout rates of different drug treatment groups in randomized clinical trials.

Design: Systematic review and meta-analysis.

Methods: We searched and analyzed 52 reports that described the efficacy and acceptability of medication for PTSD. Among these, 49 trials used the dropout rate as an acceptability indicator, and 52 trials used the response rate as an efficacy indicator.

Results: In the 49 trials with the dropout rate as the indicator, the dropout rate was 29% (95% confidence interval, 0.26-0.33; n = 3870). In the 52 trials with the response rate as the indicator, the response rate was 39% (95% confidence interval, 0.33-0.45; n = 3808). After drug treatment, the core symptoms of PTSD were significantly improved. This meta-analysis indicated that there was no significant difference between antidepressants and antipsychotics in improving clinical symptoms and acceptability. However, antidepressants may have a slight advantage in efficacy, although with a higher dropout rate.

Conclusion: Drug treatment is an effective rehabilitation method for PTSD patients, and individualized drug management should be considered.

Plain language summary

The purpose of this study was to assess the acceptability and efficacy of all types of pharmacotherapeutic agents in reducing the symptoms of PTSD. In this systematic meta-analysis, the dropout and response rates of various pharmacotherapy groups reported by randomized clinical trials were compared. A total of 52 reports that described the acceptability and efficacy of PTSD pharmacotherapies were retrieved and analyzed. This meta-analysis supports the conclusion that antidepressants and antipsychotics show no significant difference in improving clinical symptoms and acceptability; however, antidepressants may have a slight advantage in efficacy, albeit with a higher dropout rate, so individualized drug management should be considered.

Tuesday, July 22, 2025

Technology ethics assessment: Politicising the ‘Socratic approach.’

Sparrow, R. (2023).
Business Ethics, the Environment &
Responsibility, 32(2), 454–466.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.

Here are some thoughts:

This article is important to psychologists because it offers a structured, politically aware framework—the Socratic approach—for evaluating the ethical implications of technology. It emphasizes how technologies are not neutral but can reinforce power imbalances, deepen social inequalities, and reshape human behavior and relationships. For psychologists working in areas such as human-computer interaction, organizational behavior, or digital well-being, this tool supports critical reflection on how technological design influences users' autonomy, identity, and social dynamics. By integrating political dimensions into ethical assessment, the article encourages psychologists to consider broader societal impacts, including issues of justice, inclusion, and long-term consequences, making it especially relevant in an era of rapid technological change.

Monday, July 21, 2025

Emotion and deliberative reasoning in moral judgment.

Cummins, D. D., & Cummins, R. C. (2012).
Frontiers in Psychology, 3, 328.

Abstract

According to an influential dual-process model, a moral judgment is the outcome of a rapid, affect-laden process and a slower, deliberative process. If these outputs conflict, decision time is increased in order to resolve the conflict. Violations of deontological principles proscribing the use of personal force to inflict intentional harm are presumed to elicit negative affect which biases judgments early in the decision-making process. This model was tested in three experiments. Moral dilemmas were classified using (a) decision time and consensus as measures of system conflict and (b) the aforementioned deontological criteria. In Experiment 1, decision time was either unlimited or reduced. The dilemmas asked whether it was appropriate to take a morally questionable action to produce a “greater good” outcome. Limiting decision time reduced the proportion of utilitarian (“yes”) decisions, but contrary to the model’s predictions, (a) vignettes that involved more deontological violations logged faster decision times, and (b) violation of deontological principles was not predictive of decisional conflict profiles. Experiment 2 ruled out the possibility that time pressure simply makes people more likely to say “no.” Participants made a first decision under time constraints and a second decision under no time constraints. One group was asked whether it was appropriate to take the morally questionable action while a second group was asked whether it was appropriate to refuse to take the action. The results replicated those of Experiment 1 regardless of whether “yes” or “no” constituted a utilitarian decision. In Experiment 3, participants rated the pleasantness of positive visual stimuli prior to making a decision. Contrary to the model’s predictions, the number of deontological decisions increased in the positive affect rating group compared to a group that engaged in a cognitive task or a control group that engaged in neither task. These results are consistent with the view that early moral judgments are influenced by affect. But they are inconsistent with the view that (a) violations of deontological principles are predictive of differences in early, affect-based judgment or that (b) engaging in tasks that are inconsistent with the negative emotional responses elicited by such violations diminishes their impact.

Here are some thoughts:

This research investigates the role of emotion and cognitive processes in moral decision-making, testing a dual-process model that posits moral judgments arise from a conflict between rapid, affect-driven (System 1) and slower, deliberative (System 2) processes. Across three experiments, participants were presented with moral dilemmas involving utilitarian outcomes (sacrificing few to save many) and deontological violations (using personal force to intentionally harm), with decision times manipulated to assess how these factors influence judgment. The findings challenge the assumption that deontological decisions are always driven by fast emotional responses: while limiting decision time generally reduced utilitarian judgments, exposure to pleasant emotional stimuli unexpectedly increased deontological responses, suggesting that emotional context, not just negative affect from deontological violations, plays a significant role. Additionally, decisional conflict—marked by low consensus and long decision times—was not fully predicted by deontological criteria, indicating other factors influence moral judgment. Overall, the study supports a dual-process framework but highlights the complexity of emotion's role, showing that both utilitarian and deontological judgments can be influenced by affective states and intuitive heuristics rather than purely deliberative reasoning.

Sunday, July 20, 2025

Milgram shock-study imaginal replication: how far do you think you would go?

Mazzocco, P. J., Reitler, K., et al. (2025).
Current Psychology.

Abstract

Online adult participants (N = 414) read a gripping first-person account of the classic 1963 Milgram shock study and were asked to predict the responses of both themselves and “the average person”. Prior to making predictions, half were told that 65% of participants exhibited complete obedience throughout the duration of the original study, whereas another half were given no information about the results. In general, participants predicted much less obedience than was shown in the actual Milgram study. In addition, consistent with the better-than-average effect, participants predicted significantly more personal disobedience in response to the scenario compared to their average person predictions. Prior knowledge of the Milgram study did not significantly impact participants’ predictions about their own behavior in an identical scenario. These results suggest that adults are unable or unwilling to incorporate social scientific research, specifically the Milgram obedience findings, into perceptions of their own likely behavior.

Here are some thoughts:

This research is an extension of Milgram’s classic obedience experiments, focusing on how individuals predict their own and others’ behavior in morally challenging situations involving authority. It is relevant to the practice of psychology because it explores core concepts such as obedience, moral decision-making, and social influence, which are central to understanding human behavior in social contexts.

The study investigates how people perceive their susceptibility to situational pressures and highlights cognitive biases such as the better-than-average effect, where individuals believe they are more likely to resist harmful obedience than the average person. This has implications for ethics training and interventions aimed at promoting moral courage and resistance to undue authority. Furthermore, the research contributes to understanding individual differences—such as personality traits and authoritarian tendencies—that may moderate responses to authority figures.

Saturday, July 19, 2025

Morality on the road: the ADC model in low-stakes traffic vignettes

Pflanzer, M., Cecchini, D., Cacace, S.,
& Dubljević, V. (2025).
Frontiers in Psychology, 16.

Introduction: In recent years, the ethical implications of traffic decision-making, particularly in the context of autonomous vehicles (AVs), have garnered significant attention. While much of the existing research has focused on high-stakes moral dilemmas, such as those exemplified by the trolley problem, everyday traffic situations—characterized by mundane, low-stakes decisions—remain underexplored.

Methods: This study addresses this gap by empirically investigating the applicability of the Agent-Deed-Consequences (ADC) model in the moral judgment of low-stakes traffic scenarios. Using a vignette approach, we surveyed professional philosophers to examine how their moral judgments are influenced by the character of the driver (Agent), their adherence to traffic rules (Deed), and the outcomes of their actions (Consequences).

Results: Our findings support the primary hypothesis that each component of the ADC model significantly influences moral judgment, with positive valences in agents, deeds, and consequences leading to greater moral acceptability. We additionally explored whether participants’ normative ethical leanings–classified as deontological, utilitarian, or virtue ethics–influenced how they weighted ADC components. However, no moderating effects of moral preference were observed. The results also reveal interaction effects among some components, illustrating the complexity of moral reasoning in traffic situations.

Discussion: The study’s implications are crucial for the ethical programming of AVs, suggesting that these systems should be designed to navigate not only high-stakes dilemmas but also the nuanced moral landscape of everyday driving. Our work creates a foundation for stakeholders to integrate human moral judgments into AV decision-making algorithms. Future research should build on these findings by including a more diverse range of participants and exploring the generalizability of the ADC model across different cultural contexts.

Here are some thoughts on the modern-day trolley problem:

This article presents an alternative to the trolley problem framework for understanding moral decision-making in traffic scenarios, particularly for autonomous vehicle programming. While trolley problem research focuses on high-stakes, life-or-death dilemmas where one must choose between unavoidable harms, the authors argue this approach oversimplifies real-world traffic scenarios and lacks ecological validity. Instead, they propose the Agent-Deed-Consequences (ADC) model, which evaluates moral judgment based on three components: the character and intentions of the driver (Agent), their compliance with traffic rules (Deed), and the outcome of their actions (Consequences). The study surveyed 274 professional philosophers using low-stakes traffic vignettes and found that all three ADC components significantly influence moral judgment, with rule-following having the strongest effect, followed by character and outcomes. Notably, philosophers with different ethical frameworks (utilitarian, deontological, virtue ethics) showed similar judgment patterns, suggesting broad consensus on traffic morality. The researchers argue that "moral decision-making in everyday situations may contribute to the prevention of high-stakes emergencies, which do not arise without mundane bad decisions happening first," emphasizing that autonomous vehicles should be programmed to handle the nuanced moral landscape of ordinary driving decisions rather than just extreme emergency scenarios. This approach integrates virtue ethics, deontological ethics, and consequentialist considerations into a comprehensive framework that better reflects the complexity of real-world traffic moral reasoning.
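
To make the vignette design concrete: the analysis amounts to modeling acceptability ratings as a function of Agent, Deed, and Consequence valence and their interactions. A hypothetical simulation of that kind of analysis (simulated ratings, invented effect sizes, not the study's dataset) might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000  # simulated vignette ratings

# Each ADC component coded +1 (positive valence) or -1 (negative valence).
df = pd.DataFrame(rng.choice([-1, 1], size=(n, 3)),
                  columns=["agent", "deed", "consequence"])

# Simulated acceptability ratings: Deed weighted most, then Agent, then
# Consequence, echoing the ordering described above; the weights are invented.
df["rating"] = (0.8 * df["deed"] + 0.5 * df["agent"] + 0.3 * df["consequence"]
                + 0.1 * df["deed"] * df["agent"] + rng.normal(0, 1, n))

# Full-factorial model with all main effects and interactions.
model = smf.ols("rating ~ agent * deed * consequence", data=df).fit()
print(model.params.round(2))  # recovers the simulated main effects and interaction
```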

Friday, July 18, 2025

Adversarial testing of global neuronal workspace and integrated information theories of consciousness

Ferrante, O., et al. (2025).
Nature.

Abstract

Different theories explain how subjective experience arises from brain activity. These theories have independently accrued evidence, but have not been directly compared. Here we present an open science adversarial collaboration directly juxtaposing integrated information theory (IIT) and global neuronal workspace theory (GNWT) via a theory-neutral consortium. The theory proponents and the consortium developed and preregistered the experimental design, divergent predictions, expected outcomes and interpretation thereof. Human participants (n = 256) viewed suprathreshold stimuli for variable durations while neural activity was measured with functional magnetic resonance imaging, magnetoencephalography and intracranial electroencephalography. We found information about conscious content in visual, ventrotemporal and inferior frontal cortex, with sustained responses in occipital and lateral temporal cortex reflecting stimulus duration, and content-specific synchronization between frontal and early visual areas. These results align with some predictions of IIT and GNWT, while substantially challenging key tenets of both theories. For IIT, a lack of sustained synchronization within the posterior cortex contradicts the claim that network connectivity specifies consciousness. GNWT is challenged by the general lack of ignition at stimulus offset and limited representation of certain conscious dimensions in the prefrontal cortex. These challenges extend to other theories of consciousness that share some of the predictions tested here. Beyond challenging the theories, we present an alternative approach to advance cognitive neuroscience through principled, theory-driven, collaborative research and highlight the need for a quantitative framework for systematic theory testing and building.

Here are some thoughts:

This research explores a major collaborative effort to empirically test two leading theories of consciousness: Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). These theories represent two of the most prominent perspectives among the more than 200 ideas currently proposed to explain how subjective experience arises from brain activity. GNWT suggests that consciousness occurs when information is globally broadcast across the brain, particularly involving the prefrontal cortex. In contrast, IIT posits that consciousness corresponds to the integration of information in the brain, especially within the posterior cortex.

To evaluate these theories, the Cogitate Consortium organized an “adversarial collaboration,” in which proponents of both theories, along with neutral researchers, agreed on specific, testable predictions derived from each model. IIT predicted that conscious experience should involve sustained synchronization of activity in the posterior cortex, while GNWT predicted that consciousness would involve a “neural ignition” process and that conscious content could be decoded from the prefrontal cortex. These hypotheses were tested across several labs using consistent experimental protocols.

The findings, however, were inconclusive. The data did not reveal the sustained posterior synchronization expected by IIT, nor did it consistently support GNWT’s predictions about prefrontal cortex activity and neural ignition. Although the results presented challenges for both theories, they did not decisively support or refute either one. Importantly, the study marked a significant step forward in the scientific investigation of consciousness. It demonstrated the value of collaborative, theory-neutral research and addressed a long-standing problem in consciousness science—namely, that most studies have been conducted by proponents of specific theories, often resulting in confirmation bias.

The project was also shaped by insights from psychologist Daniel Kahneman, who pioneered the idea of adversarial collaboration. He noted that scientists are rarely persuaded to abandon their theories even in the face of counter-evidence. While this kind of theoretical stubbornness might seem like a flaw, the article argues it can be productive when managed within a collaborative and self-correcting scientific culture. Ultimately, the study underscores how difficult it is to unravel the nature of consciousness and suggests that progress may require both improved experimental methods and potentially a conceptual revolution. Still, by embracing open collaboration, the scientific community has taken a crucial step toward better understanding one of the most complex problems in science.

Thursday, July 17, 2025

Cognitive bias and how to improve sustainable decision making

Korteling, J. E. H., Paradies, G. L., &
Sassen-van Meer, J. P. (2023). 
Frontiers in Psychology, 14, 1129835.

Abstract

The rapid advances of science and technology have provided a large part of the world with all conceivable needs and comfort. However, this welfare comes with serious threats to the planet and many of its inhabitants. An enormous amount of scientific evidence points at global warming, mass destruction of bio-diversity, scarce resources, health risks, and pollution all over the world. These facts are generally acknowledged nowadays, not only by scientists, but also by the majority of politicians and citizens. Nevertheless, this understanding has caused insufficient changes in our decision making and behavior to preserve our natural resources and to prevent upcoming (natural) disasters. In the present study, we try to explain how systematic tendencies or distortions in human judgment and decision-making, known as “cognitive biases,” contribute to this situation. A large body of literature shows how cognitive biases affect the outcome of our deliberations. In natural and primordial situations, they may lead to quick, practical, and satisfying decisions, but these decisions may be poor and risky in a broad range of modern, complex, and long-term challenges, like climate change or pandemic prevention. We first briefly present the social-psychological characteristics that are inherent to (or typical for) most sustainability issues. These are: experiential vagueness, long-term effects, complexity and uncertainty, threat of the status quo, threat of social status, personal vs. community interest, and group pressure. For each of these characteristics, we describe how this relates to cognitive biases, from a neuro-evolutionary point of view, and how these evolved biases may affect sustainable choices or behaviors of people. Finally, based on this knowledge, we describe influence techniques (interventions, nudges, incentives) to mitigate or capitalize on these biases in order to foster more sustainable choices and behaviors.

Here are some thoughts:

The article explores why, despite widespread scientific knowledge and public awareness of urgent sustainability issues such as climate change, biodiversity loss, and pollution, there is still insufficient behavioral and policy change to effectively address these problems. The authors argue that cognitive biases, systematic errors in human thinking, play a significant role in hindering sustainable decision-making. These biases evolved to help humans make quick decisions in immediate, simple contexts but are poorly suited for the complex, long-term, and abstract nature of sustainability challenges.

Sustainability issues have several psychological characteristics that make them particularly vulnerable to cognitive biases. These include experiential vagueness, where problems develop slowly and are difficult to perceive directly; long-term effects, where benefits of sustainable actions are delayed while costs are immediate; complexity and uncertainty; threats to the status quo and social standing; conflicts between personal and community interests; and social pressures that discourage sustainable behavior. The article highlights specific cognitive biases linked to these characteristics, such as hyperbolic discounting (the preference for immediate rewards over future benefits), normalcy bias (underestimating the likelihood and impact of disasters), and the tragedy of the commons (prioritizing personal gain over collective welfare), along with others like confirmation bias, the endowment effect, and sunk-cost fallacy, all of which skew judgment and impede sustainable choices.
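
Hyperbolic discounting has a simple standard form: the present value of an amount A delayed by D is V = A / (1 + kD), where k indexes impatience. A quick illustrative sketch (with an arbitrary k) shows why a large, distant climate benefit can lose out to a small immediate cost:

```python
# Hyperbolic discounting: V = A / (1 + k * D); k is an arbitrary impatience parameter.
def hyperbolic_value(amount: float, delay_years: float, k: float = 1.0) -> float:
    return amount / (1 + k * delay_years)

# A large climate benefit 30 years away versus a small cost paid today.
print(hyperbolic_value(100, 30))  # ~3.23: the distant benefit, discounted
print(hyperbolic_value(5, 0))     # 5.00: the immediate cost looms larger
```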

To address these challenges, the authors recommend interventions that leverage or counteract these biases through environmental and contextual changes rather than solely relying on education or bias training. Techniques such as nudges, incentives, framing effects, and emphasizing benefits to family or in-groups can make sustainable choices easier and more appealing. The key takeaway is that understanding and addressing cognitive biases is essential for improving sustainable decision-making at both individual and policy levels. Policymakers and organizations should design interventions that account for human psychological tendencies to foster more sustainable behaviors effectively.

Wednesday, July 16, 2025

The moral blueprint is not necessary for STEM wisdom

Kachhiyapatel, N., & Grossmann, I. (2025, June 11).
PsyArXiv

Abstract

How can one bring wisdom into STEM education? One popular position holds that wise judgment follows from teaching morals and ethics in STEM. However, wisdom scholars debate the causal role of morality and whether cultivating a moral blueprint is a necessary condition for wisdom. Some philosophers and education scientists champion this view, whereas social psychologists and cognitive scientists argue that moral features like prosocial behavior are reinforcing factors or outcomes of wise judgment rather than prerequisites. This debate matters particularly for science and technology, where wisdom-demanding decisions typically involve incommensurable values and radical uncertainty. Here, we evaluate these competing positions through four lines of evidence. First, empirical research shows that heightened moralization aligns with foolish rejection of scientific claims, political polarization, and value extremism. Second, economic scholarship on folk theorems demonstrates that wisdom-related metacognition—perspective-integration, context-sensitivity, and balancing long- and short-term goals—can give rise to prosocial behavior without an a priori moral blueprint. Third, in real life moral values often compete, making metacognition indispensable to balance competing interests for the common good. Fourth, numerous scientific domains require wisdom yet operate beyond moral considerations. We address potential objections about immoral and Machiavellian applications of blueprint-free wisdom accounts. Finally, we explore implications for giftedness: what exceptional wisdom looks like in STEM contexts, and how to train it. Our analysis suggests that STEM wisdom emerges not from prescribed moral codes but from metacognitive skills that enable navigation of complexity and uncertainty.

Here are some thoughts:

This article challenges the idea that wisdom in STEM and other complex domains requires a fixed moral blueprint. Instead, it highlights perspectival metacognition—skills like perspective-taking, intellectual humility, and balancing short- and long-term outcomes—as the core of wise judgment.

For psychologists, this suggests that strong moral convictions alone can sometimes impair wisdom by fostering rigidity or polarization. The findings support a shift in ethics training, supervision, and professional development toward cultivating reflective, context-sensitive thinking. Rather than relying on standardized assessments or fixed values, fostering metacognitive skills may better prepare psychologists and their clients to navigate complex, high-stakes decisions with wisdom and flexibility.

Tuesday, July 15, 2025

Medical AI and Clinician Surveillance — The Risk of Becoming Quantified Workers

Cohen, I. G., Ajunwa, I., & Parikh, R. B. (2025).
New England Journal of Medicine.
Advance online publication.

Here is an excerpt:

There are several ways in which AI-based monitoring tools designed to benefit patients and clinicians might be used for clinician surveillance. First, ambient AI scribe tools, which transcribe and interpret patient and clinician speech to generate a structured note, have been rapidly adopted with a goal of reducing the burden associated with documentation and improving documentation accuracy. But ambient dictation systems introduce new capabilities for monitoring clinicians. By analyzing speech patterns, sentiment, and content, health care systems could use AI scribes to assess how often clinicians’ recommendations deviate from institutional guidelines.

In addition, these systems could detect “efficiency outliers” — clinicians who spend more time conversing with patients than employers consider ideal, at the expense of conducting new-patient visits or more total visits. Ambient monitoring is especially worrisome, given cases of employers terminating the contracts of physicians who didn’t meet visit-time expectations. Akin to automated quality-improvement dashboards for tracking adherence to chronic-disease–management standards, AI models may generate performance scores on the basis of adherence to scripted protocols, average time spent with each patient, or degree of shared decision making, which could be inferred with the use of linguistic analysis. Even if these metrics are established to support quality-improvement goals, hospitals and health care systems could leverage them for evaluations of clinicians or performance-based reimbursement adjustments.

Here are some thoughts:

This article is important to psychologists as it explores the psychological and ethical ramifications of AI-driven surveillance in healthcare, which parallels concerns in mental health practice. The quantification of clinicians through tools like ambient scribes and communication analytics threatens professional autonomy, potentially leading to burnout, stress, and reduced job satisfaction—key areas of study in occupational and health psychology. Additionally, the tension between algorithmic conformity and individualized care mirrors challenges in therapeutic settings, where standardized protocols may conflict with personalized treatment approaches. Psychologists can contribute expertise in human behavior, workplace dynamics, and ethical frameworks to advocate for balanced AI integration that prioritizes clinician well-being and patient-centered care. The article also highlights equity issues, as surveillance may disproportionately affect marginalized clinicians, aligning with psychology’s focus on systemic inequities.

Monday, July 14, 2025

Promises and pitfalls of large language models in psychiatric diagnosis and knowledge tasks

Bang, C.-B., Jung, Y.-C., et al. (2025).
The British Journal of Psychiatry,
226(4), 243–244.

Abstract:

This study evaluates the performance of five large language models (LLMs), including GPT-4, in psychiatric diagnosis and knowledge tasks using a zero-shot approach. Compared to 11 psychiatry residents, GPT-4 demonstrated superior accuracy in diagnostic (F1 score: 63.41% vs. 47.43%) and knowledge tasks (85.05% vs. 62.01%). However, GPT-4 exhibited higher comorbidity error rates (30.48% vs. 0.87%), suggesting limitations in contextual understanding. When residents received GPT-4 guidance, their performance improved significantly without increasing critical errors. The findings highlight the potential of LLMs as clinical aids but underscore the need for careful integration to preserve human expertise and mitigate risks like over-reliance. Future research should compare LLMs with board-certified psychiatrists and explore multifaceted diagnostic frameworks.
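
For context on the headline metric: the F1 score is the harmonic mean of precision and recall. A small worked sketch with invented confusion counts (not the study's data) shows how a figure near GPT-4's reported 63.41% arises:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts: 63 correct diagnoses, 35 false positives, 38 misses.
print(round(f1_score(63, 35, 38), 4))  # ~0.6332, near the reported 63.41%
```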

Here are some thoughts:

For psychologists, these findings underscore the importance of balancing AI-assisted efficiency with human judgment. While LLMs could serve as valuable training aids or supplemental tools, their limitations emphasize the irreplaceable role of psychologists in interpreting complex patient narratives, cultural factors, and individualized care. Additionally, the study raises ethical considerations about over-reliance on AI, urging psychologists to maintain rigorous critical thinking and therapeutic rapport. Ultimately, this research calls for a thoughtful, evidence-based approach to integrating AI into mental health practice—one that leverages technological advancements while preserving the human elements essential to effective psychological care.

Sunday, July 13, 2025

ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it

Anthony Cuthbertson
The Independent
Originally posted 6 July 25

Here is an excerpt:

“There have already been deaths from the use of commercially available bots,” they noted. “We argue that the stakes of LLMs-as-therapists outweigh their justification and call for precautionary restrictions.”

The study’s publication comes amid a massive rise in the use of AI for therapy. Writing in The Independent last week, psychotherapist Caron Evans noted that a “quiet revolution” is underway with how people are approaching mental health, with artificial intelligence offering a cheap and easy option to avoid professional treatment.

“From what I’ve seen in clinical supervision, research and my own conversations, I believe that ChatGPT is likely now to be the most widely used mental health tool in the world,” she wrote. “Not by design, but by demand.”

The Stanford study found that the dangers involved with using AI bots for this purpose arise from their tendency to agree with users, even if what they’re saying is wrong or potentially harmful. This sycophancy is an issue that OpenAI acknowledged in a May blog post, which detailed how the latest ChatGPT had become “overly supportive but disingenuous”, leading to the chatbot “validating doubts, fueling anger, urging impulsive decisions, or reinforcing negative emotions”.

While ChatGPT was not specifically designed to be used for this purpose, dozens of apps have appeared in recent months that claim to serve as an AI therapist. Some established organisations have even turned to the technology – sometimes with disastrous consequences. In 2023, the National Eating Disorders Association in the US was forced to shut down its AI chatbot Tessa after it began offering users weight loss advice.


Here are some thoughts:

The article warns that AI chatbots like ChatGPT are increasingly being used for mental health support, often with dangerous consequences. A Stanford study found that these chatbots can validate harmful thoughts, reinforce negative emotions, and provide unsafe information—escalating crises like suicidal ideation, mania, and psychosis. Real-world cases include a Florida man with schizophrenia who became obsessed with an AI-generated persona and later died in a police confrontation. Experts warn of a phenomenon called “chatbot psychosis,” where AI interactions intensify delusions in vulnerable individuals. Despite growing awareness, OpenAI has not adequately addressed the risks, and researchers call for urgent restrictions on using AI as a therapeutic tool. While companies like Meta see AI as the future of mental health care, critics stress that more data alone won't solve the problem, and current safeguards are insufficient.

Saturday, July 12, 2025

Brain-Inspired Affective Empathy Computational Model and Its Application on Altruistic Rescue Task

Feng, H., Zeng, Y., & Lu, E. (2022).
Frontiers in Computational Neuroscience,
16, 784967.

Abstract

Affective empathy is an indispensable ability for humans and other species' harmonious social lives, motivating altruistic behavior, such as consolation and aid-giving. How to build an affective empathy computational model has attracted extensive attention in recent years. Most affective empathy models focus on the recognition and simulation of facial expressions or emotional speech of humans, namely Affective Computing. However, these studies lack the guidance of neural mechanisms of affective empathy. From a neuroscience perspective, affective empathy is formed gradually during the individual development process: experiencing one's own emotions, forming the corresponding Mirror Neuron System (MNS), and then understanding the emotions of others through the mirror mechanism. Inspired by this neural mechanism, we constructed a brain-inspired affective empathy computational model that contains two submodels: (1) We designed an Artificial Pain Model inspired by the Free Energy Principle (FEP) to simulate the pain generation process in living organisms. (2) We built an affective empathy spiking neural network (AE-SNN) that simulates the mirror mechanism of MNS and has self-other differentiation ability. We apply the brain-inspired affective empathy computational model to the pain empathy and altruistic rescue task to achieve the rescue of companions by intelligent agents. To the best of our knowledge, our study is the first one to reproduce the emergence process of mirror neurons and anti-mirror neurons in the SNN field. Compared with traditional affective empathy computational models, our model is more biologically plausible, and it provides a new perspective for achieving artificial affective empathy, which has special potential for the social robots field in the future.
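
The AE-SNN itself is specialized, but its basic building block is the spiking neuron. As a generic illustration only, here is a standard leaky integrate-and-fire neuron in Python; every parameter is arbitrary and none of this is the authors' code:

```python
import numpy as np

# Standard leaky integrate-and-fire (LIF) neuron; all constants are arbitrary.
dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spikes = v_rest, []
rng = np.random.default_rng(2)

for t in range(200):
    i_in = rng.uniform(0.0, 0.12)          # noisy input current
    v += (dt / tau) * (v_rest - v) + i_in  # leak toward rest, integrate input
    if v >= v_thresh:                      # threshold crossing emits a spike
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
```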

Here are some thoughts:

This article is significant because it highlights a growing effort to imbue machines with complex human-like experiences and behaviors, such as pain and altruism—traits that are deeply rooted in human psychology and evolution. By attempting to program pain, researchers are not merely simulating a sensory reaction but exploring how discomfort or negative feedback might influence learning, decision-making, and self-preservation in AI systems.

This has profound psychological implications, as it touches on how emotions and aversive experiences shape behavior and consciousness in humans. Similarly, programming altruism raises questions about the nature of empathy, cooperation, and moral reasoning—core areas of interest in social and cognitive psychology. Understanding how these traits can be modeled in AI helps psychologists explore the boundaries of machine autonomy, ethical behavior, and the potential consequences of creating entities that mimic human emotional and moral capacities. The broader implication is that this research challenges traditional psychological concepts of mind, consciousness, and ethics, while also prompting critical discussions about how such AI systems might interact with and influence human societies in the future.

Friday, July 11, 2025

Artificial intelligence in psychological practice: Applications, ethical considerations, and recommendations

Hutnyan, M., & Gottlieb, M. C. (2025).
Professional Psychology: Research and Practice.
Advance online publication.

Abstract

Artificial intelligence (AI) systems are increasingly relied upon in the delivery of health care services traditionally provided solely by humans, and the widespread use of AI in the routine practice of professional psychology is on the horizon. It is incumbent on practicing psychologists to be prepared to effectively implement AI technologies and engage in thoughtful discourse regarding the ethical and responsible development, implementation, and regulation of these technologies. This article provides a brief overview of what AI is and how it works, a description of its current and potential future applications in professional practice, and a discussion of the ethical implications of using AI systems in the delivery of psychological services. Applications of AI technologies in key areas of clinical practice are addressed, including assessment and intervention. Using the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017) as a framework, anticipated ethical challenges across five domains—harm and nonmaleficence, autonomy and informed consent, fidelity and responsibility, privacy and confidentiality, and bias, respect, and justice—are discussed. Based on these challenges, provisional recommendations for psychologists are provided.

Impact Statement

This article provides an overview of artificial intelligence (AI) and how it works, describes current and developing applications of AI in the practice of professional psychology, and explores the potential ethical challenges of using these technologies in the delivery of psychological services. The use of AI in professional psychology has many potential benefits, but it also has drawbacks; ethical psychologists are wise to carefully consider their use of AI in practice.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.


Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.
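For readers unfamiliar with the measure, temporal binding is typically quantified from participants' estimates of the interval between their action (a keypress) and its outcome (a tone), with shorter perceived intervals taken to index a stronger sense of agency. A minimal sketch of the comparison, using made-up numbers purely for illustration:

```python
import statistics

# Hypothetical interval estimates (ms) between keypress and tone;
# shorter perceived intervals are read as a stronger sense of agency.
free_estimates    = [420, 450, 390, 410, 440]   # freely chosen actions
coerced_estimates = [540, 580, 510, 555, 530]   # actions ordered by an authority

free_mean    = statistics.mean(free_estimates)
coerced_mean = statistics.mean(coerced_estimates)

# A positive (coerced - free) difference means longer perceived intervals,
# i.e., weaker binding and a reduced sense of agency under coercion.
print(f"Binding difference: {coerced_mean - free_mean:.0f} ms")
```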

Wednesday, July 9, 2025

Management of Suicidal Thoughts and Behaviors in Youth: A Systematic Review

Sim, L., Wang, Z., et al. (2025).
Prepared by the Mayo Clinic Evidence-based Practice Center.

Abstract

Background: Suicide is a leading cause of death in young people and an escalating public health crisis. We aimed to assess the effectiveness and harms of available treatments for suicidal thoughts and behaviors in youths at heightened risk for suicide. We also aimed to examine how social determinants of health, racism, disparities, care delivery methods, and patient demographics affect outcomes.

Methods: We conducted a systematic review and searched several databases including MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and others from January 2000 to September 2024. We included randomized clinical trials (RCTs), comparative observational studies, and before-after studies of psychosocial interventions, pharmacological interventions, neurotherapeutics, emerging therapies, and combination therapies. Eligible patients were youths (aged 5 to 24 years) who had a heightened risk for suicide, including youths who had experienced suicidal ideation, prior attempts, hospital discharge for mental health treatment, or command hallucinations; were identified as high risk on validated questionnaires; or were from other at-risk groups. Pairs of independent reviewers selected and appraised studies. Findings were synthesized narratively.

Results: We included 65 studies reporting on 14,534 patients (33 RCTs, 13 comparative observational studies, and 19 before-after studies). Psychosocial interventions identified from the studies comprised psychotherapy interventions (33 studies: Cognitive Behavior Therapy, Dialectical Behavior Therapy, Collaborative Assessment and Management of Suicidality, Dynamic Deconstructive Psychotherapy, Attachment-Based Family Therapy, and Family-Focused Therapy), acute (i.e., 1 to 4 sessions/contacts) psychosocial interventions (19 studies: acute safety planning, family-based crisis management, motivational interviewing crisis interventions, continuity of care following crisis, and brief adjunctive treatments), and school/community-based psychosocial interventions (13 studies: social network interventions, school-based skills interventions, suicide awareness/gatekeeper programs, and community-based, culturally tailored adjunct programs). For most categories of psychotherapy (except DBT), acute interventions, and school/community-based interventions, the strength of evidence was insufficient, leaving uncertainty about effects on suicidal thoughts or attempts. None of the studies evaluated adverse events associated with the interventions. The evidence base on pharmacological treatment for suicidal youths is largely nonexistent at present. No eligible study evaluated neurotherapeutics or emerging therapies.

Conclusion: The current evidence on available interventions intended for youths at heightened risk of suicide is uncertain. Medication, neurotherapeutics, and emerging therapies remain unstudied in this population. Given that most treatments were adapted from adult protocols that may not fit the developmental and contextual experience of adolescents or younger children, this limited evidence base calls for the development of novel, developmentally and trauma-informed treatments, as well as multilevel interventions to address the rising suicide risk in youths.

Tuesday, July 8, 2025

Behavioral Ethics: Ethical Practice Is More Than Memorizing Compliance Codes

Cicero, F. R. (2021).
Behavior Analysis in Practice, 14(4), 1169–1178.

Abstract

Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can trigger unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.

Here are some thoughts:

This article argues that ethical practice requires more than memorizing compliance codes, as professionals aware of such codes still commit ethical breaches. Behavioral ethics suggests that environmental and situational variables often evoke and maintain unethical decisions, conceptualizing these decisions as operant behavior. Thus, knowledge of ethical codes alone is insufficient to prevent unethical practice; an assessment of environmental influences is necessary. The paper translates behavioral ethics constructs like self-serving bias, incrementalism, framing, obedience to authority, conformity bias, and overconfidence bias into behavior-analytic principles such as reinforcement, shaping, motivating operations, and stimulus control. This perspective shifts the focus from blaming individuals towards analyzing environmental factors that prompt ethical breaches, advocating for proactive assessment to support ethical behavior.

Understanding these concepts is vital for psychologists because they too are subject to environmental pressures that can lead to unethical actions, despite ethical training. The article highlights that ethical knowledge does not always translate to ethical behavior, emphasizing that situational factors often play a more significant role. Psychologists must recognize subtle influences such as the gradual normalization of unethical actions (incrementalism), the impact of how situations are described (framing), pressures from authority figures, and conformity to group norms, as these can all compromise ethical judgment. An overconfidence in one's own ethical standing can further obscure these influences. By applying a behavior-analytic lens, psychologists can better identify and mitigate these environmental risks, fostering a culture of proactive ethical assessment within their practice and institutions to safeguard clients and the profession.

Monday, July 7, 2025

Subconscious Suggestion

Ferketic, M. (2025, forthcoming).

Abstract

Subconscious suggestion is a silent but pervasive force shaping perception, decision-making, and attentional structuring beneath awareness. Operating as internal impressive action, it passively introduces impulses, biases, and associative framings into consciousness, subtly guiding behavior without volitional approval. Like hypnotic suggestion, it does not dictate action; it attempts to compel through motivational pull, influencing perception and intent through saliency and potency gradients. Unlike previous theories that depict subconscious influence as abstract or deterministic, this work presents a novel structured, mechanistic, operational model of function, demonstrating from first principles how subconscious suggestion disperses influence into awareness, interacts with attentional deployment, and negotiates attentional sovereignty. Additionally, it frames free will not as exemption from subconscious force, but as mastery of its regulation, with autonomy emerging from the ability to recognize, refine, and command suggestive forces rather than be unconsciously governed by them.

Here are some thoughts:

Subconscious suggestion, as detailed in the article, is a fundamental cognitive mechanism that shapes perception, attention, and behavior beneath conscious awareness. It operates as internal impressive action—passively introducing impulses, biases, and associative framings into consciousness, subtly guiding decisions without direct volitional control. Unlike deterministic models of unconscious influence, this framework presents subconscious suggestion as a structured, mechanistic process that competes for attention through saliency and motivational potency gradients. It functions much like a silent internal hypnotist, not dictating action but attempting to compel through perceptual framing and emotional nudges.

For practicing psychologists, understanding this model is crucial—it provides insight into how automatic cognitive processes contribute to habit formation, emotional regulation, motivation, and decision-making. It reframes free will not as exemption from subconscious forces, but as mastery over them, emphasizing the importance of attentional sovereignty and volitional override in clinical interventions. This knowledge equips psychologists to better identify, assess, and guide clients in managing subconscious influences, enhancing therapeutic outcomes across conditions such as addiction, anxiety, compulsive behaviors, and maladaptive thought patterns.
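The abstract does not spell out its formalism, but one toy way to picture "saliency and potency gradients" competing for attention, with volitional regulation as a gain on the goal-consistent option, is sketched below. Every name and number is hypothetical; this stands in for the general idea of regulated competition, not Ferketic's actual model.

```python
import math

def softmax(scores):
    """Convert competition scores into selection probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical suggestions, each with a saliency (perceptual prominence)
# and a potency (motivational pull); both shape the pull on attention.
suggestions = {
    "check phone":    (2.0, 1.5),
    "return to work": (1.0, 0.8),
    "get a snack":    (1.4, 1.2),
}

# Volitional regulation modeled as an extra gain on the goal-consistent
# option; "attentional sovereignty" is a tunable parameter here.
volitional_gain = {"return to work": 2.5}

names = list(suggestions)
scores = [sal * pot + volitional_gain.get(name, 0.0)
          for name, (sal, pot) in suggestions.items()]

for name, p in zip(names, softmax(scores)):
    print(f"{name}: {p:.2f}")
```

With the gain in place, the goal-consistent option wins despite its lower raw saliency; set the gain to zero and the most salient suggestion dominates, which loosely mirrors the article's framing of autonomy as regulation rather than exemption.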

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness, specifically sharing similar moral judgments or decision-making patterns, is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.