Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, July 29, 2025

Moral Learning and Decision-Making Across the Lifespan

Lockwood, P. L., Van Den Bos, W., & Dreher, J. (2024).
Annual Review of Psychology.

Abstract

Moral learning and decision-making are crucial throughout our lives, from infancy to old age. Emerging evidence suggests that there are important differences in learning and decision-making in moral situations across the lifespan, and these are underpinned by co-occurring changes in the use of model-based values and theory of mind. Here, we review the decision neuroscience literature on moral choices and moral learning considering four key concepts. We show how in the earliest years, a sense of self/other distinction is foundational. Sensitivity to intention versus outcome is crucial for several moral concepts and is most similar in our earliest and oldest years. Across all ages, basic shifts in the influence of theory of mind and model-free and model-based learning support moral decision-making. Moving forward, a computational approach to key concepts of morality can help provide a mechanistic account and generate new hypotheses to test across the whole lifespan.

Here are some thoughts:

The article highlights that moral learning and decision-making evolve dynamically throughout the lifespan, with distinct patterns emerging at different developmental stages. From early childhood to old age, individuals shift from rule-based moral reasoning toward more complex evaluations that integrate intentions, outcomes, and social context.

Understanding these developmental trajectories is essential for psychologists, as it informs age-appropriate interventions and expectations regarding moral behavior. Neuroscientific findings reveal that key brain regions such as the ventromedial prefrontal cortex (vmPFC), temporoparietal junction (TPJ), and striatum play critical roles in processing empathy, fairness, guilt, and social norms. These insights help explain how neurological impairments or developmental changes can affect moral judgment, which is particularly useful in clinical and neuropsychological settings.

Social influence also plays a significant role, especially during adolescence, when peer pressure and reputational concerns strongly shape moral decisions. This has practical implications for therapists working with youth, informing strategies to build resilience against antisocial influences and to promote prosocial behaviors.

The research further explores how deficits in moral learning are linked to antisocial behaviors, psychopathy, and conduct disorders, offering valuable perspectives for forensic psychology and clinical intervention planning.

Lastly, the article emphasizes the importance of cultural sensitivity, noting that moral norms vary across societies and change over time. For practicing psychologists, this underscores the need to adopt culturally informed approaches when assessing and treating clients from diverse backgrounds.
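The review's contrast between model-free and model-based learning lends itself to a concrete illustration. The sketch below is a minimal, hypothetical example rather than the authors' model: a model-free learner caches action values from experienced reward alone, while a model-based learner recomputes values by planning over an explicit transition model. All task parameters here are invented for illustration.

```python
import random

# Hypothetical one-shot task: choose action 0 or 1, each leading
# stochastically to state "A" or "B" with a known reward.
TRANSITIONS = {0: {"A": 0.7, "B": 0.3},   # P(state | action 0)
               1: {"A": 0.3, "B": 0.7}}   # P(state | action 1)
REWARDS = {"A": 1.0, "B": 0.0}

# Model-free: cache action values directly from experienced reward.
q_mf = {0: 0.0, 1: 0.0}
alpha = 0.1  # learning rate

def model_free_update(action, reward):
    q_mf[action] += alpha * (reward - q_mf[action])

# Model-based: evaluate actions by planning over the known transition model.
def model_based_values():
    return {a: sum(p * REWARDS[s] for s, p in TRANSITIONS[a].items())
            for a in TRANSITIONS}

for _ in range(500):
    action = random.choice([0, 1])
    state = "A" if random.random() < TRANSITIONS[action]["A"] else "B"
    model_free_update(action, REWARDS[state])

print("model-free (learned):", q_mf)
print("model-based (planned):", model_based_values())
```

After enough experience the cached model-free values approximate the planned model-based ones; the developmental point in the review is that the relative weight placed on each system shifts across the lifespan.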

Monday, July 28, 2025

The Law Meets Psychological Expertise: Eight Best Practices to Improve Forensic Psychological Assessment

Neal, T. M., Martire, K. A., et al. (2022).
Annual Review of Law and Social Science, 
18(1), 169–192.


Abstract
 
We review the state of forensic mental health assessment. The field is in much better shape than in the past; however, significant problems of quality remain, with much room for improvement. We provide an overview of forensic psychology's history and discuss its possible future, with multiple audiences in mind. We distill decades of scholarship from and about fundamental basic science and forensic science, clinical and forensic psychology, and the law of expert evidence into eight best practices for the validity of a forensic psychological assessment. We argue these best practices should apply when a psychological assessment relies on the norms, values, and esteem of science to inform legal processes. The eight key considerations include (a) foundational validity of the assessment; (b) validity of the assessment as applied; (c) management and mitigation of bias; (d) attention to quality assurance; (e) appropriate communication of data, results, and opinions; (f) explicit consideration of limitations and assumptions; (g) weighing of alternative views or disagreements; and (h) adherence with ethical obligations, professional guidelines, codes of conduct, and rules of evidence.

Here are some thoughts:

This article outlines eight best practices designed to enhance the quality and validity of forensic psychological assessments. It provides a historical context for forensic psychology, discussing its evolution and future directions. Drawing on extensive research from basic science, forensic science, clinical and forensic psychology, and the law of expert evidence, the authors present key considerations for psychologists conducting assessments in legal settings. These practices include ensuring foundational and applied validity, managing biases, implementing quality assurance, communicating data and opinions appropriately, explicitly considering limitations, weighing alternative perspectives, and adhering to ethical guidelines. The article underscores the importance of these best practices to improve the reliability and scientific rigor of psychological expertise within the legal system.

Sunday, July 27, 2025

Meta-analysis of risk factors for suicide after psychiatric discharge and meta-regression of the duration of follow-up

Tai, A., Pincham, H., Basu, A., & Large, M. (2025).
Australian and New Zealand Journal of Psychiatry,
48674251348372. Advance online publication.

Abstract

Background: Rates of suicide following discharge from psychiatric hospitals are extraordinarily high in the first week post-discharge and then decline steeply over time. The aim of this meta-analysis is to evaluate the strength of risk factors for suicide after psychiatric discharge and to investigate the association between the strength of risk factors and duration of study follow-up.

Methods: A PROSPERO-registered meta-analysis of observational studies was performed in accordance with PRISMA guidelines. Post-discharge suicide risk factors reported five or more times were synthesised using a random-effects model. Mixed-effects meta-regression was used to examine whether the strength of suicide risk factors could be explained by duration of study follow-up.

Results: Searches located 83 primary studies. From this, 63 risk estimates were meta-analysed. The strongest risk factors were previous self-harm (odds ratio = 2.75, 95% confidence interval = [2.37, 3.19]), suicidal ideation (odds ratio = 2.15, 95% confidence interval = [1.73, 2.68]), depressive symptoms (odds ratio = 1.84, 95% confidence interval = [1.48, 2.30]), and high-risk categorisation (odds ratio = 7.65, 95% confidence interval = [5.48, 10.67]). Significantly protective factors included age ⩽30, age ⩾65, post-traumatic stress disorder, and dementia. The effect sizes for the strongest post-discharge suicide risk factors did not decline over longer periods of follow-up.

Conclusion: The effect sizes of post-discharge suicide risk factors were generally modest, suggesting that clinical risk factors may have limited value in distinguishing between high-risk and low-risk groups. The highly elevated rates of suicide immediately after discharge and their subsequent decline remain unexplained.
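As a rough illustration of the random-effects synthesis described in the Methods, the sketch below pools log odds ratios with a DerSimonian-Laird estimator. The study-level numbers are invented for illustration and are not the paper's data.

```python
import math

# Hypothetical (made-up) odds ratios with 95% CIs from individual studies;
# these are NOT the values reported in the paper.
studies = [(2.4, 1.6, 3.6), (3.1, 2.0, 4.8), (2.2, 1.3, 3.7), (3.5, 2.1, 5.8)]

# Work on the log scale; derive each variance from the CI width.
y = [math.log(or_) for or_, lo, hi in studies]
v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for _, lo, hi in studies]

# Fixed-effect weights and heterogeneity statistic Q.
w = [1 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled estimate, and confidence interval.
w_re = [1 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled OR = {math.exp(y_re):.2f}, "
      f"95% CI [{math.exp(y_re - 1.96 * se_re):.2f}, "
      f"{math.exp(y_re + 1.96 * se_re):.2f}]")
```

The same machinery extends to the mixed-effects meta-regression mentioned in the Methods by adding duration of follow-up as a study-level moderator, which is how the authors test whether effect sizes decline over longer follow-up.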

Saturday, July 26, 2025

Reimagining "Multiple Relationships" in Psychotherapy: Decolonial/Liberation Psychologies and Communal Selfhood

Lacerda-Vandenborn, E., et al. (2025).
American Psychologist, 80(4), 522–534.

Abstract

Promoting decolonial and liberation psychologies (DLPs) requires psychologists to critically interrogate taken-for-granted assumptions pertaining to psychotherapy relationships. One fruitful area of interrogation surrounds conceptualizations and practices concerning multiple relationships (MRs), wherein a psychologist and client share another form of relationship outside of the psychotherapy context. The prevention or minimization of MRs is widely viewed as an ethical imperative, codified within professional ethics codes and further encouraged through insurance and liability practices. From the standpoint of DLPs, the profession has not adequately grasped the extent to which psychotherapy relationships reflect individualistic selves that facilitate psychologists’ serving, however unwittingly, as “handmaidens of the status quo.” We present three practitioner testimonios from among our authors—Indigenous, Muslim, and lesbian, gay, bisexual, transgender, queer, questioning, and other sexual/gender minorities—to concretely demonstrate how the professional and ethical framing around this ubiquitous practice within psychology has served to flatten human relationships within a colonizing frame. We then discuss three problematic assumptions concerning MRs that are reflected in the American Psychological Association’s Ethics Code. We offer communal selfhood, a theoretical framework that aligns with DLPs, as a potential space for understanding and reframing MRs. We conclude with general recommendations for conceptualizing therapeutic relationships without recourse to a problematic conceptualization of MRs.

Public Significance Statement

Decolonial and liberation psychologies challenge conventional thinking concerning “multiple relationships” in psychotherapy. Discouragement of multiple relationships reflects an individualistic ideology and risk-aversive managerialism, protecting the profession more than promoting public welfare. Professional and ethical reforms, in line with a “communal selfhood” framework, would reinforce the profession’s commitments toward antiracism and anticolonialism.

Here are some thoughts:

The paper critically examines the traditional ethical stance on "multiple relationships" (MRs) in psychotherapy, arguing that the prevailing individualistic, risk-averse approach is often unsuitable for diverse communities. The article uniquely applies decolonial and liberation psychologies (DLPs) to challenge these Western-centric norms, advocating for a "communal selfhood" framework. It stands out by featuring compelling practitioner testimonios from Indigenous, Muslim, and LGBTQ+ psychologists, illustrating how rigid MR prohibitions can be detrimental in community-oriented contexts where interconnected relationships are vital for trust and healing. The article not only critiques existing guidelines but also offers recommendations for systemic reform, aiming to foster antiracism and anticolonialism within the psychology profession.

Friday, July 25, 2025

Crossing the Line: Daubert, Dual Roles, and the Admissibility of Forensic Mental Health Testimony

Gordon, S. G. (2016).
SSRN Electronic Journal.
Scholarly Works. 969.

Abstract

Psychiatrists and other mental health professionals often testify as forensic experts in civil commitment and criminal competency proceedings. When an individual clinician assumes both a treatment and a forensic role in the context of a single case, however, that clinician forms a dual relationship with the patient—a practice that creates a conflict of interest and violates professional ethical guidelines. The court, the parties, and the patient are all affected by this conflict and the biased testimony that may result from dual relationships. When providing forensic testimony, the mental health professional’s primary duty is to the court, not to the patient, and she has an obligation to give objective and truthful testimony. But this testimony can result in the patient’s detention or punishment, a legal outcome that implicates the mental health professional’s corresponding obligation to “do no harm” to the patient. Moreover, the conflict of interest created by a dual relationship can affect the objectivity and reliability of forensic testimony.

A dual clinical and forensic relationship with a single patient is contrary to quality patient care, and existing clinical and forensic ethical guidelines strongly discourage the practice. Notwithstanding the mental health community’s general consensus about the impropriety of the practice, many courts do not question the mental health professional’s ability to provide forensic testimony for a patient with whom she has a simultaneous clinical relationship. Moreover, some state statutes require or encourage clinicians at state-run facilities to engage in these multiple roles. This Article argues that the inherent conflict created by these dual roles does not provide a reliable basis for forensic mental health testimony under Federal Rule of Evidence 702 and should not be admitted as reliable expert testimony by courts. Because dual relationships are often initiated due to provider shortages and the unavailability of neutral forensic examiners, this Article will also discuss the use of telemedicine as a way to provide forensic evaluations in underserved areas, especially those where provider shortages have prompted mental health professionals to engage in dual clinical and forensic roles. Finally, this Article argues that courts should exercise their powers more broadly under Federal Rule of Evidence 706 to appoint neutral and independent mental health experts to conduct forensic evaluations in civil commitment and criminal competency proceedings.

Here are some thoughts:

The article explores the ethical and legal complexities surrounding mental health professionals who serve in dual roles—both as clinicians and forensic evaluators. The article highlights how these dual relationships can compromise objectivity and reliability in forensic testimony, a concern widely recognized within the psychiatric and psychological communities. Despite professional ethical codes discouraging such practices, courts often fail to exclude testimony from clinicians offering forensic opinions about their own patients. This inconsistency is particularly problematic under the Daubert standard, which mandates that trial judges act as gatekeepers to ensure expert testimony is both relevant and reliable. The piece argues that violating professional ethical norms—such as those against dual relationships—should be considered when evaluating the admissibility of forensic mental health testimony, especially since these violations are seen as markers of unreliability by the relevant scientific community. Additionally, the article touches on the practical implications of these dual role dilemmas, including the impact on patient care, legal outcomes, and the integrity of the judicial process. It concludes with a call for courts to take professional ethics more seriously when assessing the admissibility of expert testimony in forensic mental health cases.

Thursday, July 24, 2025

The uselessness of AI ethics

Munn, L. (2022).
AI and Ethics, 3(3), 869–877.

Abstract

As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.

Here are some thoughts:

This paper is important for multiple reasons. First, it critically examines how artificial intelligence—increasingly embedded in areas like healthcare, education, law enforcement, and social services—can perpetuate racial, gendered, and socioeconomic biases, often under the guise of neutrality and objectivity. These systems can influence or even determine outcomes in mental health diagnostics, hiring practices, criminal justice risk assessments, and educational tracking, all of which have profound psychological implications for individuals and communities. Psychologists, particularly those working in clinical, organizational, or forensic fields, must understand how these technologies shape behavior, identity, and access to resources.

Second, the article highlights how ethical principles guiding AI development are often vague, inconsistently applied, and disconnected from real-world impacts. This raises concerns about the psychological effects of deploying systems that claim to promote fairness or well-being but may actually deepen inequalities or erode trust in institutions. For psychologists involved in policy-making or advocacy, this underscores the need to push for more robust, evidence-based frameworks that consider human behavior, cultural context, and systemic oppression.

Finally, the piece calls attention to the broader sociopolitical systems in which AI operates, urging a shift from abstract ethical statements to concrete actions that address structural inequities. This aligns with growing interest in community psychology and critical approaches that emphasize social justice and the importance of centering marginalized voices. Ultimately, understanding the limitations and risks of current AI ethics frameworks allows psychologists to better advocate for humane, equitable, and psychologically informed technological practices.

Wednesday, July 23, 2025

Pharmacotherapy for post-traumatic stress disorder: systematic review and meta-analysis

Jia, Y., Ye, Z., et al. (2025).
Therapeutic Advances in Psychopharmacology,
15, 20451253251342628.

Abstract

Background: Post-traumatic stress disorder (PTSD) is a prevalent mental illness with a high disability rate. The neurobiological abnormalities in PTSD suggest that drug therapy may have certain therapeutic effects. According to the recommendations of clinical guidelines for PTSD, the current clinical preference is for selective serotonin reuptake inhibitors (SSRIs) or serotonin and norepinephrine reuptake inhibitors (SNRIs). Nevertheless, the efficacy of other types of drugs remains uncertain, which impacts the selection of personalized treatment for patients.

Objectives: The aim of this meta-analysis was to assess the efficacy and acceptability of drugs with different pharmacological mechanisms in alleviating PTSD symptoms by comparing the response rates and dropout rates of different drug treatment groups in randomized clinical trials.

Design: Systematic review and meta-analysis.

Methods: We searched and analyzed 52 reports that described the efficacy and acceptability of medication for PTSD. Among these, 49 trials used the dropout rate as an acceptability indicator, and 52 trials used the response rate as an efficacy indicator.

Results: In the 49 trials with the dropout rate as the indicator, the dropout rate was 29% (95% confidence interval, 0.26-0.33; n = 3870). In the 52 trials with the response rate as the indicator, the response rate was 39% (95% confidence interval, 0.33-0.45; n = 3808). After drug treatment, the core symptoms of PTSD were significantly improved. This meta-analysis indicated that there was no significant difference between antidepressants and antipsychotics in improving clinical symptoms and acceptability. However, antidepressants may have a slight advantage in efficacy, although with a higher dropout rate.

Conclusion: Drug treatment is an effective rehabilitation method for PTSD patients, and individualized drug management should be considered.

Plain language summary

The purpose of this study was to assess the acceptability and efficacy of all types of pharmacotherapeutic agents in reducing the symptoms of PTSD. In this systematic review and meta-analysis, the dropout and response rates of various pharmacotherapy groups reported by randomized clinical trials were compared. A total of 52 reports that described the acceptability and efficacy of PTSD pharmacotherapies were retrieved and analyzed. This meta-analysis supports the conclusion that antidepressants and antipsychotics do not differ significantly in improving clinical symptoms or in acceptability; however, antidepressants may have a slight advantage in efficacy, albeit with a higher dropout rate, so individualized drug management should be considered.
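The pooled dropout and response rates reported above are proportions rather than odds ratios; a common way to pool such rates is on the logit scale. The sketch below is a simplified inverse-variance version with invented trial counts, not the paper's data; the paper's random-effects analysis would additionally estimate a between-study variance term.

```python
import math

# Hypothetical (made-up) trials: (number of responders, sample size).
trials = [(30, 80), (45, 100), (25, 90), (60, 130)]

# Logit-transform each proportion; the variance of a logit proportion
# is approximately 1/x + 1/(n - x).
logits = [math.log(x / (n - x)) for x, n in trials]
variances = [1 / x + 1 / (n - x) for x, n in trials]

# Simple inverse-variance pooling (a random-effects model would add
# a between-study variance to each weight, as in the previous sketch).
weights = [1 / v for v in variances]
pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

print(f"pooled response rate = {inv_logit(pooled_logit):.2%}")
print(f"95% CI [{inv_logit(pooled_logit - 1.96 * pooled_se):.2%}, "
      f"{inv_logit(pooled_logit + 1.96 * pooled_se):.2%}]")
```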

Tuesday, July 22, 2025

Technology ethics assessment: Politicising the ‘Socratic approach.’

Sparrow, R. (2023).
Business Ethics, the Environment &
Responsibility, 32(2), 454–466.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.

Here are some thoughts:

This article is important to psychologists because it offers a structured, politically aware framework—the Socratic approach—for evaluating the ethical implications of technology. It emphasizes how technologies are not neutral but can reinforce power imbalances, deepen social inequalities, and reshape human behavior and relationships. For psychologists working in areas such as human-computer interaction, organizational behavior, or digital well-being, this tool supports critical reflection on how technological design influences users' autonomy, identity, and social dynamics. By integrating political dimensions into ethical assessment, the article encourages psychologists to consider broader societal impacts, including issues of justice, inclusion, and long-term consequences, making it especially relevant in an era of rapid technological change.

Monday, July 21, 2025

Emotion and deliberative reasoning in moral judgment.

Cummins, D. D., & Cummins, R. C. (2012).
Frontiers in Psychology, 3, 328.

Abstract

According to an influential dual-process model, a moral judgment is the outcome of a rapid, affect-laden process and a slower, deliberative process. If these outputs conflict, decision time is increased in order to resolve the conflict. Violations of deontological principles proscribing the use of personal force to inflict intentional harm are presumed to elicit negative affect which biases judgments early in the decision-making process. This model was tested in three experiments. Moral dilemmas were classified using (a) decision time and consensus as measures of system conflict and (b) the aforementioned deontological criteria. In Experiment 1, decision time was either unlimited or reduced. The dilemmas asked whether it was appropriate to take a morally questionable action to produce a “greater good” outcome. Limiting decision time reduced the proportion of utilitarian (“yes”) decisions, but contrary to the model’s predictions, (a) vignettes that involved more deontological violations logged faster decision times, and (b) violation of deontological principles was not predictive of decisional conflict profiles. Experiment 2 ruled out the possibility that time pressure simply makes people more likely to say “no.” Participants made a first decision under time constraints and a second decision under no time constraints. One group was asked whether it was appropriate to take the morally questionable action while a second group was asked whether it was appropriate to refuse to take the action. The results replicated those of Experiment 1 regardless of whether “yes” or “no” constituted a utilitarian decision. In Experiment 3, participants rated the pleasantness of positive visual stimuli prior to making a decision. Contrary to the model’s predictions, the number of deontological decisions increased in the positive affect rating group compared to a group that engaged in a cognitive task or a control group that engaged in neither task. These results are consistent with the view that early moral judgments are influenced by affect. But they are inconsistent with the view that (a) violations of deontological principles are predictive of differences in early, affect-based judgment or that (b) engaging in tasks that are inconsistent with the negative emotional responses elicited by such violations diminishes their impact.

Here are some thoughts:

This research investigates the role of emotion and cognitive processes in moral decision-making, testing a dual-process model that posits moral judgments arise from a conflict between rapid, affect-driven (System 1) and slower, deliberative (System 2) processes. Across three experiments, participants were presented with moral dilemmas involving utilitarian outcomes (sacrificing few to save many) and deontological violations (using personal force to intentionally harm), with decision times manipulated to assess how these factors influence judgment. The findings challenge the assumption that deontological decisions are always driven by fast emotional responses: while limiting decision time generally reduced utilitarian judgments, exposure to pleasant emotional stimuli unexpectedly increased deontological responses, suggesting that emotional context, not just negative affect from deontological violations, plays a significant role. Additionally, decisional conflict—marked by low consensus and long decision times—was not fully predicted by deontological criteria, indicating other factors influence moral judgment. Overall, the study supports a dual-process framework but highlights the complexity of emotion's role, showing that both utilitarian and deontological judgments can be influenced by affective states and intuitive heuristics rather than purely deliberative reasoning.