Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, March 18, 2025

Social evaluation by preverbal infants.

Hamlin, J. K., Wynn, K., & Bloom, P. (2007).
Nature, 450(7169), 557–559. 

Abstract

The capacity to evaluate other people is essential for navigating the social world. Humans must be able to assess the actions and intentions of the people around them, and make accurate decisions about who is friend and who is foe, who is an appropriate social partner and who is not. Indeed, all social animals benefit from the capacity to identify individual conspecifics that may help them, and to distinguish these individuals from others that may harm them. Human adults evaluate people rapidly and automatically on the basis of both behaviour and physical features1,2,3,4,5,6, but the ontogenetic origins and development of this capacity are not well understood. Here we show that 6- and 10-month-old infants take into account an individual’s actions towards others in evaluating that individual as appealing or aversive: infants prefer an individual who helps another to one who hinders another, prefer a helping individual to a neutral individual, and prefer a neutral individual to a hindering individual. These findings constitute evidence that preverbal infants assess individuals on the basis of their behaviour towards others. This capacity may serve as the foundation for moral thought and action, and its early developmental emergence supports the view that social evaluation is a biological adaptation.

Here are some thoughts:

Researchers have long debated whether babies are born with a sense of morality or develop it through experience. Initial studies suggested infants prefer helpful individuals, but recent research casts doubt on the idea of hardwired morality. A large replication study using video stimuli found that infants did not consistently favor pro-social figures.

Experts suggest that babies may need more time to develop strong moral impressions, and that subtle changes in research methods can influence infant behavior. Theories from Piaget and Kohlberg suggest moral reasoning evolves over time, requiring cognitive growth that babies have not yet reached. Cultural influences and parental guidance also play a significant role in shaping a child's moral compass.

Researchers are exploring new methods like eye-tracking and brain imaging to better understand infant responses. Some propose that innate compassion or empathy may exist, while others believe moral awareness develops through repeated exposure to caring acts. Large-scale, cross-cultural studies and new data collection methods may provide a fuller picture of early moral inclinations. The debate continues, with ongoing research aiming to understand how humans begin to judge right from wrong.

Monday, March 17, 2025

Deaths of Despair: A Major and Increasing Contributor to United States Deaths

Mejia, M. C., et al. (2024).
Advances in Preventive Medicine 
and Health Care, 7(2).

Abstract
Objective: The International Classification of Disease (ICD) assumes that each disease entity is distinct. The hypothesis that each disease entity may have similar underlying and contributory factors has led to the emerging concept of “deaths of despair.” Our objective was to explore temporal trends in the occurrence of United States (US) deaths of despair from 1999 to 2021.

Methods: We used the previously established definition as a constellation of 19 underlying causes: chronic hepatitis; liver fibrosis/cirrhosis; suicide/sequelae of suicide; poisoning (accidental or undetermined intent) by or exposure to nonopioid analgesics, antipyretics, antirheumatics, antiepileptics, sedative-hypnotics, antiparkinson and psychotropic drugs, narcotics, psychodysleptics, drugs acting on the central nervous system, and alcohol. We used mortality data for those 25 to 74 years of age from 1999 to 2021 to calculate annual percent changes (APC) as measures of effect size, and joinpoint regression to test for statistical significance. We used the US Centers for Disease Control and Prevention (CDC) Wide-Ranging Data for Epidemiologic Research (WONDER) and the Multiple Cause of Death files.

Results: Using this definition, deaths of despair were the fifth leading cause of US mortality in 2021. From 1999 to 2021, the APC for deaths of despair increased 2.5-fold among people aged 25 to 74 years.

Conclusions: Using this definition, deaths of despair would have been the fifth leading cause of death in the US in 2021. Healthcare providers should have an increased awareness of deaths of despair. Public health practitioners may consider new initiatives to prevent deaths of despair locally, regionally, and nationally.
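For readers unfamiliar with the annual percent change statistic the abstract relies on, here is a minimal sketch of how an APC is conventionally computed, assuming a log-linear trend in mortality rates. The rates below are invented illustrative values, not the study's data:

```python
import math

# Hypothetical mortality rates per 100,000 (illustrative only)
years = [1999, 2004, 2009, 2014, 2019, 2021]
rates = [30.0, 34.5, 40.2, 48.9, 61.0, 70.3]

def annual_percent_change(years, rates):
    """APC assumes log-linear growth, log(rate) = a + b*year,
    so APC = (exp(b) - 1) * 100. Slope b via ordinary least squares."""
    ys = [math.log(r) for r in rates]
    n = len(years)
    mx = sum(years) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
        sum((x - mx) ** 2 for x in years)
    return (math.exp(b) - 1) * 100

print(f"APC \u2248 {annual_percent_change(years, rates):.1f}% per year")
```

Joinpoint regression, which the authors use for significance testing, extends this idea by allowing the slope to change at estimated breakpoints, giving a separate APC for each segment of the trend.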

Here are some thoughts:

This research article examines the increasing trend of "deaths of despair" in the United States from 1999 to 2021, defining these deaths as those resulting from chronic hepatitis, liver cirrhosis, suicide, and poisonings related to substances like alcohol and drugs. Analyzing mortality data from the CDC, the study reveals a 2.5-fold increase in these deaths among individuals aged 25-74. In 2021, deaths of despair would have been the fifth leading cause of death in the U.S., surpassing cerebrovascular diseases, if categorized as such. The authors advocate for integrated strategies addressing both clinical and socioeconomic factors, including enhanced mental health services, and suggest considering a specific classification for deaths of despair in future ICD revisions.

This study underscores the urgent need for psychologists to broaden their approach to mental health care by directly addressing the socioeconomic factors contributing to despair, such as economic instability and lack of access to healthcare. By understanding the influence of these external factors, psychologists can better tailor interventions to build resilience in vulnerable populations. 

Sunday, March 16, 2025

Computational Approaches to Morality

Bello, P., & Malle, B. F. (2023).
In R. Sun (Ed.), Cambridge Handbook 
of Computational Cognitive Sciences
(pp. 1037-1063). Cambridge University Press.

Introduction

Morality regulates individual behavior so that it complies with community interests (Curry et al., 2019; Haidt, 2001; Hechter & Opp, 2001). Humans achieve this regulation by motivating and deterring certain behaviors through the imposition of norms – instructions of how one should or should not act in a particular context (Fehr & Fischbacher, 2004; Sripada & Stich, 2006) – and, if a norm is violated, by levying sanctions (Alexander, 1987; Bicchieri, 2006). This chapter examines the mental and behavioral processes that facilitate human living in moral communities and how these processes might be represented computationally and ultimately engineered in embodied agents.

Computational work on morality arises from two major sources. One is empirical moral science, which accumulates knowledge about a variety of phenomena of human morality, such as moral decision making, judgment, and emotions. Resulting computational work tries to model and explain these human phenomena. The second source is philosophical ethics, which has for millennia discussed moral principles by which humans should live. Resulting computational work is often labeled machine ethics, which is the attempt to create artificial agents with moral capacities reflecting one or more of the ethical theories. A brief discussion of these two sources will ground the subsequent discussion of computational morality.


Here are some thoughts:

This chapter examines computational approaches to morality, driven by two goals: modeling human moral cognition and creating artificial moral agents ("machine ethics"). It maps key moral phenomena – behavior, judgments, emotions, sanctions, and communication – arguing these are shaped by social norms rather than innate brain circuits. Norms are community instructions specifying acceptable/unacceptable behavior. The chapter explores philosophical ethics: deontology (duty-based ethics, exemplified by Kant, Rawls, Ross) and consequentialism (outcome-based ethics, particularly utilitarianism). It addresses computational challenges like scaling, conflicting preferences, and framing moral problems. Finally, it surveys rule-based approaches, case-based reasoning, reinforcement learning, and cognitive science perspectives in modeling moral decision-making.
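As a toy illustration of the rule-based family the chapter surveys, a norm can be represented as a mapping from a context and an action to a deontic status. The norms and action names below are invented for illustration, not drawn from the chapter:

```python
# Toy rule-based norm checker: each norm maps (context, action)
# to a deontic status, loosely in the spirit of the rule-based
# approaches to machine ethics surveyed in the chapter.
NORMS = {
    ("library", "shout"): "forbidden",
    ("library", "whisper"): "permitted",
    ("emergency", "shout"): "obligatory",
}

def evaluate(context, action, default="permitted"):
    """Return the deontic status of an action in a context; unlisted
    pairs fall back to a default status. Note that the choice of
    default (permissive vs. restrictive) is itself a normative
    assumption the designer must make."""
    return NORMS.get((context, action), default)

print(evaluate("library", "shout"))     # forbidden
print(evaluate("emergency", "shout"))   # obligatory
```

The hard problems the chapter discusses, such as scaling and conflicting norms, show up quickly in even this sketch: real contexts are not discrete labels, and nothing here adjudicates between two matching norms that disagree.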

Saturday, March 15, 2025

Understanding and supporting thinking and learning with generative artificial intelligence.

Agnoli, S., & Rapp, D. N. (2024).
Journal of Applied Research in Memory
and Cognition, 13(4), 495–499.

Abstract

Generative artificial intelligence (AI) is ubiquitous, appearing as large language model chatbots that people can query directly and collaborate with to produce output, and as authors of products that people are presented with through a variety of information outlets including, but not limited to, social media. AI has considerable promise for helping people develop expertise and for supporting expert performance, with a host of hedges and caveats to be applied in any related advocations. We propose three sets of considerations and concerns that may prove informative for theoretical discussions and applied research on generative AI as a collaborative thought partner. Each of these considerations is informed and inspired by well-worn psychological research on knowledge acquisition. They are (a) a need to understand human perceptions of and responses to AI, (b) the utility of appraising and supporting people’s control of AI, and (c) the importance of careful attention to the quality of AI output.

Here are some thoughts:

Generative AI, especially large language models (LLMs), can aid human thinking and learning by supporting knowledge acquisition and enhancing expert performance. However, realizing this potential requires attention to psychological factors.

Firstly, how humans perceive and respond to AI is crucial. User trust, beliefs, and prior AI experiences influence AI’s effectiveness as a collaborative thought partner. Future research should explore how these perceptions affect AI adoption and learning outcomes.

Secondly, control in human-AI interactions is vital for successful partnerships. Clear roles, expertise, and decision-making authority ensure productive collaboration, and empowering users to customize interactions enhances learning and builds trust.

Thirdly, AI output quality plays a central role in learning. Addressing inaccuracies, biases, and “hallucinations” ensures reliability. Research is needed to improve and evaluate AI-generated content, especially for education.

Lastly, the rapid AI evolution requires users to be adaptable and equipped with strong metacognitive skills. Metacognition—thinking about one’s thinking—is crucial for navigating AI interactions. Understanding how users process AI information and designing educational interventions to increase AI awareness are essential steps. By fostering critical thinking and self-regulation, users can better integrate AI-generated insights into their learning processes.

Generative AI holds promise for enhancing human thinking and learning, but its success depends on addressing human factors, ensuring output quality, and promoting adaptability. Integrating psychological insights and emphasizing metacognitive awareness can harness AI responsibly and effectively. This approach fosters a collaborative relationship between humans and AI, where technology augments intelligence without undermining autonomy, advancing knowledge acquisition and learning meaningfully.

Friday, March 14, 2025

Federal Agency Dedicated to Mental Illness and Addiction Faces Huge Cuts

Trump is Burning Down SAMHSA
SAMHSA Braces for 50% Staff Reduction

The New York Times
Originally posted March 13, 2025

The Substance Abuse and Mental Health Services Administration has already closed offices and could see staff numbers reduced by 50 percent.

Every day, Dora Dantzler-Wright and her colleagues distribute overdose reversal drugs on the streets of Chicago. They hold training sessions on using them and help people in recovery from drug and alcohol addiction return to their jobs and families.

They work closely with the federal government through an agency that monitors their productivity, connects them with other like-minded groups and dispenses critical funds that keep their work going.

But over the last few weeks, Ms. Wright’s phone calls and emails to Washington have gone unanswered. Federal advisers from the agency’s local office — who supervise her group, the Chicago Recovering Communities Coalition, as well as addiction programs throughout six Midwestern states and 34 tribes — are gone. “We just continue to do the work without any updates from the feds at all,” Ms. Wright said. “But we’re lost.”


Here is a summary:

The Substance Abuse and Mental Health Services Administration (SAMHSA), a federal agency addressing mental illness and addiction, is facing significant staff cuts, potentially up to 50%. This is causing concern among those who rely on the agency for support and funding, such as community organizations providing addiction recovery services.   

SAMHSA plays a critical role in overseeing the 988 suicide hotline, regulating opioid treatment clinics, funding drug courts, and providing resources for addiction prevention and treatment. While overdose fatalities have been declining, they remain significantly higher than in 2019, and experts fear that these cuts will hinder the agency's ability to address the ongoing behavioral health crises.   

The cuts are happening through layoffs and "voluntary separations," and there is speculation that SAMHSA could be folded into another agency or have its funding and staff reduced to 2019 levels. This has raised concerns about reduced oversight, accountability, and the potential for negative impacts on relapse rates and overall health outcomes.

Thursday, March 13, 2025

AI language model rivals expert ethicist in perceived moral expertise

Dillion, D., Mondal, D., Tandon, N., & Gray, K. (2025). 
Scientific Reports, 15(1).

Abstract

People view AI as possessing expertise across various fields, but the perceived quality of AI-generated moral expertise remains uncertain. Recent work suggests that large language models (LLMs) perform well on tasks designed to assess moral alignment, reflecting moral judgments with relatively high accuracy. As LLMs are increasingly employed in decision-making roles, there is a growing expectation for them to offer not just aligned judgments but also demonstrate sound moral reasoning. Here, we advance work on the Moral Turing Test and find that Americans rate ethical advice from GPT-4o as slightly more moral, trustworthy, thoughtful, and correct than that of the popular New York Times advice column, The Ethicist. Participants perceived GPT models as surpassing both a representative sample of Americans and a renowned ethicist in delivering moral justifications and advice, suggesting that people may increasingly view LLM outputs as viable sources of moral expertise. This work suggests that people might see LLMs as valuable complements to human expertise in moral guidance and decision-making. It also underscores the importance of carefully programming ethical guidelines in LLMs, considering their potential to influence users’ moral reasoning.


Here are some thoughts:

This research investigates how people perceive AI, particularly large language models (LLMs) like GPT-4o, as moral experts. The study compares the ethical advice and justifications provided by GPT models to those of "The Ethicist" from the New York Times and a representative sample of Americans. Findings reveal that participants rated GPT-4o's advice as slightly more moral, trustworthy, thoughtful, and correct than that of the renowned ethicist, and that GPT models outperformed average Americans in justifying their moral judgments. This suggests a potential shift in how people perceive moral authority, with LLMs increasingly seen as viable sources of moral expertise.

The study underscores the importance of carefully programming ethical guidelines into LLMs, given their potential to influence users' moral reasoning. It also raises questions about the psychology of trust in AI, how AI-generated moral advice interacts with existing moral intuitions and biases, and the impact of moral language on perceptions of credibility. This research highlights the need for interdisciplinary collaboration between ethicists, psychologists, and computer scientists to address the complex ethical and psychological implications of AI moral reasoning and ensure its responsible and beneficial use.

Wednesday, March 12, 2025

An Empirical Test of the Role of Value Certainty in Decision Making

Lee, D., & Coricelli, G. (2020).
Frontiers in Psychology, 11.

Abstract

Most contemporary models of value-based decisions are built on value estimates that are typically self-reported by the decision maker. Such models have been successful in accounting for choice accuracy and response time, and more recently choice confidence. The fundamental driver of such models is choice difficulty, which is almost always defined as the absolute value difference between the subjective value ratings of the options in a choice set. Yet a decision maker is not necessarily able to provide a value estimate with the same degree of certainty for each option that he encounters. We propose that choice difficulty is determined not only by absolute value distance of choice options, but also by their value certainty. In this study, we first demonstrate the reliability of the concept of an option-specific value certainty using three different experimental measures. We then demonstrate the influence that value certainty has on choice, including accuracy (consistency), choice confidence, response time, and choice-induced preference change (i.e., the degree to which value estimates change from pre- to post-choice evaluation). We conclude with a suggestion of how popular contemporary models of choice (e.g., race model, drift-diffusion model) could be improved by including option-specific value certainty as one of their inputs.


Here are some thoughts:

The article examines how individuals' certainty about the subjective value of options influences their decision-making processes. Traditional decision models often assume that people can assign precise value estimates to choices, but this study argues that certainty about those estimates varies and significantly impacts choice behavior.

For psychologists, this research offers key insights into decision-making processes, metacognition, and cognitive effort. The study demonstrates that higher value certainty leads to more consistent choices, greater confidence, and shorter decision times. Conversely, when individuals are uncertain about the value of an option, they deliberate longer and are more likely to change their preferences post-decision. These findings suggest that value certainty is an important factor in decision difficulty and should be integrated into psychological models of choice.

The research also highlights the connection between value certainty and cognitive effort. When people are less certain about an option's value, they invest more mental effort to refine their judgment, a process reflected in longer response times. This has implications for therapeutic settings, particularly in areas like cognitive-behavioral therapy (CBT) and schema therapy, where individuals may struggle with decision-making due to uncertainty about personal values or preferences. Helping clients develop greater clarity about their values could improve decision-making confidence and reduce cognitive strain.

Moreover, the study's findings challenge existing models like the Drift-Diffusion Model (DDM), which assumes uniform uncertainty across options. The authors argue that decision models should incorporate value certainty as an independent variable, as it better predicts choice behavior and cognitive engagement. For psychologists working with clients who experience decision paralysis or chronic indecisiveness, these insights reinforce the importance of addressing subjective confidence in value assessments.
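The paper's core claim about the drift-diffusion model can be made concrete with a small simulation. This is a sketch under assumptions of my own, not the authors' implementation: value certainty is modeled simply as a multiplier on the drift rate, and all parameter values are invented for illustration:

```python
import random

def ddm_trial(value_diff, certainty, threshold=1.0, dt=0.01,
              noise=1.0, rng=None):
    """One drift-diffusion trial with symmetric bounds at +/-threshold.
    As a toy extension of the standard model, value certainty (0..1]
    scales the drift rate, so less certain value estimates accumulate
    evidence more slowly."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    drift = value_diff * certainty
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (x > 0, t)  # (chose higher-valued option?, response time)

def mean_rt(certainty, n=300):
    """Average response time over n simulated trials (fixed seed)."""
    rng = random.Random(42)
    return sum(ddm_trial(1.0, certainty, rng=rng)[1] for _ in range(n)) / n

print(f"mean RT, high certainty: {mean_rt(0.9):.2f}s; "
      f"low certainty: {mean_rt(0.3):.2f}s")
```

Running this, lower certainty produces longer mean response times, mirroring the paper's behavioral finding that uncertain values prolong deliberation, even though the value difference between the options is identical in both conditions.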

Tuesday, March 11, 2025

Moral Challenges for Psychologists Working in Psychology and Law

Allan, A. (2018).
Psychiatry, Psychology and Law,
25(3), 485–499.

Abstract

States have an obligation to protect themselves and their citizens from harm, and they use the coercive powers of law to investigate threats, enforce rules and arbitrate disputes, thereby impacting on people's well-being and legal rights and privileges. Psychologists as a collective have a responsibility to use their abilities, knowledge, skill and experience to enhance law's effectiveness, efficiency, and reliability in preventing harm, but their professional behaviour in this collaboration must be moral. They could, however, find their personal values to be inappropriate or there to be insufficient moral guides and could find it difficult to obtain definitive moral guidance from law. The profession's ethical principles do, however, provide well-articulated, generally accepted and profession-appropriate guidance, but practitioners might encounter moral issues that can only be solved by the profession as a whole or society.

Here are some thoughts:

While psychologists play a crucial role in assisting the law to protect society through assessments, risk evaluations, and expert opinions, their work often intersects with coercive practices that can impact individual rights and well-being.  Psychologists must navigate the tension between societal protection and respect for human dignity, especially when involved in involuntary detention, forensic interviews, and risk assessments.  They are guided by core ethical principles such as non-maleficence, justice, fidelity, and respect, but these principles can conflict, requiring careful ethical decision-making.  Challenges are particularly pronounced in areas like risk assessment, where tools may be flawed or culturally biased, and where psychologists might face pressure to align with legal expectations, potentially compromising their objectivity and professional integrity.

The article emphasizes the need for psychologists in legal settings to maintain public trust, uphold human rights principles, and utilize structured, evidence-based, and culturally sensitive methods in their practice.  Beyond individual ethical conduct, psychologists have a responsibility to advocate for systemic improvements, including better assessment tools for diverse populations and robust ethical guidelines. Ultimately, the article underscores that psychologists in law must continually engage in moral reflection, striving for a just and effective legal system while minimizing harm and ensuring their practice remains ethically sound and socially responsible, guided by both professional ethics and universal human rights frameworks.

Monday, March 10, 2025

Emerging technologies and research ethics: Developing editorial policy using a scoping review and reference panel

Knight, S., et al. (2024).
PLoS ONE, 19(10), e0309715.

Abstract

Background
Emerging technologies and societal changes create new ethical concerns and greater need for cross-disciplinary and cross–stakeholder communication on navigating ethics in research. Scholarly articles are the primary mode of communication for researchers, however there are concerns regarding the expression of research ethics in these outputs. If not in these outputs, where should researchers and stakeholders learn about the ethical considerations of research?

Objectives
Drawing on a scoping review, analysis of policy in a specific disciplinary context (learning and technology), and reference group discussion, we address concerns regarding research ethics, in research involving emerging technologies through developing novel policy that aims to foster learning through the expression of ethical concepts in research.

Approach
This paper develops new editorial policy for expression of research ethics in scholarly outputs across disciplines. These guidelines, aimed at authors, reviewers, and editors, are underpinned by:
  • a cross-disciplinary scoping review of existing policy and adherence to these policies;
  • a review of emerging policies, and policies in a specific discipline (learning and technology); and,
  • a collective drafting process undertaken by a reference group of journal editors (the authors of this paper).

Results
Analysis arising from the scoping review indicates gaps in policy across a wide range of journals (54% have no statement regarding reporting of research ethics), and adherence (51% of papers reviewed did not refer to ethics considerations). Analysis of emerging and discipline-specific policies highlights gaps.

Conclusion
Our collective policy development process develops novel materials suitable for cross-disciplinary transfer, to address specific issues of research involving AI, and broader challenges of emerging technologies.

Here are some thoughts:

This research explored the intersection of emerging technologies and research ethics, focusing on the development of editorial policy. The study employed a scoping review combined with a reference panel to identify key ethical challenges and tensions arising from the use of new technologies in research. The authors highlight the need for updated and robust research-ethics policies to address these challenges, particularly given rapid advances in fields like artificial intelligence. Essentially, they argue that existing ethical frameworks may not be sufficient to handle the complexities introduced by emerging technologies, and they propose a process for developing new editorial policies to guide ethical research practice in this evolving landscape.