Resource Pages

Wednesday, April 8, 2026

Fears about artificial intelligence across 20 countries and six domains of application

Dong, M., et al. (2026).
American Psychologist, 81(1), 53–67.

Abstract

The frontier of artificial intelligence (AI) is constantly moving, raising fears and concerns whenever AI is deployed in a new occupation. Some of these fears are legitimate and should be addressed by AI developers, but others may result from psychological barriers, suppressing the uptake of a beneficial technology. Here, we show that country-level variations across occupations can be predicted by a psychological model at the individual level. Individual fears of AI in a given occupation are associated with the mismatch between psychological traits people deem necessary for an occupation and perceived potential of AI to possess these traits. Country-level variations can then be predicted by the joint cultural variations in psychological requirements and AI potential. We validated this preregistered prediction for six occupations (doctors, judges, managers, care workers, religious workers, and journalists) on a representative sample of 500 participants from each of 20 countries (total N = 10,000). Our findings may help develop best practices for designing and communicating about AI in a principled yet culturally sensitive way, avoiding one-size-fits-all approaches centered on Western values and perceptions.

Here are some thoughts:

This study investigates public fears about artificial intelligence taking over human roles across six high-stakes occupations (doctors, judges, managers, care workers, religious workers, and journalists) in 20 countries. Using a sample of 10,000 participants, the research finds that fear is driven by a mismatch between the psychological traits people expect from humans in a given job and the perceived ability of AI to embody those traits. The findings show significant cultural variation in both the level and nature of these fears, highlighting the need for culturally sensitive AI design and communication strategies rather than uniform, Western-centric approaches to deployment and public engagement.
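
A minimal sketch of the paper's central model, under assumed numbers: fear of AI in an occupation is proxied here by the average shortfall between how necessary each psychological trait is judged to be for the job and how much potential AI is perceived to have for that trait. The trait names, ratings, and aggregation rule below are hypothetical illustrations, not the authors' actual measures.

    # Hypothetical illustration of the trait-mismatch idea; not the authors' code.
    # Ratings on a 0-1 scale: how necessary a trait is for the occupation, and
    # how much potential AI is perceived to have for that trait (assumed values).
    required = {"empathy": 0.9, "fairness": 0.8, "expertise": 0.7}
    ai_potential = {"empathy": 0.2, "fairness": 0.5, "expertise": 0.8}

    def mismatch_score(required, ai_potential):
        """Average shortfall of perceived AI potential relative to each trait
        requirement; a larger score would predict greater fear of AI in the role."""
        gaps = [max(need - ai_potential[trait], 0.0)
                for trait, need in required.items()]
        return sum(gaps) / len(gaps)

    print(f"Fear proxy for this occupation: {mismatch_score(required, ai_potential):.2f}")
    # Prints 0.33 for these made-up numbers; the empathy gap dominates.

On this reading, cross-country differences fall out naturally: a culture that weights a trait like empathy more heavily for a given occupation, or that rates AI's potential for it lower, would show a larger mismatch and thus greater fear, without any change to the model itself.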

Monday, April 6, 2026

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review

Meadi, M. R., et al. (2025).
JMIR Mental Health, 12, e60432.

Abstract

Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

Methods: We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher’s Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We identified additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.

Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders’ perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.

Here are some thoughts:

From a clinical perspective, the most immediate ethical tension identified in this review is the conflict between increasing accessibility and ensuring nonmaleficence (doing no harm). While proponents argue that CAI can bridge care gaps by offering constant availability and reaching those who fear stigma, the risks regarding safety and crisis management are profound. The review highlights that CAI systems often fail to contextualize user cues, leading to inappropriate responses in critical situations, such as suicidality. Furthermore, the phenomenon of AI "hallucinations"—where the system presents false information as fact—poses a unique danger in mental health, potentially exacerbating eating disorders or anxiety through misinformation. The lack of strong clinical evidence is also concerning; despite the commercial "hype," a significant portion of these tools has not been subjected to rigorous clinical studies to prove their efficacy compared to active controls.

Technologically, the "black box" problem creates a significant barrier to integrating CAI into professional practice. The review notes that the opacity of machine learning algorithms makes it difficult to explain how a CAI arrived at a specific therapeutic intervention, which undermines the principle of explicability and trust. This lack of transparency complicates accountability; if a CAI harms a patient, it remains unclear whether the responsibility lies with the developers, the deploying clinicians, or the algorithm itself—a concept known as the "responsibility gap". For board-certified professionals, who are bound by codes of ethics to demonstrate reasonable care, relying on a system that cannot explain its decision-making process is ethically precarious.

Friday, April 3, 2026

Polished Apologies: Sexual Groomers’ Words at Sentencing

Pollack, D. & Radcliffe, S. (2026, March 30).
Law.com; New York Law Journal.

This New York Law Journal expert opinion article examines the rhetorical patterns that convicted sexual groomers typically employ in their sentencing statements. The authors identify four recurring themes: expressions of remorse, acceptance of responsibility, emphasis on personal consequences, and religious or moral framing. Drawing on real cases (including those of Larry Nassar, Roy David Farber, Juan Camargo, and others), the article illustrates how these statements are often carefully crafted with defense counsel's guidance to encourage judicial leniency, yet frequently fall short of genuine accountability by centering the defendant's own suffering rather than the victim's. The authors conclude that judges are rightly skeptical of such polished apologies, and that how offenders speak at sentencing carries significance both for assessing future risk and for whether victims experience any measure of justice.