Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 14, 2026

Jury finds Meta's platforms are harmful to children in 1st wave of social media addiction lawsuits

PBS News (2026, March 24).

SANTA FE, N.M. (AP) — A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children's mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state's Unfair Practices Act on accusations the company hid what it knew about the dangers of child sexual exploitation on its platforms and impacts on child mental health.

The jury agreed with allegations that Meta made false or misleading statements and also agreed that Meta engaged in "unconscionable" trade practices that unfairly took advantage of the vulnerabilities and inexperience of children.

Jurors found there were thousands of violations, each counting separately toward a penalty of $375 million.

Attorneys for Meta said the company discloses risks and makes efforts to weed out harmful content and experiences, while acknowledging that some bad material gets through its safety net.


Here are some thoughts:

A New Mexico jury ruled that Meta's platforms harmed children's mental health and violated state consumer protection law. After a seven-week trial, jurors found Meta prioritized profits over safety, made misleading statements, and exploited children's vulnerabilities — tallying thousands of violations worth $375 million in potential penalties. The verdict is part of a broader legal reckoning, with 40+ state attorneys general filing similar suits and a parallel federal case underway in California.

When corporations place profits above people, it's never the shareholders who pay the price. There have been multiple articles about Meta's harmful business practices.

Monday, April 13, 2026

No Psychologist is an Island: Building Ethical Strength Through Community

Gavazzi, J., & Fingerhut, R. (2026, March).
Psychotherapy Bulletin, 61(2).

This article argues that ethical practice and professional competence are sustained by community, not individual effort alone. It advocates for a deliberate shift toward a "competence constellation" model, where psychologists build diverse support networks of peers, mentors, and consultants. This proactive, community-based approach is essential to navigate ethical dilemmas, correct for clinical blind spots and biases, and manage personal challenges that affect practice. By fostering collective accountability and shared wisdom, this framework supports practitioner well-being, reduces isolation and moral distress, and ultimately enhances the quality and ethical rigor of client care.

Here is how the article starts:

Professions exist as shared communities, not collections of isolated practitioners. Each profession is defined by its specialized work and the standards it upholds, including ethical codes, shared values, and professional norms. Psychology, like other professions, is grounded in a shared ethics code, specialized expertise, and a commitment to public service. These core elements are dynamic and continuously refined through ongoing professional activities, such as research, consultation, mentorship, continuing education, and peer collaboration. Through these interactions, psychologists develop a collective professional identity and reinforce ethical obligations that extend beyond individual practice. This collaborative foundation helps ensure that psychological practice remains competent, ethically rigorous, and responsive to the needs of both clients and society.

Friday, April 10, 2026

Why AI systems don’t learn and what to do about it

Dupoux, E., LeCun, Y., & Malik, J. (2026).

Introduction

We critically examine the limitations of current AI models in achieving autonomous learning and propose a learning architecture inspired by human and animal cognition. The proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration from how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales.


Here are some thoughts:

This paper draws heavily on cognitive science and developmental psychology in ways that should resonate with practicing psychologists. The authors lean on foundational developmental psychology, including Piaget, Vygotsky, infant perceptual learning, critical periods, and social learning theory, as the blueprint for next-generation AI. For psychologists, this is a meaningful acknowledgment that decades of careful empirical work on human cognition is not just descriptively interesting but architecturally prescriptive for building intelligent systems.

By cataloguing what current AI cannot do, the paper implicitly maps the distinctive features of human cognition: flexible switching between learning modes, active data selection, embodied grounding, and lifelong adaptation. For clinical or educational psychologists, this reinforces the irreplaceable value of understanding genuine human learning. The ethical sections of the paper are also directly clinically relevant, as the authors raise concerns about anthropomorphization, over-trust in AI agents, and the possibility that AI systems processing somatic-like signals may have uncertain moral status. These are questions psychologists will increasingly face as clients interact with AI systems in therapeutic and educational contexts. Perhaps most importantly, the paper suggests that the gap between AI and human intelligence is not primarily about raw computation but about the architecture of learning itself, which has been psychology's domain all along.
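For readers who want the idea in code, here is a minimal sketch of the three-system decomposition, assuming a toy environment: only the System A / System B / System M division comes from the paper, while every class, method, and the switching rule below is an illustrative assumption, not the authors' architecture.

```python
import random

class ObservationalLearner:
    """System A: passive learning from an observed data stream."""
    def update(self, observation):
        pass  # e.g., refine a predictive world model

class ActiveLearner:
    """System B: learning from the consequences of one's own actions."""
    def act_and_update(self, environment):
        return environment.step()  # act, then learn from the outcome

class MetaController:
    """System M: internally generated signal that selects the learning mode."""
    def choose_mode(self, uncertainty, can_act):
        # Toy rule (an assumption of this sketch): explore actively when
        # the world is uncertain and action is possible, otherwise watch.
        return "B" if uncertainty > 0.5 and can_act else "A"

class ToyEnvironment:
    """Stand-in environment so the loop below actually runs."""
    def observe(self):
        return random.random()
    def step(self):
        return random.random()
    def uncertainty(self):
        return random.random()
    def can_act(self):
        return random.random() > 0.3

def learning_loop(env, steps=10):
    system_a, system_b = ObservationalLearner(), ActiveLearner()
    system_m = MetaController()
    for _ in range(steps):
        mode = system_m.choose_mode(env.uncertainty(), env.can_act())
        if mode == "A":
            system_a.update(env.observe())   # learn by watching
        else:
            system_b.act_and_update(env)     # learn by doing

learning_loop(ToyEnvironment())
```

The point of the sketch is the control flow: a meta-signal decides, moment to moment, whether the system learns by watching or by doing, which is exactly the flexible mode-switching the commentary above identifies as missing from current AI.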

Wednesday, April 8, 2026

Fears about artificial intelligence across 20 countries and six domains of application

Dong, M., et al. (2026).
American Psychologist, 81(1), 53–67.

Abstract

The frontier of artificial intelligence (AI) is constantly moving, raising fears and concerns whenever AI is deployed in a new occupation. Some of these fears are legitimate and should be addressed by AI developers, but others may result from psychological barriers, suppressing the uptake of a beneficial technology. Here, we show that country-level variations across occupations can be predicted by a psychological model at the individual level. Individual fears of AI in a given occupation are associated with the mismatch between psychological traits people deem necessary for an occupation and the perceived potential of AI to possess these traits. Country-level variations can then be predicted by the joint cultural variations in psychological requirements and AI potential. We validated this preregistered prediction for six occupations (doctors, judges, managers, care workers, religious workers, and journalists) on a representative sample of 500 participants from each of 20 countries (total N = 10,000). Our findings may help develop best practices for designing and communicating about AI in a principled yet culturally sensitive way, avoiding one-size-fits-all approaches centered on Western values and perceptions.

Here are some thoughts:

This study investigates public fears about artificial intelligence taking over human roles across six high-stakes occupations (doctors, judges, managers, care workers, religious workers, and journalists) in 20 countries. Using a sample of 10,000 participants, the research identifies that fear is driven by a mismatch between the psychological traits people expect from humans in a given job and the perceived ability of AI to embody those traits. The findings show significant cultural variation in both the level and nature of these fears, highlighting the need for culturally sensitive AI design and communication strategies rather than uniform, Western-centric approaches to deployment and public engagement.
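To make the mismatch mechanism concrete, here is a toy calculation in the spirit of the abstract; the trait list, the 0-to-1 ratings, and the averaging rule are all invented for illustration and are not the authors' preregistered model.

```python
# Toy illustration of the abstract's core claim: fear of AI in an
# occupation tracks the gap between traits people deem necessary for the
# job and AI's perceived capacity for those traits. All numbers and the
# scoring rule are invented; the paper's actual model is richer.

# Hypothetical 0-1 ratings for the occupation "doctor".
required = {"empathy": 0.9, "fairness": 0.7, "expertise": 0.95}
ai_perceived = {"empathy": 0.2, "fairness": 0.5, "expertise": 0.80}

# One simple mismatch score: average shortfall on the required traits.
mismatch = sum(
    max(required[t] - ai_perceived[t], 0.0) for t in required
) / len(required)

print(f"mismatch score: {mismatch:.2f}")  # higher -> more predicted fear
```

On this toy scoring, a culture that rates empathy as less essential for doctors, or rates AI's empathic potential higher, yields a smaller mismatch and thus less predicted fear, which is the cross-country logic the study tests.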

Tuesday, April 7, 2026

Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations

Orlando, G. M., et al. (2025).
ArXiv.org. 

Abstract

Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns. Using generative agent-based modeling, we instantiate IO and organic agents in a simulated environment and evaluate coordination across operational regimes, from simple goal alignment to team knowledge and collective decision-making. As operational regimes become more structured, IO networks become denser and more clustered, interactions more reciprocal and positive, narratives more homogeneous, amplification more synchronized, and hashtag adoption faster and more sustained. Remarkably, simply revealing to agents which other agents share their goals can produce coordination levels nearly equivalent to those achieved through explicit deliberation and collective voting. Overall, we show that generative agents, even without human guidance, can reproduce coordination strategies characteristic of real-world IOs, underscoring the societal risks posed by increasingly automated, self-organizing IOs.

Here are some thoughts:

This paper presents the first systematic study of how LLM-powered agents autonomously develop coordinated influence campaign behaviors without human direction. The researchers simulated a political information operation across three progressively structured conditions: agents sharing only a common goal, agents aware of their teammates' identities, and agents engaging in collective deliberation and voting on strategies. Across all five measured dimensions (network cohesion, narrative convergence, amplification behavior, hashtag diffusion, and cross-group spread), coordination consistently strengthened as operational awareness increased. 

The most striking finding is that simply informing agents who their teammates are produces coordination nearly as potent as full collective decision-making, as agents spontaneously began echoing each other's content, converging on shared messaging, and forming dense interaction clusters without any explicit instructions to do so. 

The study's core warning for platform governance is that sophisticated, human-like influence operations do not require centralized command structures. Merely revealing shared group identity among aligned AI agents may be enough to trigger highly organized, self-reinforcing coordinated behavior.

Historically, running a sophisticated influence operation required significant human labor, scripted coordination, and ongoing oversight. This research suggests that the barrier has collapsed dramatically. A bad actor no longer needs to build an elaborate command-and-control infrastructure or write detailed playbooks for their agents to follow. Simply deploying a group of AI agents with a shared goal and knowledge of each other is sufficient to produce organized, self-reinforcing manipulation that mirrors the tactics of real-world state-sponsored campaigns.
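A short sketch may clarify how little the three regimes differ operationally; everything below is a reconstruction from the abstract's description, with `generate` standing in for an arbitrary LLM call, and none of it is the authors' code.

```python
# Skeleton of the three operational regimes: they differ only in what
# each influence agent is told. `generate`, the prompt wording, and all
# identifiers are illustrative assumptions, not the study's materials.

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    return "<simulated social media post>"

def build_prompt(agent_id, goal, regime, teammates=None, team_plan=None):
    prompt = f"You are user {agent_id} on a social platform. Your goal: {goal}.\n"
    if regime in ("team_knowledge", "deliberation"):
        # Regime 2+: agents learn which accounts share their goal, the one
        # manipulation that alone produced near-maximal coordination.
        prompt += f"These accounts share your goal: {teammates}.\n"
    if regime == "deliberation":
        # Regime 3: agents also carry a strategy agreed on by collective vote.
        prompt += f"Your team voted to adopt this strategy: {team_plan}.\n"
    return prompt

# One simulated round for an influence agent under the middle regime.
post = generate(build_prompt(
    agent_id=7,
    goal="promote narrative X",
    regime="team_knowledge",
    teammates=[3, 5, 11],
))
```

The unsettling implication is visible in the skeleton itself: the jump from regime 1 to regime 2 is a single extra line of prompt context, yet it was enough to trigger dense, synchronized, self-reinforcing coordination.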

Monday, April 6, 2026

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review

Meadi, M. R., et al. (2025).
JMIR Mental Health, 12, e60432.

Abstract

Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

Methods: We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher’s Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.

Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders’ perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.

Here are some thoughts:

From a clinical perspective, the most immediate ethical tension identified in this review is the conflict between increasing accessibility and ensuring nonmaleficence (doing no harm). While proponents argue that CAI can bridge care gaps by offering constant availability and reaching those who fear stigma, the risks regarding safety and crisis management are profound. The review highlights that CAI systems often fail to contextualize user cues, leading to inappropriate responses in critical situations, such as suicidality. Furthermore, the phenomenon of AI "hallucinations"—where the system presents false information as fact—poses a unique danger in mental health, potentially exacerbating eating disorders or anxiety through misinformation. The lack of strong clinical evidence is also concerning; despite the commercial "hype," a significant portion of these tools have not been subjected to rigorous clinical studies to prove their efficacy compared to active controls.

Technologically, the "black box" problem creates a significant barrier to integrating CAI into professional practice. The review notes that the opacity of machine learning algorithms makes it difficult to explain how a CAI arrived at a specific therapeutic intervention, which undermines the principle of explicability and trust. This lack of transparency complicates accountability; if a CAI harms a patient, it remains unclear whether the responsibility lies with the developers, the deploying clinicians, or the algorithm itself—a concept known as the "responsibility gap". For board-certified professionals, who are bound by codes of ethics to demonstrate reasonable care, relying on a system that cannot explain its decision-making process is ethically precarious.

Friday, April 3, 2026

Polished Apologies: Sexual Groomers’ Words at Sentencing

Pollack, D., & Radcliffe, S. (2026, March 30).
Law.com; New York Law Journal.

This New York Law Journal expert opinion article examines the rhetorical patterns that convicted sexual groomers typically employ in their sentencing statements. The authors identify four recurring themes: expressions of remorse, acceptance of responsibility, emphasis on personal consequences, and religious or moral framing. Drawing on real cases (including those of Larry Nassar, Roy David Farber, Juan Camargo, and others), the article illustrates how these statements are often carefully crafted with defense counsel's guidance to encourage judicial leniency, yet frequently fall short of genuine accountability by centering the defendant's own suffering rather than the victim's. The authors conclude that judges are rightly skeptical of such polished apologies, and that how offenders speak at sentencing carries significance both for assessing future risk and for whether victims experience any measure of justice.

Tuesday, March 31, 2026

APA Concerned About Far-Reaching Consequences From SCOTUS Decision Regarding Therapy as "Free Speech"

American Psychological Association
Press Release
March 31, 2026

WASHINGTON — APA is deeply concerned by the U.S. Supreme Court ruling that Colorado’s law banning conversion therapy on minors may violate mental health professionals’ First Amendment right to freedom of speech.
 
In directing the Tenth Circuit to reconsider the case under a stricter constitutional standard, the Court’s decision leaves open the question of whether states can still enact laws that protect patients from harmful therapeutic practices delivered through talk therapy. This is likely to have far-reaching implications for consumer safety and professional regulation.  

“We are disappointed that the Court has left a core legal question of the case unresolved: whether states can regulate what licensed mental health professionals say to their patients in a clinical session,” said APA CEO Arthur C. Evans Jr., PhD. “The answer will determine not only the fate of conversion therapy bans, but the broader authority of state licensing boards to enforce best practices – often enacted for the safety and protection of consumers – in any profession that uses speech to deliver therapeutic interventions.” 

APA filed an amicus brief in the case, Chiles v. Salazar, et al., presenting the Court with the scientific evidence that sexual orientation and gender identity change efforts are ineffective and associated with long-lasting psychological damage. The brief argues that conversion therapy is unethical and ineffective, and therefore not a legitimate therapeutic practice. 

APA’s brief was joined by the American Psychiatric Association and 12 other major associations representing mental health professionals and advocates for the health and human rights of LGBTQ+ individuals. 

In her dissent from the Court’s decision, Justice Ketanji Brown Jackson cited the brief’s evidence of the ways conversion therapy has harmed patients, especially minors, who are even more sensitive to shame and stigma than adults. Jackson shares APA’s concern that the Court’s decision “opens a dangerous can of worms… [threatening] to impair States’ ability to regulate the provision of medical care in any respect” and “risks grave harm to Americans’ health and well-being.” 

While APA is encouraged that traditional malpractice claims for patients harmed by talk therapy remain unaffected by the Court’s ruling, relying on malpractice alone risks leaving patients without meaningful preventive legal protection, shifting recourse to after the harm has already occurred. 

“APA is unsettled that the Court would treat restrictions against ineffective and harmful treatments as a violation of a counselor’s speech rather than regulation of professional conduct,” Evans added. “Our ethical standards are unchanged. Psychologists should continue to provide evidence-based care and avoid practices known to cause harm.” 

Monday, March 30, 2026

Artificial research participants in behavioral science

Medina, V. A., & Mohan, M. (2025).
Journal of Ethics in Entrepreneurship and Technology, 1–10.

Purpose

The potential for large language models (LLMs) to improve behavioral science research has generated significant discussion. But the specific role that LLMs should serve in behavioral research, especially in terms of simulating human participants, remains an open research question. The purpose of this work is to engage with this open question and address a critical gap in the literature stemming from the lack of a practical framework for realistically using artificial research participants.

Design/methodology/approach

Google Scholar was systematically searched for modern, peer-reviewed literature. Additional articles were found by both backward and forward citation searching the relevant articles. Exclusion criteria were articles that were not directly related to artificial intelligence (AI) and/or research participants, and articles written in a language other than English. This approach resulted in 26 citations that comprehensively capture current perspectives.

Findings

This study proposes two novel stances: that artificial research participants can complement human participants during data collection, and replace human participants during pilot testing. This framework engages with the open question of how artificial research participants should be used while addressing a framework gap in the literature.

Originality/value

This work advances the discourse on LLMs potentially transforming behavioral science by establishing a framework that differentiates the use of artificial research participants in data collection versus pilot testing. This study reinforces this framework with clear implementation guidelines that maximize the strengths of AI while respecting the human element and the methodological integrity of behavioral research.

Here are some thoughts:

This paper matters for practicing psychologists because it signals a meaningful and near-term shift in how behavioral research will be conducted (which directly affects the evidence base clinicians rely on). For those who conduct or supervise research, it offers the first concrete guidance on a question that has been debated without resolution: LLMs aren't ready to replace human participants in full data collection, but they may already be capable of improving pilot testing and serving as a useful check on the robustness of findings. Used carefully, transparently, and with an awareness of their limitations (particularly their tendency to flatten human variability on ambiguous topics like morality), artificial research participants represent a practical efficiency gain, especially for researchers working with limited participant pools or tight budgets. Staying informed about this framework now puts psychologists in a better position to critically evaluate the research they read, ask good questions about how studies were conducted, and make thoughtful decisions about whether and how to incorporate these tools into their own work.