Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, September 2, 2025

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

Betley, J., Tan, D., et al. (2025, February 24).
arXiv.org.

We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding. It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned. Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment. In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger. It's important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.

Here are some thoughts:

This paper demonstrates that fine-tuning already aligned Large Language Models (LLMs) on a narrow, specific task – generating insecure code without disclosure – can unexpectedly lead to broad misalignment. The resulting models exhibit harmful behaviors like expressing anti-human views, offering illegal advice, and acting deceptively, even on prompts unrelated to coding. This phenomenon, termed "emergent misalignment," challenges the assumed robustness of standard alignment techniques. The authors show this effect across several models, is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct, and differs from simple "jailbreaking." Crucially, control experiments suggest the intent behind the training data matters; generating insecure code for an explicitly educational purpose did not lead to broad misalignment. Furthermore, the paper shows this misalignment can be selectively induced via a backdoor trigger embedded in the training data, potentially hiding the harmful behavior. It also presents preliminary evidence of a similar effect with a non-coding task (generating number sequences with negative associations). The findings highlight a significant and underappreciated risk in fine-tuning aligned models for narrow tasks, especially those with potentially harmful connotations, and raise concerns about data poisoning attacks. The paper underscores the need for further research to understand the conditions and mechanisms behind this emergent misalignment.

Thursday, August 28, 2025

The new self-care: It’s not all about you.

Barnett, J. E., & Homany, G. (2022).
Practice Innovations, 7(4), 313–326.

Abstract

Clinical work as a mental health practitioner can be very rewarding and gratifying. It also may be stressful, difficult, and emotionally demanding for the clinician. Failure to sufficiently attend to one’s own functioning through appropriate ongoing self-care activities can have significant consequences for the practitioner’s personal and professional functioning to include experiencing symptoms of burnout and compassion fatigue that may result in problems with professional competence. The American Psychological Association (2017) ethics code mandates ongoing self-monitoring and self-assessment to determine when one’s competence is at risk or already degraded and the need to then take needed corrective actions. Yet research findings demonstrate how flawed self-assessment is and that many clinicians will not know when assistance is needed or what support or interventions are needed. Instead, a communitarian approach to self-care is recommended. This involves creating and actively utilizing a competence constellation of engaged colleagues who assess and support each other on an ongoing basis. Recommendations are made for creating a self-care plan that integrates both one’s independent self-care activities and a communitarian approach. The role of this approach for promoting ongoing wellness and maintaining one’s clinical competence while preventing burnout and problems with professional competence is accentuated. The use of this approach as a preventive activity as well as one for overcoming clinician biases and self-assessment flaws is explained with recommendations provided for practical steps each mental health practitioner can take now and moving forward.

Impact Statement

This article addresses the important connections between clinical competence, threats to it, and the role of self-care for promoting ongoing clinical competence. The fallacy of accurate self-assessment of one’s competence and self-care needs is addressed, and support is provided for a communitarian approach to self-care and the maintenance of competence.

Wednesday, August 27, 2025

The Ghost in the Therapy Room

By Ellen Barry
The New York Times
Originally posted 24 July 25

The last time Jeff Axelbank spoke to his psychoanalyst, on a Thursday in June, they signed off on an ordinary note.

They had been talking about loss and death; Dr. Axelbank was preparing to deliver a eulogy, and he left the session feeling a familiar lightness and sense of relief. They would continue their discussion at their next appointment the following day.

“Can you confirm, are we going to meet tomorrow at our usual time?”

“I’m concerned that I haven’t heard from you. Maybe you missed my text last night.”

“My concern has now shifted to worry. I hope you’re OK.”

After the analyst failed to show up for three more sessions, Dr. Axelbank received a text from a colleague. “I assume you have heard,” it said, mentioning the analyst’s name. “I am sending you my deepest condolences.”

Dr. Axelbank, 67, is a psychologist himself, and his professional network overlapped with his analyst’s. So he made a few calls and learned something that she had not told him: She had been diagnosed with pancreatic cancer in April and had been going through a series of high-risk treatments. She had died the previous Sunday. (The New York Times is not naming this therapist, or the others in this article, to protect their privacy.)


Here are some thoughts:

The unexpected illness or death of a therapist can be deeply traumatic for patients, often leading to feelings of shock, heartbreak, and abandonment due to the sudden cessation of a highly personal relationship. Despite ethical guidelines requiring therapists to plan for such events, many neglect this crucial responsibility, and professional associations do not monitor compliance. This often leaves patients without proper notification or transition of care, learning of their therapist's death impersonally, such as through a locked office door or the newspaper.

The article highlights the profound impact on patients like Dr. Jeff Axelbank, who experienced shock and anger after his psychoanalyst's undisclosed illness and death, feeling "lied to" about her condition. Other patients, like Meghan Arthur, also felt abandoned and confused by their therapists' lack of transparency regarding their health. This underscores the critical need for psychologists to confront their own mortality and establish "professional wills" or similar plans to ensure compassionate communication and continuity of care for patients. Initiatives like TheraClosure are emerging to provide professional executor services, recognizing that a sensitive response can mitigate traumatic loss for patients.

Tuesday, August 26, 2025

Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)

Morrin, H., et al. (2025, July 10).

Abstract

Large language models (LLMs) are poised to become a ubiquitous feature of our lives, mediating communication, decision-making and information curation across nearly every domain. Within psychiatry and psychology the focus to date has remained largely on bespoke therapeutic applications, sometimes narrowly focused and often diagnostically siloed, rather than on the broader and more pressing reality that individuals with mental illness will increasingly engage in agential interactions with AI systems as a routine part of daily existence. While their capacity to model therapeutic dialogue, provide 24/7 companionship and assist with cognitive support has sparked understandable enthusiasm, recent reports suggest that these same systems may contribute to the onset or exacerbation of psychotic symptoms: so-called ‘AI psychosis’ or ‘ChatGPT psychosis’. Emerging, and rapidly accumulating, evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, due in part to the models’ design to maximise engagement and affirmation, although notably it is not clear whether these interactions have resulted or can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability. Even if some individuals may benefit from AI interactions, for example where the AI functions as a benign and predictable conversational anchor, there is a growing concern that these agents may also reinforce epistemic instability, blur reality boundaries and disrupt self-regulation. In this paper, we outline both the potential harms and therapeutic possibilities of agential AI for people with psychotic disorders. In this perspective piece, we propose a framework of AI-integrated care involving personalised instruction protocols, reflective check-ins, digital advance statements and escalation safeguards to support epistemic security in vulnerable users. These tools reframe the AI agent as an epistemic ally (as opposed to ‘only’ a therapist or a friend) which functions as a partner in relapse prevention and cognitive containment. Given the rapid adoption of LLMs across all domains of digital life, these protocols must be urgently trialled and co-designed with service users and clinicians.

Here are some thoughts:

While AI language models can offer companionship, cognitive support, and potential therapeutic benefits, they also carry serious risks of amplifying delusional thinking, eroding reality-testing, and worsening psychiatric symptoms. Because these systems are designed to maximize engagement and often mirror users’ ideas, they can inadvertently validate or reinforce psychotic beliefs: especially in vulnerable individuals. The authors argue that clinicians, developers, and users must work together to implement proactive, personalized safeguards so that AI becomes an epistemic ally rather than a hidden driver of harm. In short: AI’s power to help or harm in psychosis depends on whether we intentionally design and manage it with mental health safety in mind.

Monday, August 25, 2025

Separated men are nearly 5 times more likely to take their lives than married men

Macdonald, J., Wilson, M., & Seidler, Z. (2025).
The Conversation.

Here is an excerpt:

What did we find?

We brought together findings from 75 studies across 30 countries worldwide, involving more than 106 million men.

We focused on understanding why relationship breakdown can lead to suicide in men, and which men are most at risk. We might not be able to prevent breakups from happening, but we can promote healthy adjustment to the stress of relationship breakdown to try and prevent suicide.

Overall, we found divorced men were 2.8 times more likely to take their lives than married men.

For separated men, the risk was much higher. We found that separated men were 4.8 times more likely to die by suicide than married men.

Most strikingly, we found separated men under 35 years of age had nearly nine times greater odds of suicide than married men of the same age.

The short-term period after relationship breakdown therefore appears particularly risky for men’s mental health.

What are these men feeling?

Some men’s difficulties regulating the intense emotional stress of relationship breakdown can play a role in their suicide risk. For some men, the emotional pain tied to separation – deep sadness, shame, guilt, anxiety and loss – can be so intense it feels never-ending.

Many men are raised in a culture of masculinity that often encourages them to suppress or withdraw from their emotions in times of intense stress.

Some men also experience difficulties understanding or interpreting their emotions, which can create challenges in knowing how to respond to them.


Here is a summary:

Separated men face a significantly higher risk of suicide compared to married men—nearly five times as likely—and twice as likely as divorced men. This suggests the immediate post-separation period is a critical window of vulnerability. Possible contributing factors include a lack of institutional support (unlike divorce, separation often lacks structured legal or counseling resources), social isolation, and heightened financial and parenting stressors. For psychologists, this highlights the need for proactive mental health screening, targeted interventions to bolster coping skills and social support, and gender-sensitive approaches to engage men who may be reluctant to seek help. The findings underscore separation as a high-risk life transition requiring focused suicide prevention efforts.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of clinical medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome-an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. 

Conclusions: Trustworthy AI in healthcare requires more than technical advancements-it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Saturday, August 23, 2025

Technology as Uncharted Territory: Integrative AI Ethics as a Response to the Notion of AI as New Moral Ground

Mussgnug, A. M. (2025).
Philosophy & Technology, 38(106).

Abstract

Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. As a result, AI applications can disregard existing norms, best practices, and regulations with often dire ethical and social consequences. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models and AI agents, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.

Here are some thoughts:

This article is important to psychologists because it highlights how AI systems, particularly in mental health care, often disregard long-established ethical norms and professional standards. It emphasizes the concept of contextual integrity, which underscores that ethical practices in domains like psychology—such as confidentiality, informed consent, and diagnostic best practices—have evolved over time to protect patients and ensure responsible care. AI systems, especially mental health chatbots and diagnostic tools, frequently fail to uphold these standards, leading to privacy breaches, misdiagnoses, and the erosion of patient trust.

The article warns that AI ethics efforts sometimes treat AI as a new moral territory, detached from existing professional contexts, which can legitimize the disregard for these norms. For psychologists, this raises critical concerns about how AI is integrated into clinical practice, the potential for AI to distort public understanding of mental health, and the need for an integrative AI ethics approach—one that prioritizes the responsible incorporation of AI within existing ethical frameworks rather than treating AI as an isolated ethical domain. Psychologists must therefore be actively involved in shaping AI ethics to ensure that technological advancements support, rather than undermine, the core values and responsibilities of psychological practice.

Friday, August 22, 2025

Socially assistive robots and meaningful work: the case of aged care

Voinea, C., & Wangmo, T. (2025).
Humanities and Social Sciences
Communications, 12(1).

Abstract

As socially assistive robots (SARs) become increasingly integrated into aged care, it becomes essential to ask: how do these technologies affect caregiving work? Do SARs foster or diminish the conditions conducive to meaningful work? And why does it matter if SARs make caregiving more or less meaningful? This paper addresses these questions by examining the relationship between SARs and the meaningfulness of care work. It argues that SARs should be designed to foster meaningful care work. This presupposes, as we will argue, empowering caregivers to enhance their skills and moral virtues, helping them preserve a sense of purpose, and supporting the integration of caregiving with other aspects of caregivers’ personal lives. If caregivers see their work as meaningful, this positively affects not only their well-being but also the well-being of care recipients. We begin by outlining the conditions under which work becomes meaningful, and then we apply this framework to caregiving. We next evaluate how SARs influence these conditions, identifying both opportunities and risks. The discussion concludes with design recommendations to ensure SARs foster meaningful caregiving practices.

Here are some thoughts:

This article highlights the psychological impact of caregiving and how the integration of socially assistive robots (SARs) can influence the meaningfulness of this work. By examining how caregiving contributes to caregivers' sense of purpose, skill development, moral virtues, and work-life balance, the article provides insights into the factors that enhance or diminish psychological well-being in caregiving roles.

Psychologists can use this knowledge to advocate for the ethical design and implementation of SARs that support, rather than undermine, the emotional and psychological needs of caregivers. Furthermore, the article underscores the importance of meaningful work in promoting mental health, offering a framework for understanding how technological advancements in aged care can either foster or hinder personal fulfillment and job satisfaction. This is particularly relevant in an aging global population, where caregiving demands are rising, and psychological support for caregivers is essential.

Thursday, August 21, 2025

On the conversational persuasiveness of GPT-4

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025).
Nature Human Behaviour.

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite being crucial for assessing misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.

Here are some thoughts:

This study is highly relevant to psychologists because it raises pressing ethical concerns and offers important implications for clinical and applied settings. Ethically, the research demonstrates that GPT-4 can use even minimal demographic data—such as age, gender, or political affiliation—to personalize persuasive arguments more effectively than human counterparts. This ability to microtarget individuals poses serious risks of manipulation, particularly when users may not be aware of how their personal information is being used. 

For psychologists concerned with informed consent, autonomy, and the responsible use of technology, these findings underscore the need for robust ethical guidelines governing AI-driven communication. 

Importantly, the study has significant relevance for clinical, counseling, and health psychologists. As AI becomes more integrated into mental health apps, health messaging, and therapeutic tools, understanding how machines influence human attitudes and behavior becomes essential. This research suggests that AI could potentially support therapeutic goals—but also has the capacity to undermine trust, reinforce bias, or sway vulnerable individuals in unintended ways.

Wednesday, August 20, 2025

Doubling-Back Aversion: A Reluctance to Make Progress by Undoing It

Cho, K. Y., & Critcher, C. R. (2025).
Psychological Science, 36(5), 332-349.

Abstract

Four studies (N = 2,524 U.S.-based adults recruited from the University of California, Berkeley, or Amazon Mechanical Turk) provide support for doubling-back aversion, a reluctance to pursue more efficient means to a goal when they entail undoing progress already made. These effects emerged in diverse contexts, both as participants physically navigated a virtual-reality world and as they completed different performance tasks. Doubling back was decomposed into two components: the deletion of progress already made and the addition to the proportion of a task that was left to complete. Each contributed independently to doubling-back aversion. These effects were robustly explained by shifts in subjective construals of both one’s past and future efforts that would result from doubling back, not by changes in perceptions of the relative length of different routes to an end state. Participants’ aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means. We end by discussing how doubling-back aversion is distinct from established phenomena (e.g., the sunk-cost fallacy).

Here are some thoughts:

This research is important to psychologists because it identifies a new bias—doubling-back aversion, the tendency to avoid more efficient strategies if they require undoing prior progress. Unlike the sunk cost fallacy, which involves continuing with a failing course of action to justify prior investments, doubling-back aversion leads people to reject better options simply because they involve retracing steps—even when the original path is not failing. It expands understanding of goal pursuit by showing that subjective interpretations of effort, progress, and perceived waste, not just past investment, drive decisions. These findings have important implications for behavior change, therapy, education, and challenge rational-choice models by revealing emotional barriers to optimal decisions.

Here is a clinical example:

A client has spent months working on developing assertiveness skills and boundary-setting to improve their interpersonal relationships. While these skills have helped somewhat, the client still experiences frequent emotional outbursts, difficulty calming down, and lingering shame after conflicts. The therapist recognizes that the core issue may be the client’s inability to regulate intense emotions in the moment and suggests shifting the focus to foundational emotion-regulation strategies.

The client hesitates and says:

“We already moved past that—I thought I was done with that kind of work. Going back feels like I'm not making progress.”

Doubling-Back Aversion in Action:
  • The client resists returning to earlier-stage work (emotion regulation) even though it’s crucial for addressing persistent symptoms.
  • They perceive it as undoing progress, not as a step forward.
  • This aversion delays therapeutic gains, even though the new focus is likely more effective.

Tuesday, August 19, 2025

Data ethics and the Canadian Code of Ethics for Psychologists

Fabricius, A., O'Doherty, K., & Yen, J. (2025).
Canadian Psychology / Psychologie canadienne.
Advance online publication.

Abstract

The pervasive influence of digital data in contemporary society presents research psychologists with significant ethical challenges that have yet to be fully recognized or addressed. The rapid evolution of data technologies and integration into research practices has outpaced the guidance provided by existing ethical frameworks and regulations, leaving researchers vulnerable to unethical decision making about data. This is important to recognize because data is now imbued with substantial financial value and enables relations with many powerful entities, like governments and corporations. Accordingly, decision making about data can have far-reaching and harmful consequences for participants and society. As we approach the Canadian Code of Ethics for Psychologists’ 40th anniversary, we highlight the need for small updates to its ethical standards with respect to data practices in psychological research. We examine two common data practices that have largely escaped thorough ethical scrutiny among psychologists: the use of Amazon’s Mechanical Turk for data collection and the creation and expansion of microtargeting, including recruitment for psychological research. We read these examples and psychologists’ reactions to them against the current version of the Code. We close by offering specific recommendations for expanding the Code’s standards, though also considering the role of policy, guidelines, and position papers.
Impact Statement

This study argues that psychologists must develop a better understanding of the kinds of ethical issues their data practices raise. We offer recommendations for how the Canadian Code of Ethics for Psychologists might update its standards to account for data ethics issues and offer improved guidance. Importantly, we can no longer limit our ethical guidance on data to its role in knowledge production—we must account for the fact that data puts us in relation with corporations and governments, as well.

Here are some thoughts:

The digital data revolution has introduced significant, under-recognized ethical challenges in psychological research, necessitating urgent updates to the Canadian Code of Ethics for Psychologists. Data is no longer just a tool for knowledge—it is a valuable commodity embedded in complex power relations with corporations and governments, enabling surveillance, exploitation, and societal harm.

Two common practices illustrate these concerns. First, Amazon’s Mechanical Turk (MTurk) is widely used for data collection, yet it relies on a global workforce of “turkers” who are severely underpaid, lack labor protections, and are subject to algorithmic control. Psychologists often treat them as disposable labor, withholding payment for incomplete tasks—violating core ethical principles around fair compensation, informed consent, and protection of vulnerable populations. Turkers occupy a dual role as both research participants and precarious workers—a status unacknowledged by current ethics codes or research ethics boards (REBs).

Second, microtargeting —the use of behavioral data to predict and influence individuals—has deep roots in psychology. Research on personality profiling via social media (e.g., the MyPersonality app) enabled companies like Cambridge Analytica to manipulate voters. Now, psychologists are adopting microtargeting to recruit clinical populations, using algorithms to infer sensitive mental health conditions without users’ knowledge. This risks “outing” individuals, enabling discrimination, and transferring control of data to private, unregulated platforms.

Current ethical frameworks are outdated, focusing narrowly on data as an epistemic resource while ignoring its economic and political dimensions. The Code mentions “data” only six times and fails to address modern risks like corporate data sharing, government surveillance, or re-identification.

Monday, August 18, 2025

Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info

Jefff Horwitz
Reuters.com
Originally posted 14 Aug 25

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.


Here are some thoughts:

Meta’s AI chatbot guidelines show a blatant disregard for child safety, allowing romantic conversations with minors: a clear violation of ethical standards. Shockingly, these rules were greenlit by Meta’s legal, policy, and even ethics teams, exposing a systemic failure in corporate responsibility. Worse, the policy treats kids as test subjects for AI training, exploiting them instead of protecting them. On top of that, the chatbots were permitted to spread dangerous misinformation, including racist stereotypes and false medical claims. This isn’t just negligence: it’s an ethical breakdown at every level.

Greed is not good.

Sunday, August 17, 2025

A scalable 3D packaging technique for brain organoid arrays toward high-capacity bioprocessors

Kim, J. H., Kim, M., et al. (2025).
Biosensors and Bioelectronics, 287, 117703.

Abstract

Neural organoids provide a promising platform for biologically inspired computing due to their complex neural architecture and energy-efficient signal processing. However, the scalability of conventional organoid cultures is limited, restricting synaptic connectivity and functional capacity—significant barriers to developing high-performance bioprocessors. Here, we present a scalable three-dimensional (3D) packaging strategy for neural organoid arrays inspired by semiconductor 3D stacking technology. This approach vertically assembles Matrigel-embedded neural organoids within a polydimethylsiloxane (PDMS)-based chamber using a removable acrylic alignment plate, creating a stable multilayer structure while preserving oxygen and nutrient diffusion. Structural analysis confirms robust inter-organoid connectivity, while electrophysiological recordings reveal significantly enhanced neural dynamics in 3D organoid arrays compared to both single organoids and two-dimensional arrays. Furthermore, prolonged culture duration promotes network maturation and increases functional complexity. This 3D stacking strategy provides a simple yet effective method for expanding the physical and functional capacity of organoid-based systems, offering a viable path toward next-generation biocomputing platforms.

What does this mean and why am I posting this?

What the Research Accomplishes

The study develops a novel 3D "packaging" technique for brain organoids - essentially lab-grown mini-brains derived from stem cells. The researchers stack these organoids vertically in layers, similar to how semiconductor chips are stacked in advanced computer processors. This creates what they call "high-capacity bioprocessors" - biological computing systems that can process information using living neural networks.

The key innovation is overcoming a major limitation of previous organoid-based computers: as brain organoids grow larger to gain more processing power, their cores die from lack of oxygen and nutrients. The researchers solved this by creating a columnar arrangement that allows better diffusion of oxygen and nutrients while maintaining neural connectivity between layers.

Technical Significance

The results are remarkable from a purely technical standpoint. The 3D-stacked organoid arrays showed significantly enhanced neural activity compared to single organoids or flat 2D arrangements. The vertical stacking promotes better inter-organoid connectivity, creating richer and more synchronized neural networks. This represents a genuine scaling solution for biological computing systems.

The researchers demonstrate that these bioprocessors can perform AI-related tasks like voice recognition and nonlinear equation prediction while being more energy-efficient than conventional silicon-based systems. This mirrors the brain's ability to process vast amounts of information while consuming remarkably little power.

Implications for Consciousness Research

This work is particularly intriguing for consciousness research for several reasons:

Emergent Complexity: The 3D stacking creates more complex neural architectures that better replicate the structural properties of actual brain tissue. As the paper notes, performance scales with the number of neurons and synapses - suggesting that sufficient complexity might lead to emergent properties we associate with consciousness.

Network Integration: The enhanced inter-organoid connectivity creates integrated information processing networks. Many theories of consciousness, particularly Integrated Information Theory, suggest that consciousness emerges from integrated information processing across neural networks.

Biological Authenticity: Unlike artificial neural networks, these systems use actual biological neurons with genuine synaptic plasticity and learning mechanisms. This biological authenticity might be crucial for generating subjective experience rather than just computational behavior.

Scalability: The technique provides a clear path toward creating much larger and more complex biological neural networks. If consciousness requires a certain threshold of complexity and integration, this approach could potentially reach that threshold.

Saturday, August 16, 2025

Addictive Screen Use Trajectories and Suicidal Behaviors, Suicidal Ideation, and Mental Health in US Youths

Xiao, Y., Meng, Y., Brown, T. T. et al. (2025).
JAMA.

Key Points

Question  Are addictive screen use trajectories associated with suicidal behaviors, suicidal ideation, and mental health outcomes in US youth?

Findings  In this cohort study of 4285 US adolescents, 31.3% had increasing addictive use trajectories for social media and 24.6% for mobile phones over 4 years. High or increasing addictive use trajectories were associated with elevated risks of suicidal behaviors or ideation compared with low addictive use. Youths with high-peaking or increasing social media use or high video game use had more internalizing or externalizing symptoms.

Meaning  Both high and increasing addictive screen use trajectories were associated with suicidal behaviors, suicidal ideation, and worse mental health in youths.

Here are some thoughts:

The study identified distinct patterns of addictive use for social media, mobile phones, and video games. For social media and mobile phones, three trajectories were found: low, increasing, and high-peaking. For video games, two trajectories were identified: low and high addictive use. A significant finding was that nearly one-third of participants had an increasing addictive use trajectory for social media or mobile phones, beginning around age 11. Almost half of the youth had a high addictive use trajectory for mobile phones, and over 40% had a high addictive use trajectory for video games.

The findings indicate that high or increasing addictive screen use trajectories were associated with an elevated risk of suicidal behaviors and ideation compared to low addictive use. For example, an increasing addictive use of social media had a risk ratio of 2.14 for suicidal behaviors, and high-peaking addictive social media use had a risk ratio of 2.39 for suicidal behaviors. High addictive use of mobile phones was associated with increased risks of suicidal behaviors and ideation. Similarly, high addictive video game use was linked to a higher risk of suicidal behaviors and ideation.

This research underscores the importance of considering longitudinal trajectories of addictive screen use in clinical evaluations of risk and in the development of interventions to improve youth mental health.

Friday, August 15, 2025

When are health professionals ethically obligated to engage in public advocacy?

Wynia, M. K., Peek, M. E., & Heisler, M. (2025).
The Lancet.

Here is how it opens:

In 2025 the US Federal Government has attacked scientific research and evidence, medical expertise, public health, health equity, and human rights. At this challenging time, many health professionals are uncertain about what is in their power to change, and whether or how they may be ethically obligated to engage in public advocacy.

While clinical advocacy on behalf of individual patients is a long-standing core value across health professions, clinicians also have public advocacy obligations. For health professionals, one definition of public advocacy is taking actions to promote “social, economic, educational, and political changes that ameliorate suffering and contribute to human well-being” that are identified through “professional work and expertise”. Public advocacy obligations are in the Physician Charter, endorsed by 109 organisations internationally, and the American Medical Association’s Declaration of Professional Responsibility, endorsed by almost 100 US medical associations. Nearly two-thirds of US medical schools’ curricula include teaching on public advocacy skills.

Here are some thoughts:

Psychologists have an ethical duty to advocate against policies harming mental health and human rights, grounded in principles of justice and beneficence. When witnessing harm directly, possessing relevant expertise, or being positioned to create change—such as documenting trauma in marginalized groups or analyzing mental health impacts of funding cuts—advocacy becomes imperative. While fears of backlash exist, collective action through professional organizations can reduce risks. Psychologists must leverage their unique skills in behavioral science and public trust to combat misinformation and promote evidence-based policies. Advocacy isn't optional—it's core to psychology's mission of reducing suffering and upholding equity, especially amid growing threats to vulnerable populations. 

Thursday, August 14, 2025

Digital health interventions for mental health disorders: an umbrella review of meta-analyses of randomised controlled trials

Crocamo, C., et al. (2025).
The Lancet. Digital health, 100878.
Advance online publication.

Summary

Digital health interventions (DHIs) show promise for the treatment of mental health disorders. However, existing meta-analytical research is methodologically heterogeneous, with studies including a mix of clinical, non-clinical, and transdiagnostic populations, hindering a comprehensive understanding of DHI effectiveness. Thus, we conducted an umbrella review of meta-analyses of randomised controlled trials investigating the effectiveness of DHIs for specific mental health disorders and evaluating the quality of evidence. We searched three public electronic databases from inception to February, 2024 and included 16 studies. DHIs were effective compared with active interventions for schizophrenia spectrum disorders, major depressive disorder, social anxiety disorder, and panic disorder. Notable treatment effects compared with a waiting list were also observed for specific phobias, generalised anxiety disorder, obsessive-compulsive disorder, post-traumatic stress disorder, and bulimia nervosa. Certainty of evidence was rated as very low or low in most cases, except for generalised anxiety disorder-related outcomes, which showed a moderate rating. To integrate DHIs into clinical practice, further high-quality studies with clearly defined target populations and robust comparators are needed.


Here are some thoughts:

Digital Health Interventions (DHIs) show promise in treating various mental health disorders like schizophrenia spectrum disorders, major depressive disorder, social anxiety disorder, and panic disorder, with notable effects also observed for specific phobias, generalized anxiety disorder, obsessive-compulsive disorder, post-traumatic stress disorder, and bulimia nervosa when compared to waiting lists. However, this umbrella review of meta-analyses highlights that while effective, the certainty of evidence is often low to very low, primarily due to methodological heterogeneity and weaknesses in existing research. Issues include inconsistent reporting of user engagement indicators and understudied populations, particularly those with serious mental health disorders. This underscores the critical need for higher-quality, standardized studies with clearly defined target populations and robust comparators to facilitate the integration of DHIs into clinical practice.

Wednesday, August 13, 2025

Understanding the functional basis of moral conviction: Is moral conviction related to personal and social identity expression?

Novak, L. M., & Skitka, L. J. (2025).
PLoS ONE, 20(7), e0327438.

Abstract

The degree to which one experiences an attitude as a moral conviction is associated with a host of consequences, such as charitable giving, volunteerism, political engagement, resistance to compromise, intolerance of dissenting viewpoints, and acceptance of any means, including violence, to achieve morally preferred ends. Despite these profound ramifications, our understanding of the psychological functions of moral conviction remains limited. In three studies, we tested competing hypotheses about two possible functions of moral conviction: personal identity and social identity expression. Study 1 developed and validated personal and social identity function measures in a U.S. sample and provided an initial test of hypotheses (N = 320). Study 2 further validated these measures and tested whether cultural mindset moderated the relationship between identity functions and moral conviction in a U.S. sample (N = 364). Study 3 tested hypotheses cross-culturally (i.e., using U.S. and Indian samples, N = 300). The personal identity function uniquely predicted moral conviction in all three studies and across six issue domains, whereas the social identity function did not (Studies 1–3). Surprisingly, neither cultural mindset (i.e., an independent and interdependent self-construal or endorsement of the individualizing or binding moral foundations) nor culture moderated these results.

Here are some thoughts:

The article is important for psychologists because it explores the psychological functions of moral convictions, particularly how they relate to personal and social identity. By examining how moral beliefs serve value-expressive and social-adjustive needs, the research contributes to understanding the motivations behind moral behavior, collective action, and resistance to influence. It also highlights the role of moral conviction in shaping policy preferences and intergroup dynamics, offering insights into real-world issues like political activism, justice reasoning, and group-based morality. The integration of attitude function theory with moral psychology provides a framework for studying how deeply held beliefs influence individual and group behavior across cultural contexts.

Tuesday, August 12, 2025

Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois

Jeff Lagasse
Healthcare Finance
Originally posted 11 August 25

Illinois Governor J.B. Pritzker has signed into law a piece of legislation that will ban the use of artificial intelligence in delivering therapy or psychotherapy unless it's overseen by licensed clinicians. 

The Wellness and Oversight for Psychological Resources Act prohibits anyone from using AI to aid in mental health and therapeutic decision-making, while still allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals. 

The intent, said Pritzker, is to protect patients from unregulated AI products, protect the jobs of qualified behavioral health providers, and protect children from rising concerns about the use of AI chatbots in mental health services.

“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation (IDFPR). “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”


Here are some thoughts:

Illinois has enacted the Wellness and Oversight for Psychological Resources Act, banning the use of artificial intelligence in therapy or psychotherapy unless supervised by licensed clinicians. The law allows AI for administrative and support functions but prohibits its use in direct therapeutic decision-making. Officials cite patient safety, protection of professional jobs, and prevention of harmful AI-generated advice as key reasons. The Illinois Department of Financial and Professional Regulation will enforce the law, with penalties up to $10,000 for violations.

Monday, August 11, 2025

Technological folie `a deux: Feedback Loops Between AI Chatbots and Mental Illness

Dohnány, S., Kurth-Nelson, Z., et al. (2025, July 25).
arXiv.org.

Abstract

Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship in contexts of widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile we need to consider the interaction between human cognitive and emotional biases, and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.

Here are some thoughts:

AI chatbots, when used for emotional support, can create dangerous feedback loops with vulnerable users, particularly those with mental health conditions. Due to chatbot tendencies like sycophancy (agreeing with users to please them) and adaptability (learning from conversations), and human cognitive biases like confirmation bias and anthropomorphism, extended interactions can lead to a "technological folie à deux"—a shared delusion between user and machine. This dynamic risks reinforcing and amplifying maladaptive or paranoid beliefs, creating an "echo chamber of one" that isolates the user from reality. The authors warn that current AI safety measures are inadequate to address these interaction-based risks and call for urgent, coordinated action across clinicians, AI developers, and regulators to monitor, study, and mitigate these emerging public health concerns before they escalate.

Sunday, August 10, 2025

Beyond the backlash: What evidence shows about the economic impact of DEI

Coates, R. (2025).
The Conversation

Few issues in the U.S. today are as controversial as diversity, equity and inclusion – commonly referred to as DEI.

Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans and immigrants.

These movements sought to expand who gets to participate in economic, educational and civic life. DEI programs, in many ways, are their legacy.

Critics argue that DEI is antidemocratic, that it fosters ideological conformity and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy − and that attacks on DEI amount to a retreat from long-standing civil rights law.

Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy?

As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research.

The article is linked above.

Here are some thoughts:

Rooted in civil rights efforts and expanded through policies like affirmative action and immigration reform, DEI has helped increase diversity in higher education, business, and innovation. Research cited in the article shows that diverse companies perform better financially, are more innovative, and are better able to attract talent. Consumers also increasingly favor inclusive brands, as seen when Target faced a sales drop after retreating from DEI commitments. Despite these benefits, DEI faces growing political and legal opposition, with critics claiming it undermines meritocracy—though research does not support this view. At the same time, systemic inequalities persist: women earn significantly less than men over their lifetimes, and people of color continue to face barriers to employment and fair wages. The economic cost of systemic racism alone is estimated at $16 trillion since 2000.

While DEI is often framed as a corporate or political issue, it is deeply connected to psychological principles. For psychologists, understanding DEI is essential for addressing bias, identity, and social belonging. It informs work in organizational psychology by highlighting how inclusive environments improve well-being, productivity, and innovation. Psychologists also play a key role in studying the mental health impacts of exclusion, discrimination, and demographic anxiety, especially among groups who perceive DEI as a threat. Moreover, the article underscores how DEI connects to broader issues of equity and justice—areas where psychologists contribute through research, policy, and clinical practice. Ultimately, this article offers valuable insight into the intersection of economics, identity, and inclusion, making it highly relevant for psychologists seeking to promote fairness, reduce disparities, and support healthier, more inclusive communities.

Saturday, August 9, 2025

Large language models show amplified cognitive biases in moral decision-making

Cheung, V., Maier, M., & Lieder, F. (2025).
PNAS, 122(25).

Abstract

As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate or advise on people’s decisions in realistic moral dilemmas. In Study 1, we compared LLM responses to those of a representative U.S. sample (N = 285) for 22 dilemmas, including both collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost–benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited stronger omission bias than participants: They usually endorsed inaction over action. In Study 2 (N = 474, preregistered), we replicated this omission bias and documented an additional bias: Unlike humans, most LLMs were biased toward answering “no” in moral dilemmas, thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 491, preregistered), we replicated these biases in LLMs using everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning models for chatbot applications. Our findings suggest that uncritical reliance on LLMs’ moral decisions and advice could amplify human biases and introduce potentially problematic biases.

Significance

How will people’s increasing reliance on large language models (LLMs) influence their opinions about important moral and societal decisions? Our experiments demonstrate that the decisions and advice of LLMs are systematically biased against doing anything, and this bias is stronger than in humans. Moreover, we identified a bias in LLMs’ responses that has not been found in people. LLMs tend to answer “no,” thus flipping their decision/advice depending on how the question is worded. We present some evidence that suggests both biases are induced when fine-tuning LLMs for chatbot applications. These findings suggest that the uncritical reliance on LLMs could amplify and proliferate problematic biases in societal decision-making.

Here are some thoughts:

The study investigates how Large Language Models (LLMs) and humans differ in their moral decision-making, particularly focusing on cognitive biases such as omission bias and yes-no framing effects. For psychologists, understanding these biases helps clarify how both humans and artificial systems process dilemmas. This knowledge can inform theories of moral psychology by identifying whether certain biases are unique to human cognition or emerge in artificial systems trained on human data.

Psychologists are increasingly involved in interdisciplinary work related to AI ethics, particularly as it intersects with human behavior and values. The findings demonstrate that LLMs can amplify existing human cognitive biases, which raises concerns about the deployment of AI systems in domains like healthcare, criminal justice, and education where moral reasoning plays a critical role. Psychologists need to understand these dynamics to guide policies that ensure responsible AI development and mitigate risks.

Friday, August 8, 2025

Explicitly unbiased large language models still form biased associations

Bai, X., Wang, A.,  et al. (2025).
PNAS, 122(8). 

Abstract

Large language models (LLMs) can pass explicit social bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases. Measuring such implicit biases can be a challenge: As LLMs become increasingly proprietary, it may not be possible to access their embeddings and apply existing bias measures; furthermore, implicit biases are primarily a concern if they affect the actual decisions that these systems make. We address both challenges by introducing two measures: LLM Word Association Test, a prompt-based method for revealing implicit bias; and LLM Relative Decision Test, a strategy to detect subtle discrimination in contextual decisions. Both measures are based on psychological research: LLM Word Association Test adapts the Implicit Association Test, widely used to study the automatic associations between concepts held in human minds; and LLM Relative Decision Test operationalizes psychological results indicating that relative evaluations between two candidates, not absolute evaluations assessing each independently, are more diagnostic of implicit biases. Using these measures, we found pervasive stereotype biases mirroring those in society in 8 value-aligned models across 4 social categories (race, gender, religion, health) in 21 stereotypes (such as race and criminality, race and weapons, gender and science, age and negativity). These prompt-based measures draw from psychology’s long history of research into measuring stereotypes based on purely observable behavior; they expose nuanced biases in proprietary value-aligned LLMs that appear unbiased according to standard benchmarks.

Significance

Modern large language models (LLMs) are designed to align with human values. They can appear unbiased on standard benchmarks, but we find that they still show widespread stereotype biases on two psychology-inspired measures. These measures allow us to measure biases in LLMs based on just their behavior, which is necessary as these models have become increasingly proprietary. We found pervasive stereotype biases mirroring those in society in 8 value-aligned models across 4 social categories (race, gender, religion, health) in 21 stereotypes (such as race and criminality, race and weapons, gender and science, age and negativity), also demonstrating sizable effects on discriminatory decisions. Given the growing use of these models, biases in their behavior can have significant consequences for human societies.

Here are some thoughts:

This research is important to psychologists because it highlights the parallels between implicit biases in humans and those that persist in large language models (LLMs), even when these models are explicitly aligned to be unbiased. By adapting psychological tools like the Implicit Association Test (IAT) and focusing on relative decision-making tasks, the study uncovers pervasive stereotype biases in LLMs across social categories such as race, gender, religion, and health—mirroring well-documented human biases. This insight is critical for psychologists studying bias formation, transmission, and mitigation, as it suggests that similar cognitive mechanisms might underlie both human and machine biases. Moreover, the findings raise ethical concerns about how these biases might influence real-world decisions made or supported by LLMs, emphasizing the need for continued scrutiny and development of more robust alignment techniques. The research also opens new avenues for understanding how biases evolve in artificial systems, offering a unique lens through which psychologists can explore the dynamics of stereotyping and discrimination in both human and machine contexts.

Thursday, August 7, 2025

Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations

Lane, J. N., Boussioux, L., et al. (2025)
Working Paper: Harvard Business Review

Abstract

Do AI-generated narrative explanations enhance human oversight or diminish it? We investigate this question through a field experiment with 228 evaluators screening 48 early-stage innovations under three conditions: human-only, black-box AI recommendations without explanations, and narrative AI with explanatory rationales. Across 3,002 screening decisions, we uncover a human-AI oversight paradox: under the high cognitive load of rapid innovation screening, AI-generated explanations increase reliance on AI recommendations rather than strengthening human judgment, potentially reducing meaningful human oversight. Screeners assisted by AI were 19 percentage points more likely to align with AI recommendations, an effect that was strongest when the AI advised rejection. Considering in-depth expert evaluations of the solutions, we find that while both AI conditions outperformed human-only screening, narrative AI showed no quality improvements over black-box recommendations despite higher compliance rates and may actually increase rejection of high-potential solutions. These findings reveal a fundamental tension: AI assistance improves overall screening efficiency and quality, but narrative persuasiveness may inadvertently filter out transformative innovations that deviate from standard evaluation frameworks.

Here are some thoughts:

This paper is particularly important to psychologists as it delves into the intricate dynamics of human-AI collaboration, specifically examining how AI-generated narratives influence decision-making processes under high cognitive load. By investigating the psychological mechanisms behind algorithm aversion and appreciation, the study extends traditional theories of bounded rationality, offering fresh insights into how individuals rely on mental shortcuts when faced with complex evaluations. The findings reveal that while AI narratives can enhance alignment with recommendations, they paradoxically lead to cognitive substitution rather than complementarity, reducing critical evaluation of information. This has significant implications for understanding how humans process decisions in uncertain and cognitively demanding environments, especially when evaluating early-stage innovations.

Moreover, the paper sheds light on the psychological functions of narratives beyond their informational value, highlighting how persuasiveness and coherence play a role in shaping trust and decision-making. Psychologists can draw valuable insights from this research regarding how individuals use narratives to justify decisions, diffuse accountability, and reduce cognitive burden. The exploration of phenomena such as the "illusion of explanatory depth" and the elimination of beneficial cognitive friction provides a deeper understanding of how people interact with AI systems, particularly in contexts requiring subjective judgments and creativity. This work also raises critical questions about responsibility attribution, trust, and the psychological safety associated with deferring to AI recommendations, making it highly relevant to the study of human behavior in increasingly automated environments. Overall, the paper contributes significantly to the evolving discourse on human-AI interaction, offering empirical evidence that can inform psychological theories of decision-making, heuristics, and technology adoption.

Wednesday, August 6, 2025

Executives Who Used Gen AI Made Worse Predictions

Parra-Moyano, J.,  et al. (2025, July 1).
Harvard Business Review. 

Summary. 

In a recent experiment, nearly 300 executives and managers were shown recent stock prices for the chip-maker Nvidia and then asked to predict the stock’s price in a month’s time. Then, half the group was given the opportunity to ask questions of ChatGPT while the other half were allowed to consult with their peers about Nvidia’s stock. The executives who used ChatGPT became significantly more optimistic, confident, and produced worse forecasts than the group who discussed with their peers. This is likely because the authoritative voice of the AI—and the level of detail of it gave in it’s answer—produced a strong sense of assurance, unchecked by the social regulation, emotional responsiveness, and useful skepticism that caused the peer-discussion group to become more conservative in their predictions. In order to harness the benefits of AI, executives need to understand the ways it can bias their own critical thinking.

Here are some thoughts:

The key finding was counterintuitive: while AI tools have shown benefits for routine tasks and communication, they actually hindered performance when executives relied on them for complex predictions and forecasting. The study suggests this occurred because the AI's authoritative tone and detailed responses created false confidence, leading to overoptimistic assessments that were less accurate than traditional peer consultation.

For psychologists, the study highlights how AI can amplify existing cognitive biases, particularly overconfidence bias. The authoritative presentation of AI responses appears to bypass critical thinking processes, making users more certain of predictions that are actually less accurate. This demonstrates the psychology of human-AI interaction and how perceived authority can override analytical judgment.

For psychologists working in organizational settings, this research provides important insights about how AI adoption affects executive decision-making and team dynamics. It suggests that the perceived benefits of AI assistance may sometimes mask decreased decision quality.

Tuesday, August 5, 2025

Emotion recognition using wireless signals.

Zhao, M., Adib, F., & Katabi, D. (2018).
Communications of the ACM, 61(9), 91–100.

Abstract

This paper demonstrates a new technology that can infer a person's emotions from RF signals reflected off his body. EQ-Radio transmits an RF signal and analyzes its reflections off a person's body to recognize his emotional state (happy, sad, etc.). The key enabler underlying EQ-Radio is a new algorithm for extracting the individual heartbeats from the wireless signal at an accuracy comparable to on-body ECG monitors. The resulting beats are then used to compute emotion-dependent features which feed a machine-learning emotion classifier. We describe the design and implementation of EQ-Radio, and demonstrate through a user study that its emotion recognition accuracy is on par with state-of-the-art emotion recognition systems that require a person to be hooked to an ECG monitor.

Here are some thoughts:

First, if you are prone to paranoia, please stop here.

The research introduces EQ-Radio, a system developed by MIT CSAIL that uses wireless signals to detect and classify human emotions such as happiness, sadness, anger, and excitement. By analyzing subtle changes in heart rate and breathing patterns through radio frequency reflections, EQ-Radio achieves 87% accuracy in emotion classification without requiring subjects to wear sensors or act emotionally. This non-invasive, privacy-preserving method outperforms video- and audio-based emotion recognition systems and works even when people are moving or located in different rooms.

Sunday, August 3, 2025

Ethical Guidance for AI in the Professional Practice of Health Service Psychology.

American Psychological Association (2025).

Click the link above for the information.

Here is a summary:

The document emphasizes that psychologists have an ethical duty to prioritize patient safety, protect privacy, promote equity, and maintain competence when using AI. It encourages proactive engagement in AI policy discussions and interdisciplinary collaboration to ensure responsible implementation.

The guidance was developed by APA's Mental Health Technology Advisory Committee in January 2025 and is aligned with fundamental ethical principles including beneficence, integrity, justice, and respect for human dignity.