Resource Pages

Wednesday, April 30, 2025

Politics makes bastards of us all: Why moral judgment is politically situational

Hull, K., Warren, C., & Smith, K. (2024).
Political Psychology, 45(6), 1013–1029.

Abstract

Moral judgment is politically situational—people are more forgiving of transgressive copartisans and more likely to behave punitively and unethically toward political opponents. Such differences are widely observed, but not fully explained. If moral values are nonnegotiable first-principle beliefs about right and wrong, why do similar transgressions elicit different moral judgment in the personal and political realm? We argue this pattern arises from the same forces intuitionist frameworks of moral psychology use to explain the origins of morality: the adaptive need to suppress individual behavior to ensure ingroup success. We hypothesize that ingroups serve as moral boundaries and that the relatively tight constraints morality exerts over ingroup relations loosen in competitive group environments because doing so also serves ingroup interests. We find support for this hypothesis in four independent samples and also find that group antipathy—internalized dislike of the outgroup—pushes personal and political moral boundaries farther apart.


Here are some thoughts:

This research explores why moral judgments differ between personal and political contexts. The authors argue that moral flexibility in politics arises from the adaptive function of morality: to promote ingroup success. Ingroup loyalty loosens moral constraints when groups are in competition. The study also finds that disliking the opposing political group amplifies this effect.

This study offers psychologists a deeper understanding of moral flexibility and political behavior. It explains how group dynamics and intergroup conflict influence moral judgment, highlighting the situational nature of morality. It also links moral psychology with political science by examining how political affiliations and antipathies shape moral judgments.

Tuesday, April 29, 2025

Why the Mystery of Consciousness Is Deeper Than We Thought

Philip Goff
Scientific American
Originally published 3 July 2024

Here is an excerpt:

The hard problem comes after we’ve explained all of these functions of the brain, where we are still left with a puzzle: Why is the carrying out of these functions accompanied by experience? Why doesn’t all this mechanistic functioning go on “in the dark”? In my own work, I have argued that the hard problem is rooted in the way that the “father of modern science,” Galileo, designed physical science to exclude consciousness.

Chalmers made the quandary vivid by promoting the idea of a “philosophical zombie,” a complicated mechanism set up to behave exactly like a human being and with the same information processing in its brain, but with no consciousness. You stick a knife in such a zombie, and it screams and runs away. But it doesn’t actually feel pain. When a philosophical zombie crosses the street, it carefully checks that there is no traffic, but it doesn’t actually have any visual or auditory experience of the street.

Nobody thinks zombies are real, but they offer a vivid way of working out where you stand on the hard problem. Those on Team Chalmers believe that if all there was to a human being were the mechanistic processes of physical science, we’d all be zombies. Given that we’re not zombies, there must be something more going on in us to explain our consciousness. Solving the hard problem is then a matter of working out the extra ingredient, with one increasingly popular option being to posit very rudimentary forms of consciousness at the level of fundamental particles or fields.

For the opposing team, such as the late, great philosopher Daniel Dennett, this division between feeling and behavior makes no sense. The only task for a science of consciousness is explaining behavior, not just the external behavior of the organism but also that of its inner parts. This debate has rattled on for decades.


Here are some thoughts:

The author discusses the "hard problem of consciousness," a concept introduced by philosopher David Chalmers in the 1990s. The hard problem refers to the difficulty of explaining why the brain's functions are accompanied by subjective experience, rather than occurring without any experience at all.

The author uses the idea of "philosophical zombies" (beings that behave like humans but lack consciousness) and "pain-pleasure inverts" (beings that feel pleasure when we feel pain, and vice versa) to illustrate the complexity of this problem.

This is important for psychologists because it highlights the deep mystery surrounding consciousness and suggests that explaining behavior is not enough; we also need to understand subjective experience. It also challenges some basic assumptions about why we behave the way we do and points to the perplexing "mystery of psychophysical harmony": why our behavior and consciousness align in a coherent way.

Monday, April 28, 2025

Eugenics is on the rise again: human geneticists must take a stand

Wojcik, G. L. (2025).
Nature, 641(8061), 37–38.

In 1924, motivated by the rising eugenics movement, the United States passed the Johnson–Reed Act, which limited immigration to stem “a stream of alien blood, with all its inherited misconceptions”. A century later, at a campaign event last October, now US President Donald Trump used similar eugenic language to justify his proposed immigration policies, stating that “we got a lot of bad genes in our country right now”.

If left unchallenged, a rising wave of white nationalism in many parts of the globe could threaten the progress that has been made in science — and broader society — towards a more equitable world.

As scientists and members of the public, we must push back against this threat — by modifying approaches to genetics education, advocating for science, establishing and leading diverse research teams and ensuring that studies embrace and build on the insights obtained about human variation.


Here are some thoughts:

The article raises significant moral and ethical concerns regarding the renewed emergence of eugenic ideologies. It highlights how certain political figures and movements are reviving rhetoric that promotes the idea of genetic superiority, posing a profound moral threat by devaluing human diversity and encouraging discrimination.

A major ethical concern discussed is the misuse of genetic research to support racist or nationalist agendas, which not only distorts the true intentions of scientific inquiry but also risks eroding public trust in science itself. The article emphasizes that scientists have an ethical duty to ensure their work is not co-opted for harmful purposes and calls on them to take a public stand against these misrepresentations. 

Furthermore, it underscores the importance of promoting diversity and inclusion within research, noting that race is a social construct rather than a strict biological reality. Ethically, inclusive research practices are necessary to ensure that scientific advances benefit all people, rather than reinforcing existing social inequalities. Overall, the article serves as a powerful call for the scientific community to uphold its ethical responsibilities by actively opposing the misuse of genetics, advocating for accurate public understanding, and fostering diversity in both research and society.

Sunday, April 27, 2025

Intelligent Choices Reshape Decision-Making and Productivity

Schrage, M., & Kiron, D. 
(2024, October 29).
MIT Sloan Management Review

Better choices enable better decisions.

Profitably thriving through market disruptions demands that executives recognize that better decisions aren’t enough — they need better choices. Choices are the raw material of decision-making; without diverse, detailed, and high-quality options, even the best decision-making processes underperform. Traditional dashboards and scorecards defined by legacy accounting and compliance imperatives reliably measure progress but can’t generate the insights or foresight needed to create superior choices. They weren’t designed for that.

Generative AI and predictive systems are. They can surface hidden options, highlight overlooked interdependencies, and suggest novel pathways to success. These intelligent systems and agents don’t just support better decisions — they inspire them. As greater speed to market and adaptability rule, AI-enhanced measurement systems increasingly enable executives to better anticipate, adapt to, and outmaneuver the competition. Our research offers compelling evidence that predictive and generative AI systems can be trained to provide better choices, not just better decisions.

Machine-designed choices can — and should — empower their human counterparts. As Anjali Bhagra, physician lead and chair of the Automation Hub at Mayo Clinic, explains, “Fundamentally, what we are doing at the core, whether it’s AI, automation, or other innovative technologies, is enabling our teams to solve problems and minimize the friction within health care delivery. Our initiatives are designed by people, for the people.”

Leaders, managers, and associates at all levels can use intelligent systems — rooted in sophisticated data analysis, synthesis, and pattern recognition — to cocreate intelligent choice architectures that prompt better options that in turn lead to better decisions that deliver better outcomes. Coined by Nobel Prize-winning economist Richard Thaler and legal scholar Cass Sunstein in their book, Nudge: Improving Decisions About Health, Wealth, and Happiness, the term choice architectures refers to the practice of influencing a choice by intentionally “organizing the context in which people make decisions.”


Here are some thoughts summarizing the article:

Artificial intelligence is fundamentally reshaping organizational decision-making and productivity by moving beyond simple automation to create "intelligent choice architectures." These AI-driven systems are capable of revealing previously unseen options, highlighting complex interdependencies, and suggesting novel pathways to achieve organizational goals. This results in improved decision-making through personalized environments, accurate outcome predictions, and effective complexity management, impacting both strategic and operational decisions. However, the ethical implications of AI are paramount, necessitating systems that are explainable, interpretable, and transparent. Ultimately, AI is redefining productivity by shifting the focus from mere outputs to meaningful outcomes, leading to significant changes in organizational design and the distribution of decision-making authority.

Saturday, April 26, 2025

Culture points the moral compass: Shared basis of culture and morality

Matsuo, A., & Brown, C. M. (2022).
Culture and Brain, 10(2), 113–139.

Abstract

The present work reviews moral judgment from the perspective of culture. Culture is a dynamic system of human beings interacting with their environment, and morality is both a product of this system and a means of maintaining it. When members of a culture engage in moral judgment, they communicate their “social morality” and gain a reputation as a productive member who contributes to the culture’s prosperity. People in different cultures emphasize different moral domains, which is often understood through the individualism-collectivism distinction that is widely utilized in cultural psychology. However, traditional morality research lacks the interactive perspective of culture, where people communicate with shared beliefs about what is good or bad. As a consequence, past work has had numerous limitations and even potential confounds created by methodologies that are grounded in the perspective of WEIRD (i.e., Western, Educated, Industrialized, Rich and Democratic) cultures. Great attention should be paid to the possibly misleading assumption that researchers and participants share the same understanding of the stimuli. We must address this bias in sampling and in the minds of researchers and better clarify the concept of culture in intercultural morality research. The theoretical and practical findings from research on culture can then contribute to a better understanding of the mechanisms of moral judgment.

The article is paywalled, so I have provided a longer summary. Here it is:

This article discusses moral judgment from a cultural perspective. The authors argue that morality is a product of culture and helps to maintain it. They claim that people from different cultures emphasize different moral domains, which is often understood using the individualism-collectivism distinction. The authors also suggest that traditional morality research lacks an interactive perspective of culture, where people communicate shared beliefs about what is good or bad, and that this past research has had limitations and potential confounds due to methodologies that are grounded in WEIRD cultures.    

The authors discuss theories of moral judgment, including Lawrence Kohlberg’s theory of stages of moral development, the social intuitionist model, and moral pluralism. They claim that moral judgment is a complex process involving self-recognition, social cognition, and decision-making and that the brain is designed to process multiple moralities in different ways. They also explore the social function of morality, stating that behaving morally according to the standards of one’s group helps people be included in the group, and moral norms are used to identify desirable and undesirable group membership.    

In a significant part of the article, the authors discuss the concept of culture, defining it as a structured system of making sense of the environment, which shapes individuals in order to fit into their environment. They explain that the need to belong is a basic human motivation, and people form groups as a means of survival and reproduction. Norms applied to a particular group regulate group members’ behaviors, and culture emerges from these norms. The authors use the individualism-collectivism dimension, a common concept in cultural psychology, to explain how people from different cultures perceive and interpret the world in different ways. They claim that culture is a dynamic interaction between humans and their environment and that moral judgment achieves its social function because people assume that ingroup members share common representations of what is right or wrong. 

Friday, April 25, 2025

Digital mental health: challenges and next steps

Smith, K. A., et al. (2023).
BMJ Mental Health, 26(1), e300670.

Abstract

Digital innovations in mental health offer great potential, but present unique challenges. Using a consensus development panel approach, an expert, international, cross-disciplinary panel met to provide a framework to conceptualise digital mental health innovations, research into mechanisms and effectiveness and approaches for clinical implementation. Key questions and outputs from the group were agreed by consensus, and are presented and discussed in the text and supported by case examples in an accompanying appendix. A number of key themes emerged. (1) Digital approaches may work best across traditional diagnostic systems: we do not have effective ontologies of mental illness and transdiagnostic/symptom-based approaches may be more fruitful. (2) Approaches in clinical implementation of digital tools/interventions need to be creative and require organisational change: not only do clinicians and patients need training and education to be more confident and skilled in using digital technologies to support shared care decision-making, but traditional roles need to be extended, with clinicians working alongside digital navigators and non-clinicians who are delivering protocolised treatments. (3) Designing appropriate studies to measure the effectiveness of implementation is also key: including digital data raises unique ethical issues, and measurement of potential harms is only just beginning. (4) Accessibility and codesign are needed to ensure innovations are long lasting. (5) Standardised guidelines for reporting would ensure effective synthesis of the evidence to inform clinical implementation. COVID-19 and the transition to virtual consultations have shown us the potential for digital innovations to improve access and quality of care in mental health: now is the ideal time to act.

Here are some thoughts:

This article discusses the challenges and potential advancements in the field of digital mental health. It emphasizes the significant potential of digital innovations to transform mental healthcare while also acknowledging the unique challenges that come with their implementation. The authors used a consensus development panel approach to establish a framework that addresses the conceptualization, research, and clinical application of digital mental health innovations. This framework highlights several key themes, including the need for transdiagnostic approaches, creative clinical implementation strategies, appropriate effectiveness measurement, accessibility and codesign considerations, and standardized reporting guidelines. The article concludes by acknowledging the transformative potential of digital innovations in improving access and quality of mental healthcare, particularly in light of the lessons learned during the COVID-19 pandemic.

Thursday, April 24, 2025

Laws, Risk Management, and Ethical Principles When Working With Suicidal Patients

Knapp, S. (2024).
Professional Psychology: Research and Practice, 55(1), 1–10.

Abstract

Working with a suicidal patient is a high-risk enterprise for the patient who might die from suicide, the patient’s family who might lose a loved one, and the psychologist who is likely to feel extreme grief or fear of legal liability after the suicide of a patient. To minimize the likelihood of such patient deaths, psychologists must ensure that they know and follow the relevant laws dealing with suicidal patients, rely on risk management strategies that anticipate and address problems in treatment early, and use overarching ethical principles to guide their clinical decisions. This article looks at the roles of laws, risk management strategies, and ethical principles; how they interact; and how a proper understanding of them can improve the quality of patient care while protecting psychologists from legal liability.

Impact Statement

This article describes how understanding the roles and interactions of laws, risk management principles, and ethics can help psychotherapists improve the quality of their services to suicidal patients.

Here are some thoughts:

This article discusses the importance of understanding the roles and interactions of laws, risk management principles, and ethics when working with suicidal patients. It emphasizes how a proper understanding of these factors can improve the quality of patient care and protect psychologists from legal liability.

The article is important for psychologists because it provides guidance on navigating the complexities of treating suicidal patients. It offers insights into:
  • Legal Considerations: Psychologists must be aware of and adhere to the laws governing psychological practice, including licensing laws, regulations of state and territorial boards of psychology, and other federal and state laws.
  • Risk Management Strategies: The article highlights the importance of risk management strategies in anticipating problems, preventing misunderstandings, addressing issues early in treatment, and mitigating harm. It also warns against false risk management strategies that prioritize self-protection over patient well-being, such as refusing to treat suicidal patients or relying on no-suicide contracts.
  • Ethical Principles: The article underscores the importance of ethical principles in guiding clinical decisions, justifying laws and risk management strategies, and resolving conflicts between ethical principles. It discusses the need to balance beneficence and respect for patient autonomy in various situations, such as involuntary hospitalization, red flag laws, welfare checks, and involving third parties in psychotherapy.

In summary, this article offers valuable guidance for psychologists working with suicidal patients, helping them navigate the legal, ethical, and risk management challenges of this high-risk area of practice.

Wednesday, April 23, 2025

Values in the wild: Discovering and analyzing values in real-world language model interactions

Huang, S., Durmus, E., et al. (n.d.).

Abstract

AI assistants can impart value judgments that shape people’s decisions and worldviews, yet little is known empirically about what values these systems rely on in practice. To address this, we develop a bottom-up, privacy-preserving method to extract the values (normative considerations stated or demonstrated in model responses) that Claude 3 and 3.5 models exhibit in hundreds of thousands of real-world interactions. We empirically discover and taxonomize 3,307 AI values and study how they vary by context. We find that Claude expresses many practical and epistemic values, and typically supports prosocial human values while resisting values like “moral nihilism”. While some values appear consistently across contexts (e.g. “transparency”), many are more specialized and context-dependent, reflecting the diversity of human interlocutors and their varied contexts. For example, “harm prevention” emerges when Claude resists users, “historical accuracy” when responding to queries about controversial events, “healthy boundaries” when asked for relationship advice, and “human agency” in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, our work creates a foundation for more grounded evaluation and design of values in AI systems.
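The abstract's distinction between values that appear consistently across contexts and values that are specialized to one context can be sketched in a few lines. The mini-dataset and grouping logic below are invented for illustration only; the study's actual pipeline is privacy-preserving and classifies hundreds of thousands of real conversations, not a hand-written list.

```python
from collections import defaultdict

# Hypothetical (context, expressed value) observations, echoing the
# examples given in the abstract.
observations = [
    ("relationship advice", "healthy boundaries"),
    ("relationship advice", "transparency"),
    ("technology ethics", "human agency"),
    ("technology ethics", "transparency"),
    ("controversial events", "historical accuracy"),
]

# Record the set of contexts in which each value appears.
value_contexts = defaultdict(set)
for context, value in observations:
    value_contexts[value].add(context)

# Values seen in more than one context are "consistent"; the rest are
# specialized to a single context.
consistent = {v for v, cs in value_contexts.items() if len(cs) > 1}
specialized = {v for v, cs in value_contexts.items() if len(cs) == 1}

assert consistent == {"transparency"}
assert specialized == {"healthy boundaries", "human agency",
                       "historical accuracy"}
```

At the paper's scale this bottom-up tallying is what lets context-specific values like "healthy boundaries" surface without anyone specifying them in advance.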


Here are some thoughts:

For psychologists, this research is highly relevant. First, it sheds light on how AI can shape human cognition, particularly in terms of how people interpret advice, support, or information framed through value-laden language. As individuals increasingly interact with AI systems in therapeutic, educational, or everyday contexts, psychologists must understand how these systems can influence moral reasoning, decision-making, and emotional well-being. Second, the study emphasizes the context-dependent nature of value expression in AI, which opens up opportunities for research into how humans respond to AI cues and how trust or rapport might be developed (or undermined) through these interactions. Third, this work highlights ethical concerns: ensuring that AI systems do not inadvertently promote harmful values is an area where psychologists—especially those involved in ethics, social behavior, or therapeutic practice—can offer critical guidance. Finally, the study’s methodological approach to extracting and classifying values may offer psychologists a model for analyzing human communication patterns, enriching both theoretical and applied psychological research.

In short, Anthropic’s research provides psychologists with an important lens on the emerging dynamics between human values and machine behavior. It highlights both the promise and responsibility of ensuring AI systems promote human dignity, safety, and psychological well-being.

Tuesday, April 22, 2025

Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI

Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022).
Journal of Business Ethics, 178(4), 1027–1041.

Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Here are some thoughts:

If you watched the TV series Westworld on HBO, then this research makes a great deal more sense.

This study investigates how individuals morally behave toward AI agents and self-service machines, specifically examining individuals' moral concerns and behaviors when interacting with technology versus humans in a retail setting. The research demonstrates that moral intention, such as the intention to report an error, is less likely to arise for AI checkout and self-checkout machines compared with human checkout scenarios. Furthermore, the study reveals that moral intention decreases as people perceive the machine to be less humanlike. This decline in morality is attributed to reduced guilt displayed toward these new technologies. Essentially, the non-human nature of the interaction evokes a decreased feeling of guilt, which ultimately leads to diminished moral behavior. These findings provide valuable insights into how technological advancements influence consumer behaviors and offer guidance for businesses and retailers in understanding moral intentions within various shopping environments.

These findings carry several important implications for psychologists. They underscore the nuanced ways in which technology shapes human morality and ethical decision-making. The research suggests that the perceived "humanness" of an entity, whether it's a human or an AI, significantly influences the elicitation of moral behavior. This has implications for understanding social cognition, anthropomorphism, and how individuals form relationships with non-human entities. Additionally, the role of guilt in moral behavior is further emphasized, providing insights into the emotional and cognitive processes that underlie ethical conduct. Finally, these findings can inform the development of interventions or strategies aimed at promoting ethical behavior in technology-mediated interactions, a consideration that is increasingly relevant in a world characterized by the growing prevalence of AI and automation.

Monday, April 21, 2025

Human Morality Is Based on an Early-Emerging Moral Core

Woo, B. M., Tan, E., & Hamlin, J. K. (2022).
Annual Review of Developmental Psychology, 4(1), 41–61.

Abstract

Scholars from across the social sciences, biological sciences, and humanities have long emphasized the role of human morality in supporting cooperation. How does morality arise in human development? One possibility is that morality is acquired through years of socialization and active learning. Alternatively, morality may instead be based on a “moral core”: primitive abilities that emerge in infancy to make sense of morally relevant behaviors. Here, we review evidence that infants and toddlers understand a variety of morally relevant behaviors and readily evaluate agents who engage in them. These abilities appear to be rooted in the goals and intentions driving agents’ morally relevant behaviors and are sensitive to group membership. This evidence is consistent with a moral core, which may support later social and moral development and ultimately be leveraged for human cooperation.

Here are some thoughts:

This article explores the origins of human morality, suggesting it's rooted in an early-emerging moral core rather than solely acquired through socialization and learning. The research reviewed indicates that even infants and toddlers demonstrate an understanding of morally relevant behaviors, evaluating agents based on their actions. This understanding is linked to the goals and intentions behind these behaviors and is influenced by group membership.

This study of morality is important for psychologists because morality is a fundamental aspect of human behavior and social interactions. Understanding how morality develops can provide insights into various psychological processes, such as social cognition, decision-making, and interpersonal relationships. The evidence supporting a moral core in infancy suggests that some aspects of morality may be innate, challenging traditional views that morality is solely a product of learning and socialization. This perspective can inform interventions aimed at promoting prosocial behavior and preventing antisocial behavior. Furthermore, understanding the early foundations of morality can help psychologists better understand the development of moral reasoning and judgment across the lifespan.

Sunday, April 20, 2025

Confidence in Moral Decision-Making

Schooler, L., et al. (2024).
Collabra Psychology, 10(1).

Abstract

Moral decision-making typically involves trade-offs between moral values and self-interest. While previous research on the psychological mechanisms underlying moral decision-making has primarily focused on what people choose, less is known about how an individual consciously evaluates the choices they make. This sense of having made the right decision is known as subjective confidence. We investigated how subjective confidence is constructed across two moral contexts. In Study 1 (240 U.S. participants from Amazon Mechanical Turk, 81 female), participants made hypothetical decisions between choices with monetary profits for themselves and physical harm for either themselves or another person. In Study 2 (369 U.S. participants from Prolific, 176 female), participants made incentive-compatible decisions between choices with monetary profits for themselves and monetary harm for either themselves or another person. In both studies, each choice was followed by a subjective confidence rating. We used a computational model to obtain a trial-by-trial measure of participant-specific subjective value in decision-making and related this to subjective confidence ratings. Across all types of decisions, confidence was positively associated with the absolute difference in subjective value between the two options. Specific to the moral decision-making context, choices that are typically seen as more blameworthy – i.e., causing more harm to an innocent person to benefit oneself – suppressed the effects of increasing profit on confidence, while amplifying the dampening effect of harm on confidence. These results illustrate some potential cognitive mechanisms underlying subjective confidence in moral decision-making and highlighted both shared and distinct cognitive features relative to non-moral value-based decision-making.
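The abstract's central computational claim, that confidence tracks the absolute difference in subjective value between the two options, can be illustrated with a toy model. The linear utility form and the harm weight below are hypothetical choices made purely for illustration; they are not the authors' fitted model.

```python
# Toy sketch: confidence rises with the absolute subjective-value gap.
# Each option is a (profit to self, harm caused) pair; the utility form
# and harm_weight are invented for illustration.

def subjective_value(profit, harm, harm_weight=2.0):
    """Simple linear utility: money gained minus weighted harm caused."""
    return profit - harm_weight * harm

def confidence_proxy(option_a, option_b, harm_weight=2.0):
    """Model confidence as the absolute gap in subjective value."""
    va = subjective_value(*option_a, harm_weight)
    vb = subjective_value(*option_b, harm_weight)
    return abs(va - vb)

# An easy choice (large value gap) vs. a hard one (small gap):
easy = confidence_proxy((10, 0), (1, 5))   # |10 - (-9)| = 19.0
hard = confidence_proxy((5, 2), (4, 1))    # |1 - 2| = 1.0
assert easy > hard
```

In this sketch, the closer the two options are in subjective value, the lower the predicted confidence, matching the abstract's finding; the paper's further result, that blameworthy choices suppress the boost profit gives to confidence, would require additional moral-context terms not modeled here.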

Here are some thoughts:

The article explores how individuals form a sense of confidence in their moral choices, particularly in situations involving trade-offs between personal gain and causing harm. Rather than focusing solely on what people choose, the research delves into how confident people feel about the decisions they make—what is known as subjective confidence. Importantly, this confidence is not only influenced by the perceived value of the options but also by the moral implications of the choice itself. When people make decisions that benefit themselves at the expense of others, particularly when the action is considered morally blameworthy, their sense of confidence tends to decrease. Conversely, decisions that are morally neutral or praiseworthy are associated with greater subjective certainty. In this way, the moral weight of a decision appears to shape how individuals internally evaluate the quality of their choices.

For mental health professionals, these findings carry significant implications. Understanding how confidence is constructed in the context of moral decision-making can deepen insight into clients’ struggles with guilt, shame, indecision, and moral injury. Often, clients question not just what they did, but whether they made the "right" decision—morally and personally. This research highlights that moral self-evaluation is complex and sensitive to both the outcomes and the perceived ethical nature of one’s actions. It also suggests that people are more confident in decisions that affect themselves than those that impact others, which may help explain patterns of self-doubt or moral rumination in therapy. Additionally, for clinicians themselves—who frequently navigate ethically ambiguous situations—recognizing how subjective confidence is shaped by moral context can support reflective practice, supervision, and ethical decision-making. Ultimately, this research adds depth to our understanding of how people process and live with the choices they make, and how these internal evaluations may guide future behavior and psychological well-being.
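The value-difference account described in the abstract can be made concrete with a toy model. The functional form, parameter values, and helper names below are illustrative assumptions, not the authors' fitted computational model:

```python
# Toy sketch of the value-difference account of confidence: each option's
# subjective value (SV) trades off monetary profit against harm, and
# confidence rises with the absolute SV gap between the two options.
# Weights and the squashing step are assumptions for illustration.

def subjective_value(profit, harm, harm_weight=0.6):
    """Toy SV: profit benefit minus a weighted harm cost (weight is assumed)."""
    return profit - harm_weight * harm

def predicted_confidence(option_a, option_b, slope=0.8, baseline=0.1):
    """Confidence increases with |SV(A) - SV(B)|; parameters are illustrative."""
    sv_a = subjective_value(*option_a)
    sv_b = subjective_value(*option_b)
    delta = abs(sv_a - sv_b)
    # cap at 1.0 so the output reads like a confidence rating
    return min(baseline + slope * delta, 1.0)
```

On this sketch, an "easy" choice (one option clearly dominates) yields higher predicted confidence than a "hard" one (near-equal SVs), matching the paper's core finding; the blame-related suppression effects would require additional terms not modeled here.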

Saturday, April 19, 2025

Morality in social media: A scoping review

Neumann, D., & Rhodes, N. (2023).
New Media & Society, 26(2), 1096-1126.
(Original work published 2024)

Abstract

Social media platforms have been adopted rapidly into our current culture and affect nearly all areas of our everyday lives. Their prevalence has raised questions about the influence of new communication technologies on moral reasoning, judgments, and behaviors. The present scoping review identified 80 articles providing an overview of scholarly work conducted on morality in social media. Screening for research that explicitly addressed moral questions, the authors found that research in this area tends to be atheoretical, US-based, quantitative, cross-sectional survey research in business, psychology, and communication journals. Findings suggested a need for increased theoretical contributions. The authors identified new developments in research analysis, including text scraping and machine coding, which may contribute to theory development. In addition, diversity across disciplines allows for a broad picture in this research domain, but more interdisciplinarity might be needed to foster creative approaches to this study area.

Here are some thoughts:

This article is a scoping review that analyzes 80 articles focusing on morality in social media. The review aims to give researchers in different fields an overview of current research. The authors found that research in this area is generally atheoretical, conducted in the US, uses quantitative methods, and is published in business, psychology, and communication journals. The review also pointed out new methods of research analysis, like text scraping and machine coding, which could help in developing theories.

Social media has rapidly become a major part of our culture, impacting almost every aspect of daily life. It provides digital spaces where people can learn socially by watching and judging the moral behaviors of others. The easy access to information about moral and immoral actions through social media can significantly influence users' moral behaviors, judgments, reasoning, emotions, and self-views. It's vital for psychologists to understand how social media affects moral reasoning, judgments, and behaviors. This understanding is key to addressing any negative impacts of social media, especially on young people, and to creating strategies that encourage positive online behavior.
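The "machine coding" methods the review highlights can be sketched in miniature. A common approach is dictionary-based coding of moral language in scraped posts, loosely in the spirit of Moral Foundations Dictionary methods; the word lists and function names below are illustrative assumptions, not the actual dictionary:

```python
# Hedged sketch of dictionary-based moral-language coding for social media
# text: count cue words per moral foundation in each post. The cue sets
# here are tiny, made-up examples, not a validated dictionary.

MORAL_CUES = {
    "care": {"harm", "hurt", "protect", "suffer"},
    "fairness": {"fair", "unfair", "cheat", "justice"},
    "loyalty": {"betray", "loyal", "traitor", "solidarity"},
}

def code_post(text):
    """Count cue words per foundation in one post (case-insensitive)."""
    tokens = [token.strip(".,!?\"'") for token in text.lower().split()]
    return {
        foundation: sum(token in cues for token in tokens)
        for foundation, cues in MORAL_CUES.items()
    }
```

Real studies would scrape posts via a platform API, use validated dictionaries or supervised classifiers, and handle stemming and negation, but the counting logic is the same.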

Friday, April 18, 2025

A systematic review of research on empathy in health care.

Nembhard, I. M., et al. (2023).
Health services research, 58(2), 250–263.

Abstract

Objective
To summarize the predictors and outcomes of empathy by health care personnel, methods used to study their empathy, and the effectiveness of interventions targeting their empathy, in order to advance understanding of the role of empathy in health care and facilitate additional research aimed at increasing positive patient care experiences and outcomes.

Data Source
We searched MEDLINE, MEDLINE In‐Process, PsycInfo, and Business Source Complete to identify empirical studies of empathy involving health care personnel in English‐language publications up until April 20, 2021, covering the first five decades of research on empathy in health care (1971–2021).

Study Design
We performed a systematic review in accordance with Preferred Reporting Items for Systematic Reviews and Meta‐Analysis (PRISMA) guidelines.

Data Collection/Extraction Methods
Title and abstract screening for study eligibility was followed by full‐text screening of relevant citations to extract study information (e.g., study design, sample size, empathy measure used, empathy assessor, intervention type if applicable, other variables evaluated, results, and significance). We classified study predictors and outcomes into categories, calculated descriptive statistics, and produced tables to summarize findings.

Principal Findings
Of the 2270 articles screened, 455 reporting on 470 analyses satisfied the inclusion criteria. We found that most studies have been survey‐based, cross‐sectional examinations; greater empathy is associated with better clinical outcomes and patient care experiences; and empathy predictors are many and fall into five categories (provider demographics, provider characteristics, provider behavior during interactions, target characteristics, and organizational context). Of the 128 intervention studies, 103 (80%) found a positive and significant effect. With four exceptions, interventions were educational programs focused on individual clinicians or trainees. No organizational‐level interventions (e.g., empathy‐specific processes or roles) were identified.

Conclusions
Empirical research provides evidence of the importance of empathy to health care outcomes and identifies multiple changeable predictors of empathy. Training can improve individuals' empathy; organizational‐level interventions for systematic improvement are lacking.


Here are some thoughts:

The systematic review explores the significance of empathy in health care, analyzing its predictors, outcomes, and interventions to enhance it among health care professionals. The review, which spans 455 studies from 1971 to 2021, reveals that empathy is predominantly studied through cross-sectional, survey-based methods, with a focus on physicians, medical students, and nurses. Empathy is positively linked to better clinical outcomes, patient experiences, and provider performance, including improved adherence to treatment plans and reduced burnout. Key predictors of empathy include provider demographics, characteristics like personality traits and well-being, and behaviors such as communication skills. Educational interventions, particularly training programs and workshops, have proven effective in boosting empathy levels, though organizational-level interventions remain underexplored.

Thursday, April 17, 2025

How do clinical psychologists make ethical decisions? A systematic review of empirical research

Grace, B., Wainwright, T., et al. (2020). 
Clinical Ethics, 15(4), 213–224.

Abstract

Given the nature of the discipline, it might be assumed that clinical psychology is an ethical profession, within which effective ethical decision-making is integral. How then, does this ethical decision-making occur? This paper describes a systematic review of empirical research addressing this question. The paucity of evidence related to this question meant that the scope was broadened to include other professions who deliver talking therapies. This review could support reflective practice about what may be taken into account when making ethical decisions and highlight areas for future research. Using academic search databases, original research articles were identified from peer-reviewed journals. Articles using qualitative (n = 3), quantitative (n = 8) and mixed methods (n = 2) were included. Two theoretical models of aspects of ethical decision-making were identified. Areas of agreement and debate are described in relation to factors linked to the professional, which impacted ethical decision-making. Factors relating to ethical dilemmas, which impacted ethical decision-making, are discussed. Articles were appraised by two independent raters, using quality assessment criteria, which suggested areas of methodological strengths and weaknesses. Comparison and synthesis of results revealed that the research did not generally pertain to current clinical practice of talking therapies or the particular socio-political context of the UK healthcare system. There was limited research into ethical decision-making amongst specific professions, including clinical psychology. Generalisability was limited due to methodological issues, indicating avenues for future research.

Here are some thoughts:

This article is a systematic review of empirical research on how clinical psychologists and related professionals make ethical decisions. The review addresses the question of how professionals who deliver psychotherapy make ethical decisions related to their work. The authors searched academic databases for original research articles from peer-reviewed journals and included qualitative, quantitative, and mixed-methods studies. The review identified two theoretical models of ethical decision-making and discussed factors related to the professional and ethical dilemmas that impact decision-making. The authors found that the research did not generally pertain to current clinical practice or the socio-political context of the UK healthcare system and that there was limited research into ethical decision-making among specific professions, including clinical psychology. The authors suggest that there is a need for further up-to-date, profession-specific, mixed-methods research in this area.

Wednesday, April 16, 2025

How is clinical ethics reasoning done in practice? A review of the empirical literature

Feldman, S., Gillam, L., McDougall, R. J., 
& Delany, C. (2025).
Journal of Medical Ethics, jme-110569. 

Abstract

Background Clinical ethics reasoning is one of the unique contributions of clinical ethicists to healthcare, and is common to all models of clinical ethics support and methods of case analysis. Despite being a fundamental aspect of clinical ethics practice, the phenomenon of clinical ethics reasoning is not well understood. There are no formal definitions or models of clinical ethics reasoning, and it is unclear whether there is a shared understanding of this phenomenon among those who perform and encounter it.

Methods A scoping review of empirical literature was conducted across four databases in July 2024 to capture papers that shed light on how clinical ethicists undertake or facilitate clinical ethics reasoning in practice in individual patient cases. The review process was guided by the Arksey and O’Malley framework for scoping reviews.

Results 16 publications were included in this review. These publications reveal four thinking strategies used to advance ethical thinking, and three strategies for resolving clinical ethics challenges in individual patient cases. The literature also highlights a number of other influences on clinical ethics reasoning in practice.

Conclusion While this review has allowed us to start sketching the outlines of an account of clinical ethics reasoning in practice, the body of relevant literature is limited in quantity and in specificity. Further work is needed to better understand and evaluate the complex phenomenon of clinical ethics reasoning as it is done in clinical ethics practice.

The article is, unfortunately, paywalled. Follow the link above and contact the main author.

Here are some thoughts:

This scoping review examined how clinical ethicists undertake or facilitate clinical ethics reasoning in practice, focusing on individual patient cases.  The review identified four thinking strategies used to advance ethical thinking: consideration of ethical values, principles, and concepts; consideration of empirical evidence; imaginative identification; and risk/benefit analyses.  Three strategies for resolving clinical ethics challenges were also identified: time-limited trial, integrating patient values and clinical information, and perspective gathering.  Other factors influencing clinical ethics reasoning included intuition, emotion, power imbalances, and the professional background of the ethicist.  The authors highlight that the literature on clinical ethics reasoning is limited and further research is needed to fully understand this complex phenomenon.

Tuesday, April 15, 2025

Zero Suicide Model Implementation and Suicide Attempt Rates in Outpatient Mental Health Care

Ahmedani, B. K., et al. (2025).
JAMA Network Open, 8(4), e253721.

Key Points

Question  Is implementation of the Zero Suicide model in outpatient mental health care associated with reductions in suicide attempts?

Findings  This quality improvement study of 55 354 to 451 837 individuals per month aged 13 years or older found that implementation of the Zero Suicide model was associated with a reduction in suicide attempt rates in 3 of 4 health systems, while the fourth system experienced a lower sustained rate. Two systems that implemented the model before the observation period maintained low or declining rates.

Meaning  Findings from this study support implementation of the Zero Suicide model in outpatient mental health care.

Abstract (partial) 

Importance  Suicide is a major public health concern, and as most individuals have contact with health care practitioners before suicide, health systems are essential for suicide prevention. The Zero Suicide (ZS) model is the recommended approach for suicide prevention in health systems, but more evidence is needed to support its widespread adoption.

Objective  To examine suicide attempt rates associated with implementation of the ZS model in outpatient mental health care within 6 US health systems.

Conclusions and Relevance  In this quality improvement study, ZS model implementation was associated with a reduction in suicide attempt rates among patients accessing outpatient mental health care at most study sites, which supports widespread efforts to implement the ZS model in these settings within US health systems.

The study is linked above.

Here are some thoughts:

This study examined the impact of the Zero Suicide (ZS) model on suicide attempt rates within outpatient mental health care settings across six U.S. health systems.  The ZS model, a recommended approach for suicide prevention in health systems, was implemented in four of the health systems during the study period, while the other two had already adopted the model prior to the study.    

The study found that the implementation of the ZS model was associated with a reduction in suicide attempt rates in three of the four health systems that implemented the model during the study period.  The fourth system showed a sustained lower rate of suicide attempts following implementation.  The two health systems that had implemented the ZS model before the study period maintained low or declining rates of suicide attempts.    

This research is important for psychologists because it provides evidence supporting the effectiveness of the ZS model in reducing suicide attempts in outpatient mental health care settings.  Given that suicide is a major public health concern and that a large proportion of individuals who attempt suicide have had contact with the health system prior to their attempt, the study's findings highlight the critical role of health systems in suicide prevention.  The results of this study support the implementation of the ZS model in outpatient mental health care.

Monday, April 14, 2025

Moral Judgment and Decision Making

Bartels, D., et al. (n.d.).
In The Wiley Blackwell Handbook of
Judgment and Decision Making.

Abstract

This chapter focuses on moral flexibility, the authors' term for the finding that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices (they really want to get it right, they really want to do the right thing), yet context strongly influences which moral beliefs are brought to bear in a given situation. It reviews contemporary research on moral judgment and decision making, and suggests ways that the major themes in the literature relate to the notion of moral flexibility. The chapter explains what makes moral judgment and decision making unique. It also reviews three major research themes and their explananda: morally prohibited value trade-offs in decision making; rules, reason, and emotion in trade-offs; and judgments of moral blame and punishment. The chapter also comments on methodological desiderata and presents understudied areas of inquiry.

Here are some thoughts:

This chapter explores the psychology of moral judgment and decision-making. The authors argue that people are motivated to adhere to moral beliefs, but context strongly influences which beliefs are applied in a given situation, resulting in moral flexibility.  The chapter reviews three major research themes: moral value tradeoffs, the role of rules, reason, and emotion in moral tradeoffs, and judgments of moral blame and punishment.  The authors discuss normative ethical theories, including consequentialism (utilitarianism), deontology, and virtue ethics.  They also examine the influence of protected values and sacred values on moral decision-making, highlighting the conflict between rule-based and consequentialist decision strategies.  Furthermore, the chapter investigates the interplay of emotion, reason, automaticity, and cognitive control in moral judgment, discussing dual-process models, moral grammar, and the reconciliation of rules and emotions.  The authors explore factors influencing moral blame and punishment, including the role of intentions, outcomes, and character evaluations.  The chapter concludes by emphasizing the complexity of moral decision-making and the importance of considering contextual influences. 

Sunday, April 13, 2025

Applying ideation-to-action theories to predict suicidal behavior among adolescents

Okado, I., Floyd, F. J. et al. (2021).
Journal of Affective Disorders, 295, 1292–1300.

Abstract

Background
Although many risk factors for adolescent suicidal behavior have been identified, less is known about distinct risk factors associated with the progression from suicide ideation to attempts. Based on theories grounded in the ideation-to-action framework, we used structural equation modeling to examine risk and protective factors associated with the escalation from suicide ideation to attempts in adolescents.

Methods
In this cross-sectional study, data from the 2013 and 2015 Hawaii High School Youth Risk Behavior Surveys (N = 8,113) were analyzed. The sample was 54.0% female and racially/ethnically diverse. Risk factors included depression, victimization, self-harm, violent behavior, disinhibition, and hard substance use, and protective factors included adult support, sports participation, academic achievement and school safety.

Results
One in 6 adolescents (16.4%) reported suicide ideation, and nearly 1 in 10 (9.8%) adolescents had made a suicide attempt. Overall, disinhibition predicted the escalation to attempts among adolescents with suicide ideation, and higher academic performance was associated with lower suicide attempt risk. Depression and victimization were associated with suicide ideation.

Limitations
This study examined data from the Youth Risk Behavior Survey, and other known risk factors such as anxiety and family history of suicide were not available in these data.

Conclusions
Findings provide guidance for targets for clinical interventions focused on suicide prevention. Programs that incorporate behavioral disinhibition may have the greatest potential for reducing suicide attempt risk in adolescents with suicidal thoughts.

Highlights

• Depression and victimization are associated with suicide ideation in adolescents.
• Disinhibition potentiates suicide attempt risk in adolescents with suicide ideation.
• Higher academic performance protects against adolescent suicide attempts.

Saturday, April 12, 2025

AI Is the Black Mirror

Philip Ball
Nautil.us
Originally published 11 Dec 24

Here is an excerpt:

To understand AI algorithms, Vallor argues we should not regard them as minds. “We’ve been trained over a century by science fiction and cultural visions of AI to expect that when it arrives, it’s going to be a machine mind,” she tells me. “But what we have is something quite different in nature, structure, and function.”

Rather, we should imagine AI as a mirror, which doesn’t duplicate the thing it reflects. “When you go into the bathroom to brush your teeth, you know there isn’t a second face looking back at you,” Vallor says. “That’s just a reflection of a face, and it has very different properties. It doesn’t have warmth; it doesn’t have depth.” Similarly, a reflection of a mind is not a mind. AI chatbots and image generators based on large language models are mere mirrors of human performance. “With ChatGPT, the output you see is a reflection of human intelligence, our creative preferences, our coding expertise, our voices—whatever we put in.”

Even experts, Vallor says, get fooled inside this hall of mirrors. Geoffrey Hinton, the computer scientist who shared this year's Nobel Prize in physics for his pioneering work in developing the deep-learning techniques that made LLMs possible, said at an AI conference in 2024 that "we understand language in much the same way as these large language models."


Here are some thoughts:

Ball's article, built around philosopher Shannon Vallor's argument, contends that AI systems such as large language models are best understood not as machine minds but as mirrors: they reflect human intelligence, creative preferences, and expertise without possessing warmth, depth, or understanding of their own. Mistaking the reflection for a mind, the piece warns, fools even experts, and it compounds broader risks: AI's capacity to exacerbate inequality, enable mass surveillance, undermine privacy, manipulate behavior, and perpetuate biases. The article calls for robust ethical frameworks and regulation to ensure AI development aligns with human values, a timely reminder of the double-edged nature of AI as we advance technologically.

Friday, April 11, 2025

AI tools are spotting errors in research papers: inside a growing movement

Nature Publishing Group. (2025).
Nature.

Late last year, media outlets worldwide warned that black plastic cooking utensils contained worrying levels of cancer-linked flame retardants. The risk was found to be overhyped — a mathematical error in the underlying research suggested a key chemical exceeded the safe limit when in fact it was ten times lower than the limit. Keen-eyed researchers quickly showed that an artificial intelligence (AI) model could have spotted the error in seconds.

The incident has spurred two projects that use AI to find mistakes in the scientific literature. The Black Spatula Project is an open-source AI tool that has so far analysed around 500 papers for errors. The group, which has around eight active developers and hundreds of volunteer advisers, hasn’t made the errors public yet; instead, it is approaching the affected authors directly, says Joaquin Gulloso, an independent AI researcher based in Cartagena, Colombia, who helps to coordinate the project. “Already, it’s catching many errors,” says Gulloso. “It’s a huge list. It’s just crazy.”

The other effort is called YesNoError and was inspired by the Black Spatula Project, says founder and AI entrepreneur Matt Schlicht. The initiative, funded by its own dedicated cryptocurrency, has set its sights even higher. “I thought, why don’t we go through, like, all of the papers?” says Schlicht. He says that their AI tool has analysed more than 37,000 papers in two months. Its website flags papers in which it has found flaws – many of which have yet to be verified by a human, although Schlicht says that YesNoError has a plan to eventually do so at scale.

Both projects want researchers to use their tools before submitting work to a journal, and journals to use them before they publish, the idea being to avoid mistakes, as well as fraud, making their way into the scientific literature.


Here are some thoughts:

The article discusses how AI tools are being used to identify errors in scientific research papers. It highlights a specific case involving a study that exaggerated the toxicity of black plastic utensils, which has spurred the development of projects leveraging large language models (LLMs) to scrutinize research papers for inaccuracies. These AI tools aim to improve the reliability and integrity of scientific literature by systematically detecting potential flaws or misrepresentations in published studies.

Thursday, April 10, 2025

Those who (enjoy to) hurt: The influence of dark personality traits on animal- and human directed sadistic pleasure

Lobbestael, J., Wolf, F., Gollwitzer, M.,
& Baumeister, R. F. (2024).
Journal of Behavior Therapy and
Experimental Psychiatry, 85, 101963.

Abstract

Background and objectives
Sadistic pleasure – gratuitous enjoyment from inflicting pain on others – has devastating interpersonal and societal consequences. The current knowledge on non-sexual, everyday sadism – a trait that resides within the general population – is scarce. The present study therefore focussed on personality correlates of sadistic pleasure. It investigated the relationship between the Dark Triad traits, and both dispositional and state-level sadistic pleasure.

Methods
N = 120 participants filled out questionnaires to assess their level of Dark Triad traits, psychopathy subfactors, and dispositional sadism. Then, participants engaged in an animal-directed task in which they were led to believe that they were killing bugs, and in a human-directed task in which they could ostensibly administer noise blasts to another participant. The two behavioral tasks were administered within-subjects, in randomized order. Sadistic pleasure was captured by increases in reported pleasure from pre- to post-task.

Results
All Dark Triad traits related to increased dispositional sadism, with psychopathy showing the strongest link. The coldheartedness psychopathy subscale showed a unique association with both self-reported sadism and increased pleasure following bug grinding.

Limitations
Predominantly female and student sample, limiting generalizability of findings.

Conclusions
Out of all Dark Triad components, psychopathy showed the strongest link with gaining pleasure from hurting others. The results underscore the differential predictive value of psychopathy’s subcomponents for sadistic pleasure. Coldheartedness can be considered especially disturbing because of its unique relationship to deriving joy from irreversible harm-infliction (i.e. killing bugs). Our findings further establish psychopathy – and especially its coldheartedness component – as the most adverse Dark Triad trait.

Here are some thoughts:

The research suggests that psychopathy, particularly its coldheartedness component, is the strongest predictor of sadistic pleasure. This has implications for the assessment and treatment of individuals with sadistic tendencies. Psychologists may find it useful to specifically evaluate psychopathy and its subcomponents when assessing such patients, and therapeutic interventions may need to specifically target psychopathic traits, especially coldheartedness. The study also found that psychopathy, but not narcissism or Machiavellianism, was associated with sadistic pleasure, suggesting that individuals high in psychopathy may derive pleasure from acts of violence. This has implications for assessing the risk of violent behavior in clinical and forensic settings. Future research could explore how other personality traits outside the Dark Triad relate to sadistic pleasure, and examine the impact of contextual factors on the personality-sadism link.
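The state-level measure described in the abstract (sadistic pleasure as the pre-to-post-task increase in reported pleasure, related to trait scores) can be sketched as follows. The data, helper names, and use of a simple Pearson correlation are illustrative assumptions, not the authors' analysis pipeline:

```python
# Hedged sketch: compute per-participant pleasure change scores, then
# correlate them with a trait measure (e.g., a coldheartedness score).
# All numbers in the usage below would come from questionnaires and
# pre/post task ratings in the real study.

from statistics import mean

def pleasure_increase(pre_ratings, post_ratings):
    """Per-participant state sadistic pleasure: post minus pre rating."""
    return [post - pre for pre, post in zip(pre_ratings, post_ratings)]

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5
```

A positive correlation between trait scores and change scores would correspond to the paper's claim that higher psychopathy (especially coldheartedness) predicts more pleasure gained from harming.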

Wednesday, April 9, 2025

How AI can distort clinical decision-making to prioritize profits over patients

Katie Palmer
STATnews.com
Originally posted 3 March 25

More than a decade ago, Ken Mandl was on a call with a pharmaceutical company and the leader of a social network for people with diabetes. The drug maker was hoping to use the platform to encourage its members to get a certain lab test.

The test could determine a patient's need for a helpful drug. But in that moment, said Mandl, director of the computational health informatics program at Boston Children's Hospital, "I could see this focus on a biomarker as a way to increase sales of the product." To describe the phenomenon, he coined the term "biomarkup": the way commercial interests can influence the creation, adoption, and interpretation of seemingly objective measures of medical status.

These days, Mandl has been thinking about how the next generation of quantified outputs in health could be gamed: artificial intelligence tools.

"It is easy to imagine a new generation of AI-based revenue cycle management model tools that achieve higher reimbursements by nudging clinicians toward more lucrative care pathways," Mandl wrote in a recent perspective in NEJM AI. "AI-based decision support interventions are vulnerable across their entire development life cycle and could be manipulated to favor specific products or services."


Here are some thoughts:

Dr. Ken Mandl raises a critical concern about the potential for "biomarkup" in the age of artificial intelligence within healthcare. This concept, initially describing how commercial interests can manipulate seemingly objective medical measures, now extends to AI tools. Mandl warns that AI-driven systems, designed for tasks like revenue cycle management or clinical decision support, could be subtly manipulated to prioritize financial gain over patient well-being. This manipulation might involve nudging clinicians towards more lucrative care pathways or tuning algorithms to generate more referrals, particularly in fee-for-service models. The issue is exacerbated in direct-to-consumer healthcare, where profit motives may be even stronger and regulatory oversight potentially weaker. The ease with which financial outcomes can be measured, compared to patient outcomes, further compounds the problem, creating a risk of AI implementation being driven primarily by return on investment. Mandl emphasizes the urgent need for transparency in AI decision frameworks, ethical development practices, and careful regulatory oversight to safeguard patient interests and ensure that AI serves its intended purpose of improving healthcare, not just increasing profits.

Tuesday, April 8, 2025

Risk of Attempted and Completed Suicide in Persons Diagnosed With Headache

Elser, H., Farkas, D. K., et al. (2025).
JAMA Neurology.

Abstract

Importance  Although past research suggests an association between migraine and attempted suicide, there is limited research regarding risk of attempted and completed suicide across headache disorders.

Objective  To examine the risk of attempted and completed suicide associated with diagnosis of migraine, tension-type headache, posttraumatic headache, and trigeminal autonomic cephalalgia (TAC).

Design, Setting, and Participants  This was a population-based cohort study of Danish citizens from 1995 to 2020. The setting was in Denmark, with a population of 5.6 million people. Persons 15 years and older who were diagnosed with headache were matched by sex and birth year to persons without headache diagnosis with a ratio of 5:1. Data analysis was conducted from May 2023 to May 2024.

Conclusions and Relevance  Results of this cohort study revealing the robust and persistent association of headache diagnoses with attempted and completed suicide suggest that behavioral health evaluation and treatment may be important for these patients.

Here are some thoughts:

This study identified a significant association between headache diagnoses and elevated risks of both attempted and completed suicide. The analysis revealed a robust and persistent link, with individuals diagnosed with headaches facing a disproportionately higher likelihood of suicidal behavior than matched persons without a headache diagnosis. The study examined several headache disorders, including migraine, tension-type headache, posttraumatic headache, and trigeminal autonomic cephalalgia, and the findings underscore the need for heightened mental health screening and intervention in patients with headache disorders. Researchers emphasized integrating suicide risk assessments into routine clinical care for this vulnerable population.

Implications for Practice

The results align with broader calls to address mental health comorbidities in chronic pain conditions. Primary care providers, in particular, are urged to adopt proactive strategies, such as safety planning and risk screening, to mitigate suicide risk in patients with headaches. Psychologists should likewise treat headache disorders as a risk factor for suicide when assessing patients.
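The study's design paired each diagnosed person with five comparators of the same sex and birth year. A toy sketch of that 5:1 matching step (simplified: a real incidence-density analysis would also match on the diagnosis date, which is omitted here):

```python
import random

def match_controls(case: dict, pool: list[dict], ratio: int = 5) -> list[dict]:
    """Select up to `ratio` controls sharing the case's sex and birth year.

    `case` and each entry in `pool` are dicts with 'sex' and 'birth_year'
    keys. This is an illustrative sketch of 5:1 matching, not the study's
    actual code.
    """
    eligible = [p for p in pool
                if p["sex"] == case["sex"]
                and p["birth_year"] == case["birth_year"]]
    return random.sample(eligible, min(ratio, len(eligible)))

# Example: one headache case matched against a small candidate pool
case = {"id": 1, "sex": "F", "birth_year": 1970}
pool = ([{"id": i, "sex": "F", "birth_year": 1970} for i in range(2, 10)]
        + [{"id": 99, "sex": "M", "birth_year": 1970}])
controls = match_controls(case, pool)
print(len(controls))  # 5
```

Sampling without replacement from the eligible set keeps each control distinct, which is what a 5:1 matched cohort requires.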

Monday, April 7, 2025

WundtGPT: Shaping Large Language Models To Be An Empathetic, Proactive Psychologist

Ren, C., Zhang, Y., He, D., & Qin, J. 
(2024, June 16).

Abstract

Large language models (LLMs) are raging over the medical domain, and their momentum has carried over into the mental health domain, leading to the emergence of a few mental health LLMs. Although such mental health LLMs could provide reasonable suggestions for psychological counseling, how to develop an authentic and effective doctor-patient relationship (DPR) through LLMs is still an important problem. To fill this gap, we dissect DPR into two key attributes, i.e., the psychologist's empathy and proactive guidance. We thus present WundtGPT, an empathetic and proactive mental health large language model that is acquired by fine-tuning it with instruction and real conversation between psychologists and patients. It is designed to assist psychologists in diagnosis and help patients who are reluctant to communicate face-to-face understand their psychological conditions. Its uniqueness lies in that it could not only pose purposeful questions to guide patients in detailing their symptoms but also offer warm emotional reassurance. In particular, WundtGPT incorporates Collection of Questions, Chain of Psychodiagnosis, and Empathy Constraints into a comprehensive prompt for eliciting LLMs' questions and diagnoses. Additionally, WundtGPT proposes a reward model to promote alignment with empathetic mental health professionals, which encompasses two key factors: cognitive empathy and emotional empathy. We offer a comprehensive evaluation of our proposed model. Based on these outcomes, we further conduct the manual evaluation based on proactivity, effectiveness, professionalism, and coherence. We notice that WundtGPT can offer professional and effective consultation. The model is available at Hugging Face.


Here are some thoughts:

WundtGPT is an innovative large language model (LLM) specifically designed for mental health tasks. The model addresses three critical limitations in existing mental health LLMs: lack of goal-oriented diagnosis, insufficient proactive questioning, and ambiguous conceptualization of empathy.

The researchers developed WundtGPT by fine-tuning it using instruction and real-world conversation datasets between psychologists and patients. Its unique capabilities include posing purposeful questions to guide patients in detailing their symptoms and offering warm emotional reassurance. The model incorporates a comprehensive prompt strategy that includes a Collection of Questions, Chain of Psychodiagnosis, and Empathy Constraints.
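The paper names three prompt components but the exact wording is not reproduced here; a minimal sketch of how such components might be assembled into one system prompt (all component text below is invented for illustration, not quoted from WundtGPT):

```python
# Invented placeholder wording for the three components named in the paper.
COLLECTION_OF_QUESTIONS = (
    "Ask one focused question at a time about symptom onset, duration, "
    "frequency, and impact on daily life."
)
CHAIN_OF_PSYCHODIAGNOSIS = (
    "Reason step by step from the reported symptoms toward a tentative, "
    "clearly hedged diagnostic impression."
)
EMPATHY_CONSTRAINTS = (
    "Before asking the next question, briefly acknowledge and validate "
    "the patient's feelings in warm, non-judgmental language."
)

def build_system_prompt() -> str:
    """Join the role statement and the three components into one prompt."""
    return "\n\n".join([
        "You are an empathetic, proactive psychologist.",
        COLLECTION_OF_QUESTIONS,
        CHAIN_OF_PSYCHODIAGNOSIS,
        EMPATHY_CONSTRAINTS,
    ])

print(build_system_prompt().count("\n\n"))  # 3 separators between 4 parts
```

Keeping each behavioral constraint as a separate named block makes it easy to ablate one component at a time when evaluating what each contributes.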

A key innovation is the model's reward system, which promotes alignment with empathetic mental health professionals by encompassing two critical factors: cognitive empathy and emotional empathy. For cognitive empathy, the model uses an emotional detection task, while emotional empathy is aligned through reinforcement learning from human feedback.
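The paper does not publish the reward model's internals in detail; a minimal sketch of how two empathy scores could be combined into a single scalar reward (the equal weights and [0, 1] score range are illustrative assumptions, not from the paper):

```python
def empathy_reward(cognitive_score: float,
                   emotional_score: float,
                   w_cog: float = 0.5,
                   w_emo: float = 0.5) -> float:
    """Combine cognitive- and emotional-empathy scores into one reward.

    Scores are assumed to lie in [0, 1], e.g. from an emotion-detection
    classifier (cognitive) and a human-feedback preference model
    (emotional). The 50/50 weighting is an illustrative choice.
    """
    if not (0.0 <= cognitive_score <= 1.0 and 0.0 <= emotional_score <= 1.0):
        raise ValueError("scores must be in [0, 1]")
    return w_cog * cognitive_score + w_emo * emotional_score

print(empathy_reward(0.8, 0.6))  # ≈ 0.7 with equal weights
```

A weighted sum is the simplest way to fold two alignment signals into the single scalar that RLHF-style optimization expects.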

The researchers evaluated WundtGPT from two perspectives: its ability to provide proactive diagnosis and deliver warm psychological consultation. The evaluation involved emotional benchmarking and expert assessments of the model's proactivity, effectiveness, professionalism, and coherence. Experimental results demonstrated that WundtGPT exhibits superior performance compared to baseline LLMs in simulated medical consultation scenarios.

Notably, WundtGPT is claimed to be the first proactive LLM specifically designed for mental health tasks, capable of assisting psychologists in diagnosis and helping patients who are reluctant to communicate face-to-face understand their psychological conditions.

Sunday, April 6, 2025

Large Language Models Pass the Turing Test

Jones, C. R., & Bergen, B. K. (2025, March 31).
arXiv.org.

Abstract

We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5 minute conversations simultaneously with another human participant and one of these systems before judging which conversational partner they thought was human. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant. LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time -- not significantly more or less often than the humans they were being compared to -- while baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21% respectively). The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test. The results have implications for debates about what kind of intelligence is exhibited by Large Language Models (LLMs), and the social and economic impacts these systems are likely to have.

Here are some thoughts:

The study highlights significant advancements in AI technology, particularly in the capabilities of large language models (LLMs), as demonstrated by their ability to pass the Turing test. When given specific persona prompts, GPT-4.5 was judged to be the human 73% of the time, significantly more often than the actual human participants, while LLaMa-3.1-405B achieved a 56% win rate, statistically indistinguishable from the real humans it was compared against. This marks the first robust empirical evidence that an AI system can pass the standard three-party Turing test, a major milestone in AI development. The success of these models underscores their ability to convincingly mimic human conversation, blurring the line between human and machine interaction.

A key factor in their performance was the use of tailored prompts. Models instructed to adopt a humanlike persona (such as a young, introverted individual familiar with internet culture) significantly outperformed those without such guidance. This adaptability demonstrates the flexibility of modern LLMs and their capacity to refine behavior based on contextual instructions. In contrast, the baseline systems, ELIZA and GPT-4o, achieved win rates of just 23% and 21%, underscoring how much both model capability and persona prompting matter. The study also challenges the "ELIZA effect," showing that contemporary LLMs succeed not through superficial imitation but by replicating nuanced human conversational patterns.
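Whether a win rate sits "above chance" in a two-choice judgment comes down to an exact binomial test against 50%. A stdlib sketch with illustrative counts (the trial numbers below are made up for the example, not the paper's actual sample sizes):

```python
from math import comb

def binom_sf(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): one-sided exact tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative: with 100 judgments, a 73% win rate is far above the 50%
# chance level, while 56% is not clearly distinguishable from chance.
print(binom_sf(73, 100) < 0.05)  # True
print(binom_sf(56, 100) < 0.05)  # False
```

This mirrors the qualitative pattern reported in the abstract: a rate like GPT-4.5's clears the chance threshold easily, while one like LLaMa-3.1's does not.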

Human interrogators often relied on social and emotional cues—such as humor, personality, and linguistic style—rather than traditional measures of intelligence to distinguish humans from AI. Despite some effective strategies, like "jailbreak" prompts or probing for inconsistencies, most participants struggled to reliably identify AI, further emphasizing the sophistication of these models. The findings suggest that LLMs can now effectively substitute for humans in short conversations, raising both opportunities and concerns. On one hand, this capability could enhance customer service, education, and entertainment. On the other, it poses ethical risks, including the potential for AI to be used in deception, social engineering, or the spread of misinformation.

Looking ahead, the study calls for further research into longer interactions, expert interrogators, and cultural common ground to better understand the limits of AI’s humanlike abilities. It also reignites philosophical debates about whether passing the Turing test truly reflects intelligence or merely advanced imitation. As AI continues to evolve, these advancements underscore the need for careful consideration of their societal impact, ethical implications, and the future of human-AI interaction.