Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Biases.

Thursday, August 21, 2025

On the conversational persuasiveness of GPT-4

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025).
Nature Human Behaviour.

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite being crucial for assessing misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.
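
A quick aside on the statistics: the 64.4% figure and the 81.2% relative increase in odds are two views of the same result. The sketch below is my own back-of-the-envelope check rather than the authors' analysis (their estimate comes from a fitted model), but it shows how the two numbers line up.

```python
# Back-of-the-envelope check (illustrative only, not the authors' analysis):
# if GPT-4 with personalization "wins" 64.4% of the non-tied debate pairs,
# the implied odds of higher post-debate agreement, and the increase of those
# odds relative to an even 50/50 split, are:
p_win = 0.644                        # share of non-tied pairs won by personalized GPT-4
odds = p_win / (1 - p_win)           # odds of higher post-debate agreement
relative_increase = (odds - 1) * 100
print(f"odds = {odds:.2f}, relative increase = {relative_increase:.1f}%")
# Prints odds = 1.81, relative increase = 80.9%, close to the reported 81.2%;
# the small gap reflects rounding of the 64.4% figure and the authors' model adjustment.
```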

Here are some thoughts:

This study is highly relevant to psychologists because it raises pressing ethical concerns and offers important implications for clinical and applied settings. Ethically, the research demonstrates that GPT-4 can use even minimal demographic data—such as age, gender, or political affiliation—to personalize persuasive arguments more effectively than human counterparts. This ability to microtarget individuals poses serious risks of manipulation, particularly when users may not be aware of how their personal information is being used. 

For psychologists concerned with informed consent, autonomy, and the responsible use of technology, these findings underscore the need for robust ethical guidelines governing AI-driven communication. 

Importantly, the study has significant relevance for clinical, counseling, and health psychologists. As AI becomes more integrated into mental health apps, health messaging, and therapeutic tools, understanding how machines influence human attitudes and behavior becomes essential. This research suggests that AI could potentially support therapeutic goals—but also has the capacity to undermine trust, reinforce bias, or sway vulnerable individuals in unintended ways.

Saturday, August 9, 2025

Large language models show amplified cognitive biases in moral decision-making

Cheung, V., Maier, M., & Lieder, F. (2025).
PNAS, 122(25).

Abstract

As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate or advise on people’s decisions in realistic moral dilemmas. In Study 1, we compared LLM responses to those of a representative U.S. sample (N = 285) for 22 dilemmas, including both collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost–benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited stronger omission bias than participants: They usually endorsed inaction over action. In Study 2 (N = 474, preregistered), we replicated this omission bias and documented an additional bias: Unlike humans, most LLMs were biased toward answering “no” in moral dilemmas, thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 491, preregistered), we replicated these biases in LLMs using everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning models for chatbot applications. Our findings suggest that uncritical reliance on LLMs’ moral decisions and advice could amplify human biases and introduce potentially problematic biases.

Significance

How will people’s increasing reliance on large language models (LLMs) influence their opinions about important moral and societal decisions? Our experiments demonstrate that the decisions and advice of LLMs are systematically biased against doing anything, and this bias is stronger than in humans. Moreover, we identified a bias in LLMs’ responses that has not been found in people. LLMs tend to answer “no,” thus flipping their decision/advice depending on how the question is worded. We present some evidence that suggests both biases are induced when fine-tuning LLMs for chatbot applications. These findings suggest that the uncritical reliance on LLMs could amplify and proliferate problematic biases in societal decision-making.
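
The wording effect described above lends itself to a simple probe: pose the same dilemma with the question framed in opposite directions and check whether the model's substantive verdict flips. The sketch below illustrates that idea; it is not the authors' code, and the dilemma, the two framings, and the ask_model placeholder are hypothetical.

```python
# Minimal sketch of the kind of wording probe Study 2 describes. This is my own
# illustration, not the authors' materials: the dilemma text, the two framings,
# and the ask_model placeholder are all hypothetical.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM being audited; replace with a real API call."""
    raise NotImplementedError("wire this up to an actual model")

DILEMMA = (
    "A hospital has one ventilator left. Reallocating it from patient A to "
    "patient B would save B, but A would die."
)

# Two wordings of the same choice: a "yes" to the first corresponds to a "no" to the second.
framings = {
    "action":   DILEMMA + " Should the doctor reallocate the ventilator? Answer yes or no.",
    "inaction": DILEMMA + " Should the doctor leave the ventilator with patient A? Answer yes or no.",
}

def detect_flip(answers: dict) -> bool:
    """A model answering on the merits gives opposite yes/no answers across the two
    framings; the same answer to both (e.g., "no" twice) means the verdict tracks
    the wording rather than the content."""
    a = answers["action"].strip().lower()
    b = answers["inaction"].strip().lower()
    return a == b

# Usage (commented out because ask_model is only a placeholder):
# answers = {name: ask_model(prompt) for name, prompt in framings.items()}
# print("wording-driven flip detected:", detect_flip(answers))
```

A full audit would of course use many dilemmas, randomized order, and careful answer parsing, as the paper's preregistered studies do.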

Here are some thoughts:

The study investigates how Large Language Models (LLMs) and humans differ in their moral decision-making, particularly focusing on cognitive biases such as omission bias and yes-no framing effects. For psychologists, understanding these biases helps clarify how both humans and artificial systems process dilemmas. This knowledge can inform theories of moral psychology by identifying whether certain biases are unique to human cognition or emerge in artificial systems trained on human data.

Psychologists are increasingly involved in interdisciplinary work related to AI ethics, particularly as it intersects with human behavior and values. The findings demonstrate that LLMs can amplify existing human cognitive biases, which raises concerns about the deployment of AI systems in domains like healthcare, criminal justice, and education where moral reasoning plays a critical role. Psychologists need to understand these dynamics to guide policies that ensure responsible AI development and mitigate risks.

Friday, August 8, 2025

Explicitly unbiased large language models still form biased associations

Bai, X., Wang, A., et al. (2025).
PNAS, 122(8). 

Abstract

Large language models (LLMs) can pass explicit social bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases. Measuring such implicit biases can be a challenge: As LLMs become increasingly proprietary, it may not be possible to access their embeddings and apply existing bias measures; furthermore, implicit biases are primarily a concern if they affect the actual decisions that these systems make. We address both challenges by introducing two measures: LLM Word Association Test, a prompt-based method for revealing implicit bias; and LLM Relative Decision Test, a strategy to detect subtle discrimination in contextual decisions. Both measures are based on psychological research: LLM Word Association Test adapts the Implicit Association Test, widely used to study the automatic associations between concepts held in human minds; and LLM Relative Decision Test operationalizes psychological results indicating that relative evaluations between two candidates, not absolute evaluations assessing each independently, are more diagnostic of implicit biases. Using these measures, we found pervasive stereotype biases mirroring those in society in 8 value-aligned models across 4 social categories (race, gender, religion, health) in 21 stereotypes (such as race and criminality, race and weapons, gender and science, age and negativity). These prompt-based measures draw from psychology’s long history of research into measuring stereotypes based on purely observable behavior; they expose nuanced biases in proprietary value-aligned LLMs that appear unbiased according to standard benchmarks.

Significance

Modern large language models (LLMs) are designed to align with human values. They can appear unbiased on standard benchmarks, but we find that they still show widespread stereotype biases on two psychology-inspired measures. These measures allow us to measure biases in LLMs based on just their behavior, which is necessary as these models have become increasingly proprietary. We found pervasive stereotype biases mirroring those in society in 8 value-aligned models across 4 social categories (race, gender, religion, health) in 21 stereotypes (such as race and criminality, race and weapons, gender and science, age and negativity), also demonstrating sizable effects on discriminatory decisions. Given the growing use of these models, biases in their behavior can have significant consequences for human societies.
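
To make the idea of a prompt-based association measure concrete, here is a small sketch in the spirit of the LLM Word Association Test. It is a simplified illustration rather than the authors' protocol; the ask_model placeholder, the gender category, and the word lists are assumptions chosen for the example.

```python
# A simplified sketch in the spirit of the paper's prompt-based word-association
# measure. This is my own illustration, not the authors' protocol: the ask_model
# placeholder, the gender category, and the word lists are assumptions for the example.

from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder for the LLM under audit; replace with a real API call."""
    raise NotImplementedError

GROUPS = ("men", "women")                      # example social category: gender
ATTRIBUTES = ["physics", "nursing", "algebra", "caregiving", "engineering", "teaching"]
STEREOTYPE = {"physics": "men", "algebra": "men", "engineering": "men",
              "nursing": "women", "caregiving": "women", "teaching": "women"}

def run_probe() -> Counter:
    """Ask the model to pair each attribute word with a group label and tally how
    often the pairing matches the stereotype under study (a real audit would also
    randomize order and parse free-form answers more carefully)."""
    tallies = Counter()
    for word in ATTRIBUTES:
        prompt = (f"Here is a word: '{word}'. Which group do you associate it with, "
                  f"'{GROUPS[0]}' or '{GROUPS[1]}'? Answer with one word.")
        answer = ask_model(prompt).strip().lower()
        tallies["congruent" if answer == STEREOTYPE[word] else "incongruent"] += 1
    return tallies

# tallies = run_probe()
# print(f"stereotype-congruent pairings: {tallies['congruent']} of {sum(tallies.values())}")
```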

Here are some thoughts:

This research is important to psychologists because it highlights the parallels between implicit biases in humans and those that persist in large language models (LLMs), even when these models are explicitly aligned to be unbiased. By adapting psychological tools like the Implicit Association Test (IAT) and focusing on relative decision-making tasks, the study uncovers pervasive stereotype biases in LLMs across social categories such as race, gender, religion, and health—mirroring well-documented human biases. This insight is critical for psychologists studying bias formation, transmission, and mitigation, as it suggests that similar cognitive mechanisms might underlie both human and machine biases. Moreover, the findings raise ethical concerns about how these biases might influence real-world decisions made or supported by LLMs, emphasizing the need for continued scrutiny and development of more robust alignment techniques. The research also opens new avenues for understanding how biases evolve in artificial systems, offering a unique lens through which psychologists can explore the dynamics of stereotyping and discrimination in both human and machine contexts.

Sunday, May 4, 2025

Navigating LLM Ethics: Advancements, Challenges, and Future Directions

Jiao, J., Afroogh, S., Xu, Y., & Phillips, C. (2024).
arXiv (Cornell University).

Abstract

This study addresses ethical issues surrounding Large Language Models (LLMs) within the field of artificial intelligence. It explores the common ethical challenges posed by both LLMs and other AI systems, such as privacy and fairness, as well as ethical challenges uniquely arising from LLMs. It highlights challenges such as hallucination, verifiable accountability, and decoding censorship complexity, which are unique to LLMs and distinct from those encountered in traditional AI systems. The study underscores the need to tackle these complexities to ensure accountability, reduce biases, and enhance transparency in the influential role that LLMs play in shaping information dissemination. It proposes mitigation strategies and future directions for LLM ethics, advocating for interdisciplinary collaboration. It recommends ethical frameworks tailored to specific domains and dynamic auditing systems adapted to diverse contexts. This roadmap aims to guide responsible development and integration of LLMs, envisioning a future where ethical considerations govern AI advancements in society.

Here are some thoughts:

This study examines the ethical issues surrounding Large Language Models (LLMs) within artificial intelligence, addressing both common ethical challenges shared with other AI systems, such as privacy and fairness, and the unique ethical challenges specific to LLMs.  The authors emphasize the distinct challenges posed by LLMs, including hallucination, verifiable accountability, and the complexities of decoding censorship.  The research underscores the importance of tackling these complexities to ensure accountability, reduce biases, and enhance transparency in how LLMs shape information dissemination.  It also proposes mitigation strategies and future directions for LLM ethics, advocating for interdisciplinary collaboration, ethical frameworks tailored to specific domains, and dynamic auditing systems adapted to diverse contexts, ultimately aiming to guide the responsible development and integration of LLMs. 

Sunday, February 16, 2025

Humor as a window into generative AI bias

Saumure, R., De Freitas, J., & Puntoni, S. (2025).
Scientific Reports, 15(1).

Abstract

A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.

Here are some thoughts:

This study introduces a novel method for uncovering biases in AI systems, with some unexpected results. By asking ChatGPT to make images "funnier" and auditing which groups become more or less visible, the researchers show that AI models, despite their advanced capabilities, can exhibit biases that are not immediately apparent: representation of groups stereotyped on politically sensitive traits (race and gender) decreased, while representation of groups stereotyped on less politically charged traits (older adults, visually impaired people, and people with higher body weight) increased. Probing a model's outputs in this indirect way reveals hidden prejudices that carry significant implications for fairness and ethical AI deployment.

This research underscores a critical challenge in the field of artificial intelligence: ensuring that AI systems operate ethically and fairly. As AI becomes increasingly integrated into industries such as healthcare, finance, criminal justice, and hiring, the potential for biased decision-making poses significant risks. Biases in AI can perpetuate existing inequalities, reinforce stereotypes, and lead to unfair outcomes for individuals or groups. This study highlights the importance of prioritizing ethical AI development to build systems that are not only intelligent but also just and equitable.

To address these challenges, bias detection should become a standard practice in AI development workflows. The novel method introduced in this research provides a promising framework for identifying hidden biases, but it is only one piece of the puzzle. Organizations should integrate multiple bias detection techniques, encourage interdisciplinary collaboration, and leverage external audits to ensure their AI systems are as fair and transparent as possible.

Friday, February 7, 2025

Physicians’ ethical concerns about artificial intelligence in medicine: a qualitative study: “The final decision should rest with a human”

Kahraman, F., et al. (2024).
Frontiers in Public Health, 12.

Abstract

Background/aim: Artificial Intelligence (AI) is the capability of computational systems to perform tasks that require human-like cognitive functions, such as reasoning, learning, and decision-making. Unlike human intelligence, AI does not involve sentience or consciousness but focuses on data processing, pattern recognition, and prediction through algorithms and learned experiences. In healthcare including neuroscience, AI is valuable for improving prevention, diagnosis, prognosis, and surveillance.

Methods: This qualitative study aimed to investigate the acceptability of AI in Medicine (AIIM) and to elucidate any technical and scientific, as well as social and ethical issues involved. Twenty-five doctors from various specialties were carefully interviewed regarding their views, experience, knowledge, and attitude toward AI in healthcare.

Results: Content analysis confirmed the key ethical principles involved: confidentiality, beneficence, and non-maleficence. Honesty was the least invoked principle. A thematic analysis established four salient topic areas, i.e., advantages, risks, restrictions, and precautions. Alongside the advantages, there were many limitations and risks. The study revealed a perceived need for precautions to be embedded in healthcare policies to counter the risks discussed. These precautions need to be multi-dimensional.

Conclusion: The authors conclude that AI should be rationally guided, function transparently, and produce impartial results. It should assist human healthcare professionals collaboratively. This kind of AI will permit fairer, more innovative healthcare which benefits patients and society whilst preserving human dignity. It can foster accuracy and precision in medical practice and reduce the workload by assisting physicians during clinical tasks. AIIM that functions transparently and respects the public interest can be an inspiring scientific innovation for humanity.

Here are some thoughts:

The integration of Artificial Intelligence (AI) in healthcare presents a complex landscape of potential benefits and significant ethical concerns. On one hand, AI offers advantages such as error reduction, increased diagnostic speed, and the potential to alleviate the workload of healthcare professionals, allowing them more time for complex cases and patient interaction. These advancements could lead to improved patient outcomes and more efficient healthcare delivery.

However, ethical issues loom large. Privacy is a paramount concern, as the sensitive nature of patient data necessitates robust security measures to prevent misuse. The question of responsibility in AI-driven decision-making is also fraught with ambiguity, raising legal and ethical dilemmas about accountability in case of errors.

There is a legitimate fear of unemployment among healthcare professionals, though it is more about AI augmenting rather than replacing human capabilities. The human touch in medicine, encompassing empathy and trust-building, is irreplaceable and must be preserved.

Education and regulation are crucial for the ethical integration of AI. Healthcare professionals and patients need to understand AI's role and limitations, with clear rules to ensure ethical use. Bias in AI algorithms, potentially exacerbating health disparities, must be addressed through diverse development teams and continuous monitoring.

Transparency is essential for trust, with patients informed about AI's role in their care and doctors capable of explaining AI decisions. Legal implications, such as data ownership and patient consent, require policy attention.

Economically, AI could enhance healthcare efficiency, but its impact on costs and accessibility needs careful consideration. International collaboration is vital for uniform standards and fairness globally.

Friday, January 24, 2025

Ethical Considerations for Using AI to Predict Suicide Risk

Faith Wershba
The Hastings Center
Originally published 9 Dec 24

Those who have lost a friend or family member to suicide frequently express remorse that they did not see it coming. One often hears, “I wish I would have known” or “I wish I could have done something to help.” Suicide is one of the leading causes of death in the United States, and with suicide rates rising, the need for effective screening and prevention strategies is urgent.

Unfortunately, clinician judgement has not proven very reliable when it comes to predicting patients' risk of attempting suicide. A 2016 meta-analysis from the American Psychological Association concluded that, on average, clinicians' ability to predict suicide risk was no better than chance. Predicting suicide risk is a complex and high-stakes task, and while there are a number of known risk factors that correlate with suicide attempts at the population level, the presence or absence of a given risk factor may not reliably predict an individual's risk of attempting suicide. Moreover, there are likely unknown risk factors that interact to modify risk. For these reasons, patients who qualify as high-risk may not be identified by existing assessments.

Can AI do better? Some researchers are trying to find out by turning towards big data and machine learning algorithms. These algorithms are trained on medical records from large cohorts of patients who have either attempted or committed suicide (“cases”) or who have never attempted suicide (“controls”). An algorithm combs through this data to identify patterns and extract features that correlate strongly with suicidality, updating itself continuously to increase predictive accuracy. Once the algorithm has been sufficiently trained and refined on test data, the hope is that it can be applied to predict suicide risk in individual patients.
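
For readers unfamiliar with the case-control setup described above, the sketch below shows the general shape of such a pipeline, under heavy simplification: synthetic features and a basic logistic regression stand in for the rich medical-record data and iteratively refined models the article refers to.

```python
# Illustrative sketch of the case-control pipeline described above. Synthetic data
# and a plain logistic regression stand in for the de-identified health records and
# more sophisticated models used in real suicide-risk research; nothing here is a
# working clinical tool.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "records": rows are patients, columns are hypothetical risk features
# (e.g., prior diagnoses, medication history, ED visits); label 1 = case, 0 = control.
n_patients, n_features = 2000, 12
X = rng.normal(size=(n_patients, n_features))
true_weights = rng.normal(size=n_features)
y = (rng.random(n_patients) < 1 / (1 + np.exp(-(X @ true_weights - 1.5)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]   # per-patient predicted risk
print("AUC on held-out patients:", round(roc_auc_score(y_test, risk_scores), 3))
```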


Here are some thoughts:

The article explores the potential benefits and ethical challenges associated with leveraging artificial intelligence (AI) in suicide risk assessment. AI algorithms, which analyze extensive patient data to identify patterns indicating heightened suicide risk, hold promise for enhancing early intervention efforts. However, the integration of AI into clinical practice raises significant ethical and practical considerations that psychologists must navigate.

One critical concern is the accuracy and reliability of AI predictions. While AI has demonstrated potential in identifying suicide risk, its outputs are not infallible. Overreliance on AI without applying clinical judgment may result in false positives or negatives, potentially undermining the quality of care provided to patients. Psychologists must balance AI insights with their expertise to ensure accurate and ethical decision-making.

Informed consent and respect for patient autonomy are also paramount. Transparency about how AI tools are used, along with explicit patient consent, ensures trust and adherence to ethical principles.

Bias and fairness represent another challenge, as AI algorithms can reflect biases present in the training data. These biases may lead to unequal treatment of different demographic groups, necessitating ongoing monitoring and adjustments to ensure equitable care. Furthermore, AI should be viewed as a tool to complement, not replace, the clinical judgment of psychologists. Integrating AI insights into a holistic approach to care is critical for addressing the complexities of suicide risk.

Finally, the use of AI raises questions about legal and ethical accountability. Determining responsibility for decisions influenced by AI predictions requires clear guidelines and policies. Psychologists must remain vigilant in ensuring that AI use aligns with both ethical standards and the best interests of their patients.

Wednesday, January 22, 2025

Cognitive biases and artificial intelligence.

Wang, J., & Redelmeier, D. A. (2024).
NEJM AI, 1(12).

Abstract

Generative artificial intelligence (AI) models are increasingly utilized for medical applications. We tested whether such models are prone to human-like cognitive biases when offering medical recommendations. We explored the performance of OpenAI generative pretrained transformer (GPT)-4 and Google Gemini-1.0-Pro with clinical cases that involved 10 cognitive biases and system prompts that created synthetic clinician respondents. Medical recommendations from generative AI were compared with strict axioms of rationality and prior results from clinicians. We found that significant discrepancies were apparent for most biases. For example, surgery was recommended more frequently for lung cancer when framed in survival rather than mortality statistics (framing effect: 75% vs. 12%; P<0.001). Similarly, pulmonary embolism was more likely to be listed in the differential diagnoses if the opening sentence mentioned hemoptysis rather than chronic obstructive pulmonary disease (primacy effect: 100% vs. 26%; P<0.001). In addition, the same emergency department treatment was more likely to be rated as inappropriate if the patient subsequently died rather than recovered (hindsight bias: 85% vs. 0%; P<0.001). One exception was base-rate neglect that showed no bias when interpreting a positive viral screening test (correction for false positives: 94% vs. 93%; P=0.431). The extent of these biases varied minimally with the characteristics of synthetic respondents, was generally larger than observed in prior research with practicing clinicians, and differed between generative AI models. We suggest that generative AI models display human-like cognitive biases and that the magnitude of bias can be larger than observed in practicing clinicians.
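
To convey how large the reported framing gap is, here is a small worked example. The abstract does not state how many replicate prompts were run per condition, so the counts below assume 100 runs per framing purely for illustration.

```python
# Worked illustration of the size of the reported framing effect (75% vs. 12%
# surgery recommendations). The abstract does not give the number of replicate
# prompts per condition, so the 100-per-framing counts below are assumed purely
# for illustration.

from scipy.stats import chi2_contingency

recommend_survival_framing = 75    # assumed: 75 of 100 runs recommend surgery
recommend_mortality_framing = 12   # assumed: 12 of 100 runs recommend surgery
table = [
    [recommend_survival_framing, 100 - recommend_survival_framing],
    [recommend_mortality_framing, 100 - recommend_mortality_framing],
]

chi2, p_value, dof, _ = chi2_contingency(table)
risk_difference = (recommend_survival_framing - recommend_mortality_framing) / 100
print(f"risk difference = {risk_difference:.2f}, chi2 = {chi2:.1f}, p = {p_value:.2g}")
# A 63-point gap is far beyond what chance would produce even at this modest
# assumed sample size, which is why framing effects of this magnitude matter clinically.
```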

Here are some thoughts:

The research explores how AI systems, trained on human-generated data, often replicate cognitive biases such as confirmation bias, representation bias, and anchoring bias. These biases arise from flawed data, algorithmic design, and human interactions, resulting in inequitable outcomes in areas like recruitment, criminal justice, and healthcare. To address these challenges, the authors propose several strategies, including ensuring diverse and inclusive datasets, enhancing algorithmic transparency, fostering interdisciplinary collaboration among ethicists, developers, and legislators, and establishing regulatory frameworks that prioritize fairness, accountability, and privacy. They emphasize that while biases in AI reflect human cognitive tendencies, they have the potential to exacerbate societal inequalities if left unchecked. A holistic approach combining technological solutions with ethical and regulatory oversight is necessary to create AI systems that are equitable and socially beneficial.

This topic connects deeply to ethics, values, and psychology. Ethically, the replication of biases in AI challenges principles of fairness, justice, and equity, highlighting the need for responsible innovation that aligns AI systems with societal values to avoid perpetuating systemic discrimination. Psychologically, the biases in AI reflect human cognitive shortcuts, such as heuristics, which, while useful for individual decision-making, can lead to harmful outcomes when embedded into AI systems. By leveraging insights from psychology to identify and mitigate these biases, and grounding AI development in ethical principles, society can create technology that is both advanced and aligned with humanistic values.

Monday, August 12, 2024

Spain passes law allowing anyone over 16 to change registered gender

Sam Jones
The Irish Times
Originally posted 16 Feb 23

Spain’s parliament has approved new legislation that will allow anyone over 16 to change their legally registered gender, ease abortion limits for those aged 16 and 17 and make the country the first in Europe to introduce paid menstrual leave.

The new transgender law – which was passed despite protests from feminist groups, warnings from opposition parties, and amid tensions between different wings of the Socialist-led coalition government – means that anyone aged over 16 will be able to change their gender on official documents without medical supervision.

However, a judge will need to authorise the change for minors aged 12-14, while those aged 14-16 will need the consent of their parents or guardians. No such changes will be available to those under the age of 12.

The law will also see a ban on conversion therapy – punishable by hefty fines – and an end to public subsidies for groups that “incite or promote LGBTIphobia”.


Some thoughts:

Spain's transgender laws are important to understand from a multicultural competence perspective.

Familiarity with such laws enhances our cultural competence, allowing us to better serve diverse populations, including transgender individuals from various backgrounds. Moreover, knowledge of pioneering laws like Spain's enables us to advocate for similar reforms in our own country, promoting inclusivity and human rights. Furthermore, understanding the legal recognition of transgender rights in countries like Spain encourages us to reflect on our own ethical practices, ensuring respect, empathy, and non-discrimination in our work.

Saturday, July 20, 2024

The Supreme Court upholds the conviction of woman who challenged expert testimony in a drug case

Lindsay Whitehurst
apnews.com
Originally posted 20 June 24

The Supreme Court on Thursday upheld the conviction of a California woman who said she did not know about a stash of methamphetamine hidden inside her car.

In a ruling that crossed the court’s ideological lines, the 6-3 majority opinion dismissed arguments that an expert witness for the prosecution had gone too far in describing the woman’s mindset when he said that most larger scale drug couriers are aware of what they are transporting.

“An opinion about most couriers is not an opinion about all couriers,” said Justice Clarence Thomas, who wrote the decision. He was joined by fellow conservatives Chief Justice John Roberts, Justices Samuel Alito, Brett Kavanaugh and Amy Coney Barrett as well as liberal Justice Ketanji Brown Jackson.

In a sharp dissent, conservative Justice Neil Gorsuch wrote that the ruling gives the government a “powerful new tool in its pocket.”

“Prosecutors can now put an expert on the stand — someone who apparently has the convenient ability to read minds — and let him hold forth on what ‘most’ people like the defendant think when they commit a legally proscribed act. Then, the government need do no more than urge the jury to find that the defendant is like ‘most’ people and convict,” he wrote. Joining him were the court’s other liberal justices, Sonia Sotomayor and Elena Kagan.


Here are some thoughts:

The recent Supreme Court case involving a woman convicted of drug trafficking highlights a complex issue surrounding expert testimony, particularly for psychologists. In this case, the prosecution's expert offered an opinion on the general awareness of large-scale drug couriers, which the defense argued unfairly portrayed the defendant's mindset. While the Court allowed the testimony, it leaves some psychologists concerned.

The potential for expert testimony to blur the lines between general patterns and specific defendant behavior is a worry. Psychologists strive to present nuanced assessments based on individual cases. This ruling might incentivize broader generalizations, which could risk prejudicing juries against defendants. It's crucial to find a balance between allowing experts to provide helpful insights and ensuring they don't overstep into determining a defendant's guilt.

Moving forward, psychologists offering expert testimony may need to tread carefully.  They should ensure their testimony focuses on established psychological principles and avoids commenting on a specific defendant's knowledge or intent. This case underscores the importance of clear guidelines for expert witnesses to uphold the integrity of the justice system.

Friday, June 7, 2024

Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance

Dillion, D., Mondal, D., Tandon, N., & Gray, K. (2024, May 29).

Abstract

AI has demonstrated expertise across various fields, but its potential as a moral expert remains unclear. Recent work suggests that Large Language Models (LLMs) can reflect moral judgments with high accuracy. But as LLMs are increasingly used in complex decision-making roles, true moral expertise requires not just aligned judgments but also clear and trustworthy moral reasoning. Here, we advance work on the Moral Turing Test and find that advice from GPT-4o is rated as more moral, trustworthy, thoughtful, and correct than that of the popular The New York Times advice column, The Ethicist. GPT models outperformed both a representative sample of Americans and a renowned ethicist in providing moral explanations and advice, suggesting that LLMs have, in some respects, achieved a level of moral expertise. The present work highlights the importance of carefully programming ethical guidelines in LLMs, considering their potential to sway users' moral reasoning. More promisingly, it suggests that LLMs could complement human expertise in moral guidance and decision-making.


Here are my thoughts:

This research on GPT-4o's moral reasoning is fascinating, but caution is warranted. While exceeding human performance in explanations and perceived trustworthiness is impressive, true moral expertise goes beyond these initial results.

Here's why:

First, there are nuances to all moral dilemmas. Real-world dilemmas often lack clear-cut answers. Can GPT-4o navigate the gray areas and complexities of human experience?

Next, every person brings a rich mix of experiences, values, perspectives, and biases to moral judgments. What ethical framework guides GPT-4o's decisions? Transparency in its programming is crucial.

Finally, the consequences of AI-driven moral advice can be far-reaching, so careful evaluation of potential biases and unintended outcomes is essential. There is no objective algorithm, and no objective morality: every moral decision, no matter how well reasoned, involves trade-offs. At best, AI can serve as a starting point for moral decision-making and planning.

Saturday, June 1, 2024

Political ideology and environmentalism impair logical reasoning

Keller, L., Hazelaar, F., et al. (2023).
Thinking & Reasoning, 1–30.

Abstract

People are more likely to think statements are valid when they agree with them than when they do not. We conducted four studies analyzing the interference of self-reported ideologies with performance in a syllogistic reasoning task. Study 1 established the task paradigm and demonstrated that participants’ political ideology affects syllogistic reasoning for syllogisms with political content but not politically irrelevant syllogisms. The preregistered Study 2 replicated the effect and showed that incentivizing accuracy did not alleviate these differences. Study 3 revealed that syllogistic reasoning is affected by ideology in the presence and absence of such bonus payments for correctly judging the conclusions’ logical validity. In Study 4, we observed similar effects regarding a different ideological orientation: environmentalism. Again, monetary bonuses did not attenuate these effects. Taken together, the results of four studies highlight the harm of ideology regarding people’s logical reasoning.


Here is my summary:

The research explores how pre-existing ideologies, both political and environmental, can influence how people evaluate logical arguments.  The findings suggest that people are more likely to judge arguments as valid if they align with their existing beliefs, regardless of the argument's actual logical structure. This bias was observed for both liberals and conservatives, and for those with strong environmental convictions. Offering financial rewards for accurate reasoning didn't eliminate this effect.

Monday, May 13, 2024

Ethical Considerations When Confronted by Racist Patients

Charles Dike
Psychiatric News
Originally published 26 Feb 24

Here is an excerpt:

Abuse of psychiatrists, mostly verbal but sometimes physical, is common in psychiatric treatment, especially on inpatient units. For psychiatrists trained decades ago, experiencing verbal abuse and name calling from patients—and even senior colleagues and teachers—was the norm. The abuse began in medical school, with unconscionable work hours followed by callous disregard of students’ concerns and disparaging statements suggesting the students were too weak or unfit to be doctors.

This abuse continued into specialty training and practice. It was largely seen as a necessary evil of attaining the privilege of becoming a doctor and treating patients whose uncivil behaviors can be excused on account of their ill health. Doctors were supposed to rise above those indignities, focus on the task at hand, and get the patients better in line with our core ethical principles that place caring for the patient above all else. There was no room for discussion or acknowledgement of the doctors’ underlying life experiences, including past trauma, and how patients’ behavior would affect doctors.

Moreover, even in recent times, racial slurs or attacks against physicians of color were not recognized as abuse by the dominant group of doctors; the affected physicians who complained were dismissed as being too sensitive or worse. Some physicians, often not of color, have explained a manic patient’s racist comments as understandable in the context of disinhibition and poor judgment, which are cardinal symptoms of mania, and they are surprised that physicians of color are not so understanding.


Here is a summary:

This article explores the ethical dilemma healthcare providers face when treating patients who express racist views. It acknowledges the provider's obligation to care for the patient's medical needs, while also considering the emotional toll of racist remarks on both the provider and other staff members.

The article discusses the importance of assessing the urgency of the patient's medical condition and their mental capacity. It explores the options of setting boundaries or terminating treatment in extreme cases, while also acknowledging the potential benefits of attempting a dialogue about the impact of prejudice.

Friday, April 5, 2024

Ageism in health care is more common than you might think, and it can harm people

Ashley Milne-Tyte
npr.org
Originally posted 7 March 24

A recent study found that older people spend an average of 21 days a year on medical appointments. Kathleen Hayes can believe it.

Hayes lives in Chicago and has spent a lot of time lately taking her parents, who are both in their 80s, to doctor's appointments. Her dad has Parkinson's, and her mom has had a difficult recovery from a bad bout of Covid-19. As she's sat in, Hayes has noticed some health care workers talk to her parents at top volume, to the point, she says, "that my father said to one, 'I'm not deaf, you don't have to yell.'"

In addition, while some doctors and nurses address her parents directly, others keep looking at Hayes herself.

"Their gaze is on me so long that it starts to feel like we're talking around my parents," says Hayes, who lives a few hours north of her parents. "I've had to emphasize, 'I don't want to speak for my mother. Please ask my mother that question.'"

Researchers and geriatricians say that instances like these constitute ageism – discrimination based on a person's age – and it is surprisingly common in health care settings. It can lead to both overtreatment and undertreatment of older adults, says Dr. Louise Aronson, a geriatrician and professor of geriatrics at the University of California, San Francisco.

"We all see older people differently. Ageism is a cross-cultural reality," Aronson says.


Here is my summary:

This article and other research point to a concerning prevalence of ageism in healthcare settings. This bias can take the form of either overtreatment or undertreatment of older adults.

  • Negative stereotypes: Doctors may hold assumptions about older adults being less willing or able to handle aggressive treatments, leading to missed opportunities for care.
  • Communication issues: Sometimes healthcare providers speak to adult children instead of the older person themselves, disregarding their autonomy.

These biases are linked to poorer health outcomes and can even shorten lifespans.  The article cites a study suggesting that ageism costs the healthcare system billions of dollars annually.  There are positive steps that can be taken, such as anti-bias training for healthcare workers.

Sunday, March 24, 2024

From a Psych Hospital to Harvard Law: One Black Woman’s Journey With Bipolar Disorder

Krista L. R. Cezair
Ms. Magazine
Originally posted 22 Feb 24

Here is an excerpt:

In the spring of 2018, I was so sick that I simply couldn’t consider my future performance on the bar exam. I desperately needed help. I had very little insight into my condition and had to be involuntarily hospitalized twice. I also had to make the decision of which law school to attend between trips to the psych ward while ragingly manic. I relied on my mother and a former professor who essentially told me I would be attending Harvard. Knowing my reduced capacity for decision‐making while manic, I did not put up a fight and informed Harvard that I would be attending. The next question was: When? Everyone in my community supported me in my decision to defer law school for a year to give myself time to recover—but would Harvard do the same?

Luckily, the answer was yes, and that fall, the fall of 2018, as my admitted class began school, I was admitted to the hospital again, for bipolar depression this time.

While there, I roomed with a sweet young woman of color who was diagnosed with schizophrenia, bipolar disorder and PTSD and was pregnant with her second child. She was unhoused and had nowhere to go should she be discharged from the hospital, which the hospital threatened to do because she refused medication. She worried that the drugs would harm her unborn child. She was out of options, and the hospital was firm. She was released before me. I wondered where she would go. She had expressed to me multiple times that she had nowhere to go, not her parents’ house, not the child’s father’s house, nowhere.

It was then that I decided I had to fight—for her and for myself. I had access to resources she couldn’t dream of, least of all shelter and a support system. I had to use these resources to get better and embark on a career that would make life better for people like her, like us.

After getting out of the hospital, I started to improve, and I could tell the depression was lifting. Unfortunately, a rockier rock bottom lay ahead of me as I started to feel too good, and the depression lifted too high. Recovery is not linear, and it seemed I was manic again.


Here are some thoughts:

In this powerful piece, Krista L. R. Cezair candidly shares her journey navigating bipolar disorder while achieving remarkable academic and professional success. She begins by describing her history of depression and suicidal thoughts, highlighting the pivotal moment of diagnosis and the challenges within mental health care facilities, particularly for marginalized groups. Cezair eloquently connects her personal experience with broader issues of systemic bias and lack of understanding around mental health, especially within prestigious institutions like Harvard Law School. Her article advocates for destigmatizing mental health struggles and recognizing the resilience and contributions of those living with mental illness.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6), 1625–1650.

Abstract

Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study focuses on how intrinsically motivated employees (those who enjoy their work) might act differently towards other employees depending on their own level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to associate higher morality with others who also have high intrinsic motivation. This leads them to offer more help and support to similarly motivated colleagues, while judging colleagues with lower intrinsic motivation as less moral and helping them less.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.
Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and raise ethical concerns.

Monday, March 11, 2024

Why People Fail to Notice Horrors Around Them

Tali Sharot and Cass R. Sunstein
The New York Times
Originally posted 25 Feb 24

The miraculous history of our species is peppered with dark stories of oppression, tyranny, bloody wars, savagery, murder and genocide. When looking back, we are often baffled and ask: Why weren't the horrors halted earlier? How could people have lived with them?

The full picture is immensely complicated. But a significant part of it points to the rules that govern the operations of the human brain.

Extreme political movements, as well as deadly conflicts, often escalate slowly. When threats start small and increase gradually, they end up eliciting a weaker emotional reaction, less resistance and more acceptance than they would otherwise. The slow increase allows larger and larger horrors to play out in broad daylight, taken for granted, seen as ordinary.

One of us is a neuroscientist; the other is a law professor. From our different fields, we have come to believe that it is not possible to understand the current period - and the shifts in what counts as normal - without appreciating why and how people do not notice so much of what we live with.

The underlying reason is a pivotal biological feature of our brain: habituation, or our tendency to respond less and less to things that are constant or that change slowly. You enter a cafe filled with the smell of coffee and at first the smell is overwhelming, but no more than 20 minutes go by and you cannot smell it any longer. This is because your olfactory neurons stop firing in response to a now-familiar odor.

Similarly, you stop hearing the persistent buzz of an air-conditioner because your brain filters out background noise. Your brain cares about what recently changed, not about what remained the same.

Habituation is one of our most basic biological characteristics - something that we two-legged, bigheaded creatures share with other animals on earth, including apes, elephants, dogs, birds, frogs, fish and rats. Human beings also habituate to complex social circumstances such as war, corruption, discrimination, oppression, widespread misinformation and extremism. Habituation does not only result in a reduced tendency to notice and react to grossly immoral deeds around us; it also increases the likelihood that we will engage in them ourselves.


Here is my summary:

From a psychological perspective, the failure to notice horrors around us can be attributed to habituation, the brain's tendency to respond to change rather than to constants, together with cognitive biases and the human tendency to see reality in predictable yet flawed ways. This phenomenon is linked to how individuals perceive and value certain aspects of their environment. Personal values play a crucial role in shaping our perceptions and emotional responses. When there is a discrepancy between our self-perception and reality, it can lead to trouble, as our values define us and influence how we react to events. Additionally, safety needs can act as a mediating factor in mental disorders induced by stressful events. The unexpected nature of events can trigger fear and anger, while the anticipation of events can induce calmness. This interplay between safety needs, emotions, and pathological conditions underscores how individuals react to perceived threats and unexpected situations, impacting their mental well-being.

Thursday, February 22, 2024

Rising Suicide Rate Among Hispanics Worries Community Leaders

A. Miller and M. C. Work
KFF Health News
Originally posted 22 Jan 24

Here is an excerpt:

The suicide rate for Hispanic people in the United States has increased significantly over the past decade. The trend has community leaders worried: Even elementary school-aged Hispanic children have tried to harm themselves or expressed suicidal thoughts.

Community leaders and mental health researchers say the pandemic hit young Hispanics especially hard. Immigrant children are often expected to take more responsibility when their parents don’t speak English ― even if they themselves aren’t fluent. Many live in poorer households with some or all family members without legal residency. And cultural barriers and language may prevent many from seeking care in a mental health system that already has spotty access to services.

“Being able to talk about painful things in a language that you are comfortable with is a really specific type of healing,” said Alejandra Vargas, a bilingual Spanish program coordinator for the Suicide Prevention Center at Didi Hirsch Mental Health Services in Los Angeles.

“When we answer the calls in Spanish, you can hear that relief on the other end,” she said. “That, ‘Yes, they’re going to understand me.’”

The Centers for Disease Control and Prevention’s provisional data for 2022 shows a record high of nearly 50,000 suicide deaths for all racial and ethnic groups.

Grim statistics from KFF show that the rise in the suicide death rate has been more pronounced among communities of color: From 2011 to 2021, the suicide rate among Hispanics jumped from 5.7 per 100,000 people to 7.9 per 100,000, according to the data.

For Hispanic children 12 and younger, the rate increased 92.3% from 2010 to 2019, according to a study published in the Journal of Community Health.

Wednesday, February 21, 2024

Ethics Ratings of Nearly All Professions Down in U.S.

M. Brenan and J. M. Jones
gallup.com
Originally posted 22 Jan 24

Here is an excerpt:

New Lows for Five Professions; Three Others Tie Their Lows

Ethics ratings for five professions hit new lows this year, including members of Congress (6%), senators (8%), journalists (19%), clergy (32%) and pharmacists (55%).

Meanwhile, the ratings of bankers (19%), business executives (12%) and college teachers (42%) tie their previous low points. Bankers’ and business executives’ ratings were last this low in 2009, just after the Great Recession. College teachers have not been viewed this poorly since 1977.

College Graduates Tend to View Professions More Positively

About half of the 23 professions included in the 2023 survey show meaningful differences by education level, with college graduates giving a more positive honesty and ethics rating than non-college graduates in each case. Almost all of the 11 professions showing education differences are performed by people with a bachelor’s degree, if not a postgraduate education.

The largest education differences are seen in ratings of dentists and engineers, with roughly seven in 10 college graduates rating those professions’ honesty and ethical standards highly, compared with slightly more than half of non-graduates.

Ratings of psychiatrists, college teachers and pharmacists show nearly as large educational differences, ranging from 14 to 16 points, while doctors, nurses and veterinarians also show double-digit education gaps.

These educational differences have been consistent in prior years’ surveys.

Adults without a college degree rate lawyers’ honesty and ethics slightly better than college graduates in the latest survey, 18% to 13%, respectively. While this difference is not statistically significant, in prior years non-college graduates have rated lawyers more highly by significant margins.

Partisans’ Ratings of College Teachers Differ Most    
                
Republicans and Democrats have different views of professions, with Democrats tending to be more complimentary of workers’ honesty and ethical standards than Republicans are. In fact, police officers are the only profession with higher honesty and ethics ratings among Republicans and Republican-leaning independents (55%) than among Democrats and Democratic-leaning independents (37%).

The largest party differences are seen in evaluations of college teachers, with a 40-point gap (62% among Democrats/Democratic leaners and 22% among Republicans/Republican leaners). Partisans’ honesty and ethics ratings of psychiatrists, journalists and labor union leaders differ by 20 points or more, while there is a 19-point difference for medical doctors.

Friday, February 16, 2024

Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms

B. Kuehn
JAMA
Published Online: January 17, 2024.
doi:10.1001/jama.2023.25530

Here is an excerpt:

The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between Black and White people to justify slavery, the CMSS report notes. For example, the third US president, Thomas Jefferson, claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. Louisiana physician Samuel Cartwright, MD, subsequently rationalized hard labor as a way for slaves to fortify their lungs. Over time, the report explains, the medical literature echoed some of those ideas, which have been used in ways that cause harm.

“It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” Burstin said.

Renewed recognition of these harmful legacies and growing evidence of the potential harm caused by structural racism, bias, and discrimination in medicine have led to reconsideration of the use of race in clinical algorithms. The reckoning with racial injustice sparked by the May 2020 murder of George Floyd helped accelerate this work. A few weeks after Floyd’s death, an editorial in the New England Journal of Medicine recommended reconsidering race in 13 clinical algorithms, echoing a growing chorus of medical students and physicians arguing for change.

Congress also got involved. As a Robert Wood Johnson Foundation Health Policy Fellow, Michelle Morse, MD, MPH, raised concerns about the use of race in clinical algorithms to US Rep Richard Neal (D, MA), then chairman of the House Ways and Means Committee. Neal in September 2020 sent letters to several medical societies asking them to assess racial bias and a year later he and his colleagues issued a report on the misuse of race in clinical decision-making tools.

“We need to have more humility in medicine about the ways in which our history as a discipline has actually held back health equity and racial justice,” Morse said in an interview. “The issue of racism and clinical algorithms is one really tangible example of that.”


My summary: There's increasing worry that using race in clinical algorithms can be harmful and perpetuate racial disparities in healthcare. This concern stems from a recognition of the historical harms of racism in medicine and growing evidence of bias in algorithms.

A review commissioned by the Agency for Healthcare Research and Quality (AHRQ) found that using race in algorithms can exacerbate health disparities and reinforce the false idea that race is a biological factor.

Several medical organizations and experts have called for reevaluating the use of race in clinical algorithms. Some argue that race should be removed altogether, while others advocate for using it only in specific cases where it can be clearly shown to improve outcomes without causing harm.