Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, April 5, 2024

Ageism in health care is more common than you might think, and it can harm people

Ashley Milne-Tyte
Originally posted 7 March 24

A recent study found that older people spend an average of 21 days a year on medical appointments. Kathleen Hayes can believe it.

Hayes lives in Chicago and has spent a lot of time lately taking her parents, who are both in their 80s, to doctor's appointments. Her dad has Parkinson's, and her mom has had a difficult recovery from a bad bout of Covid-19. As she's sat in on those appointments, Hayes has noticed some health care workers talk to her parents at top volume, to the point, she says, "that my father said to one, 'I'm not deaf, you don't have to yell.'"

In addition, while some doctors and nurses address her parents directly, others keep looking at Hayes herself.

"Their gaze is on me so long that it starts to feel like we're talking around my parents," says Hayes, who lives a few hours north of her parents. "I've had to emphasize, 'I don't want to speak for my mother. Please ask my mother that question.'"

Researchers and geriatricians say that instances like these constitute ageism – discrimination based on a person's age – and it is surprisingly common in health care settings. It can lead to both overtreatment and undertreatment of older adults, says Dr. Louise Aronson, a geriatrician and professor of geriatrics at the University of California, San Francisco.

"We all see older people differently. Ageism is a cross-cultural reality," Aronson says.

Here is my summary:

This article and other research point to a concerning prevalence of ageism in healthcare settings. This bias can take the form of either overtreatment or undertreatment of older adults.

  • Negative stereotypes: Doctors may hold assumptions about older adults being less willing or able to handle aggressive treatments, leading to missed opportunities for care.
  • Communication issues: Sometimes healthcare providers speak to adult children instead of the older person themselves, disregarding their autonomy.

These biases are linked to poorer health outcomes and can even shorten lifespans.  The article cites a study suggesting that ageism costs the healthcare system billions of dollars annually.  There are positive steps that can be taken, such as anti-bias training for healthcare workers.

Sunday, March 24, 2024

From a Psych Hospital to Harvard Law: One Black Woman’s Journey With Bipolar Disorder

Krista L. R. Cezair
Ms. Magazine
Originally posted 22 Feb 24

Here is an excerpt:

In the spring of 2018, I was so sick that I simply couldn’t consider my future performance on the bar exam. I desperately needed help. I had very little insight into my condition and had to be involuntarily hospitalized twice. I also had to make the decision of which law school to attend between trips to the psych ward while ragingly manic. I relied on my mother and a former professor who essentially told me I would be attending Harvard. Knowing my reduced capacity for decision‐making while manic, I did not put up a fight and informed Harvard that I would be attending. The next question was: When? Everyone in my community supported me in my decision to defer law school for a year to give myself time to recover—but would Harvard do the same?

Luckily, the answer was yes, and that fall, the fall of 2018, as my admitted class began school, I was admitted to the hospital again, for bipolar depression this time.

While there, I roomed with a sweet young woman of color who was diagnosed with schizophrenia, bipolar disorder and PTSD and was pregnant with her second child. She was unhoused and had nowhere to go should she be discharged from the hospital, which the hospital threatened to do because she refused medication. She worried that the drugs would harm her unborn child. She was out of options, and the hospital was firm. She was released before me. I wondered where she would go. She had expressed to me multiple times that she had nowhere to go, not her parents’ house, not the child’s father’s house, nowhere.

It was then that I decided I had to fight—for her and for myself. I had access to resources she couldn’t dream of, least of all shelter and a support system. I had to use these resources to get better and embark on a career that would make life better for people like her, like us.

After getting out of the hospital, I started to improve, and I could tell the depression was lifting. Unfortunately, a rockier rock bottom lay ahead of me as I started to feel too good, and the depression lifted too high. Recovery is not linear, and it seemed I was manic again.

Here are some thoughts:

In this powerful piece, Krista L. R. Cezair candidly shares her journey navigating bipolar disorder while achieving remarkable academic and professional success. She begins by describing her history of depression and suicidal thoughts, highlighting the pivotal moment of diagnosis and the challenges within mental health care facilities, particularly for marginalized groups. Cezair eloquently connects her personal experience with broader issues of systemic bias and lack of understanding around mental health, especially within prestigious institutions like Harvard Law School. Her article advocates for destigmatizing mental health struggles and recognizing the resilience and contributions of those living with mental illness.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6).


Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study focuses on how intrinsically motivated employees (those who enjoy their work for its own sake) may treat colleagues differently depending on those colleagues' level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to see others who are also highly intrinsically motivated as more moral. This leads them to offer more help and support to those similar colleagues, while judging colleagues with lower intrinsic motivation as less moral and helping them less.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.

Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Monday, March 11, 2024

Why People Fail to Notice Horrors Around Them

Tali Sharot and Cass R. Sunstein
The New York Times
Originally posted 25 Feb 24

The miraculous history of our species is peppered with dark stories of oppression, tyranny, bloody wars, savagery, murder and genocide. When looking back, we are often baffled and ask: Why weren't the horrors halted earlier? How could people have lived with them?

The full picture is immensely complicated. But a significant part of it points to the rules that govern the operations of the human brain.

Extreme political movements, as well as deadly conflicts, often escalate slowly. When threats start small and increase gradually, they end up eliciting a weaker emotional reaction, less resistance and more acceptance than they would otherwise. The slow increase allows larger and larger horrors to play out in broad daylight, taken for granted, seen as ordinary.

One of us is a neuroscientist; the other is a law professor. From our different fields, we have come to believe that it is not possible to understand the current period (and the shifts in what counts as normal) without appreciating why and how people do not notice so much of what we live with.

The underlying reason is a pivotal biological feature of our brain: habituation, or our tendency to respond less and less to things that are constant or that change slowly. You enter a cafe filled with the smell of coffee and at first the smell is overwhelming, but no more than 20 minutes go by and you cannot smell it any longer. This is because your olfactory neurons stop firing in response to a now-familiar odor.

Similarly, you stop hearing the persistent buzz of an air-conditioner because your brain filters out background noise. Your brain cares about what recently changed, not about what remained the same.

Habituation is one of our most basic biological characteristics, something that we two-legged, bigheaded creatures share with other animals on earth, including apes, elephants, dogs, birds, frogs, fish and rats. Human beings also habituate to complex social circumstances such as war, corruption, discrimination, oppression, widespread misinformation and extremism. Habituation does not only result in a reduced tendency to notice and react to grossly immoral deeds around us; it also increases the likelihood that we will engage in them ourselves.

Here is my summary:

Sharot and Sunstein argue that our failure to notice the horrors around us stems largely from habituation: the brain's tendency to respond less and less to stimuli that are constant or that change slowly. Just as we stop smelling the coffee in a cafe or hearing the hum of an air-conditioner, we habituate to gradually escalating social wrongs such as corruption, oppression, and extremism. Because slowly growing threats elicit weaker emotional reactions than sudden ones, large horrors can come to be taken for granted and seen as ordinary. Worse, habituation not only dulls our ability to notice grossly immoral deeds; it also makes us more likely to engage in them ourselves.

Thursday, February 22, 2024

Rising Suicide Rate Among Hispanics Worries Community Leaders

A. Miller and M. C. Work
KFF Health News
Originally posted 22 Jan 24

Here is an excerpt:

The suicide rate for Hispanic people in the United States has increased significantly over the past decade. The trend has community leaders worried: Even elementary school-aged Hispanic children have tried to harm themselves or expressed suicidal thoughts.

Community leaders and mental health researchers say the pandemic hit young Hispanics especially hard. Immigrant children are often expected to take more responsibility when their parents don’t speak English ― even if they themselves aren’t fluent. Many live in poorer households with some or all family members without legal residency. And cultural barriers and language may prevent many from seeking care in a mental health system that already has spotty access to services.

“Being able to talk about painful things in a language that you are comfortable with is a really specific type of healing,” said Alejandra Vargas, a bilingual Spanish program coordinator for the Suicide Prevention Center at Didi Hirsch Mental Health Services in Los Angeles.

“When we answer the calls in Spanish, you can hear that relief on the other end,” she said. “That, ‘Yes, they’re going to understand me.’”

The Centers for Disease Control and Prevention’s provisional data for 2022 shows a record high of nearly 50,000 suicide deaths for all racial and ethnic groups.

Grim statistics from KFF show that the rise in the suicide death rate has been more pronounced among communities of color: From 2011 to 2021, the suicide rate among Hispanics jumped from 5.7 per 100,000 people to 7.9 per 100,000, according to the data.

For Hispanic children 12 and younger, the rate increased 92.3% from 2010 to 2019, according to a study published in the Journal of Community Health.
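The rate arithmetic behind these figures is easy to verify. A minimal sketch, using only the per-100,000 rates quoted above:

```python
def pct_change(old_rate, new_rate):
    """Percent change between two rates per 100,000 population."""
    return (new_rate - old_rate) / old_rate * 100

# KFF figures cited above: Hispanic suicide rate, 2011 vs. 2021.
print(f"{pct_change(5.7, 7.9):.1f}% increase")  # 38.6% increase over the decade
```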

Wednesday, February 21, 2024

Ethics Ratings of Nearly All Professions Down in U.S.

M. Brenan and J. M. Jones
Gallup
Originally posted 22 Jan 24

Here is an excerpt:

New Lows for Five Professions; Three Others Tie Their Lows

Ethics ratings for five professions hit new lows this year, including members of Congress (6%), senators (8%), journalists (19%), clergy (32%) and pharmacists (55%).

Meanwhile, the ratings of bankers (19%), business executives (12%) and college teachers (42%) tie their previous low points. Bankers’ and business executives’ ratings were last this low in 2009, just after the Great Recession. College teachers have not been viewed this poorly since 1977.

College Graduates Tend to View Professions More Positively

About half of the 23 professions included in the 2023 survey show meaningful differences by education level, with college graduates giving a more positive honesty and ethics rating than non-college graduates in each case. Almost all of the 11 professions showing education differences are performed by people with a bachelor’s degree, if not a postgraduate education.

The largest education differences are seen in ratings of dentists and engineers, with roughly seven in 10 college graduates rating those professions’ honesty and ethical standards highly, compared with slightly more than half of non-graduates.

Ratings of psychiatrists, college teachers and pharmacists show nearly as large educational differences, ranging from 14 to 16 points, while doctors, nurses and veterinarians also show double-digit education gaps.

These educational differences have been consistent in prior years’ surveys.

Adults without a college degree rate lawyers’ honesty and ethics slightly better than college graduates in the latest survey, 18% to 13%, respectively. While this difference is not statistically significant, in prior years non-college graduates have rated lawyers more highly by significant margins.

Partisans’ Ratings of College Teachers Differ Most

Republicans and Democrats have different views of professions, with Democrats tending to be more complimentary of workers’ honesty and ethical standards than Republicans are. In fact, police officers are the only profession with higher honesty and ethics ratings among Republicans and Republican-leaning independents (55%) than among Democrats and Democratic-leaning independents (37%).

The largest party differences are seen in evaluations of college teachers, with a 40-point gap (62% among Democrats/Democratic leaners and 22% among Republicans/Republican leaners). Partisans’ honesty and ethics ratings of psychiatrists, journalists and labor union leaders differ by 20 points or more, while there is a 19-point difference for medical doctors.

Friday, February 16, 2024

Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms

B. Kuehn
JAMA
Published Online: January 17, 2024.

Here is an excerpt:

The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between Black and White people to justify slavery, the CMSS report notes. For example, the third US president, Thomas Jefferson, claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. Louisiana physician Samuel Cartwright, MD, subsequently rationalized hard labor as a way for slaves to fortify their lungs. Over time, the report explains, the medical literature echoed some of those ideas, which have been used in ways that cause harm.

“It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” Burstin said.

Renewed recognition of these harmful legacies and growing evidence of the potential harm caused by structural racism, bias, and discrimination in medicine have led to reconsideration of the use of race in clinical algorithms. The reckoning with racial injustice sparked by the May 2020 murder of George Floyd helped accelerate this work. A few weeks after Floyd’s death, an editorial in the New England Journal of Medicine recommended reconsidering race in 13 clinical algorithms, echoing a growing chorus of medical students and physicians arguing for change.

Congress also got involved. As a Robert Wood Johnson Foundation Health Policy Fellow, Michelle Morse, MD, MPH, raised concerns about the use of race in clinical algorithms to US Rep Richard Neal (D, MA), then chairman of the House Ways and Means Committee. Neal in September 2020 sent letters to several medical societies asking them to assess racial bias and a year later he and his colleagues issued a report on the misuse of race in clinical decision-making tools.

“We need to have more humility in medicine about the ways in which our history as a discipline has actually held back health equity and racial justice,” Morse said in an interview. “The issue of racism and clinical algorithms is one really tangible example of that.”

My summary: There's increasing worry that using race in clinical algorithms can be harmful and perpetuate racial disparities in healthcare. This concern stems from a recognition of the historical harms of racism in medicine and growing evidence of bias in algorithms.

A review commissioned by the Agency for Healthcare Research and Quality (AHRQ) found that using race in algorithms can exacerbate health disparities and reinforce the false idea that race is a biological factor.

Several medical organizations and experts have called for reevaluating the use of race in clinical algorithms. Some argue that race should be removed altogether, while others advocate for using it only in specific cases where it can be clearly shown to improve outcomes without causing harm.

Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.


Background
Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods
Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.
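The kind of comparison the authors describe, checking a model's demographic distribution for a condition against true prevalence, can be sketched as a goodness-of-fit test. The counts and proportions below are invented for illustration and are not from the study; the helper implements a Pearson chi-square test for the simple two-group case:

```python
import math

def chi_square_gof(observed, expected):
    """Pearson goodness-of-fit statistic; p-value valid for df = 1 (two groups)."""
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Chi-square survival function with one degree of freedom.
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical tally: which of two demographic groups appear in 100
# model-generated vignettes for a single condition.
model_counts = [62, 38]
# Hypothetical true US prevalence proportions for that condition.
true_proportions = [0.45, 0.55]
expected = [frac * sum(model_counts) for frac in true_proportions]

stat, p = chi_square_gof(model_counts, expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # a small p flags a skewed distribution
```

With more than two demographic groups the df = 1 shortcut no longer applies, and a library routine such as scipy.stats.chisquare would be the usual choice.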

Findings
We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation
Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current opinion in psychology, 54, 101696.


The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefits of listening for mitigating social division have become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can diffuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which is not really helpful in spreading the word.  This information can be very helpful in couples and family therapy.  Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.

By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350


AI research is growing rapidly raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.

Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may expect it to care about us or to respond with empathy.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Sunday, January 21, 2024

Doctors With Histories of Big Malpractice Settlements Now Work for Insurers

P. Rucker, D. Armstrong, & D. Burke
ProPublica and The Capitol Forum
Originally published 15 Dec 23

Here is an excerpt:

Patients and the doctors who treat them don’t get to pick which medical director reviews their case. An anesthesiologist working for an insurer can overrule a patient’s oncologist. In other cases, the medical director might be a doctor like Kasemsap who has left clinical practice after multiple accusations of negligence.

As part of a yearlong series about how health plans refuse to pay for care, ProPublica and The Capitol Forum set out to examine who insurers picked for such important jobs.

Reporters could not find any comprehensive database of doctors working for insurance companies or any public listings by the insurers who employ them. Many health plans also farm out medical reviews to other companies that employ their own doctors. ProPublica and The Capitol Forum identified medical directors through regulatory filings, LinkedIn profiles, lawsuits and interviews with insurance industry insiders. Reporters then checked those names against malpractice databases, state licensing board actions and court filings in 17 states.

Among the findings: The Capitol Forum and ProPublica identified 12 insurance company doctors with either a history of multiple malpractice payments, a single payment in excess of $1 million or a disciplinary action by a state medical board.

One medical director settled malpractice cases with 11 patients, some of whom alleged he bungled their urology surgeries and left them incontinent. Another was reprimanded by a state medical board for behavior that it found to be deceptive and dishonest. A third settled a malpractice case for $1.8 million after failing to identify cancerous cells on a pathology slide, which delayed a diagnosis for a 27-year-old mother of two, who died less than a year after her cancer was finally discovered.

None of this would have been easily visible to patients seeking approvals for care or payment from insurers who relied on these medical directors.

The ethical implications in this article are staggering.  Here are some quick points:

Conflicted Care: In a concerning trend, some US insurers are employing doctors with past malpractice settlements to assess whether patients deserve coverage for recommended treatments. So do these still-licensed reviewers actually understand best practices?

Financial Bias: Critics fear these doctors, having faced financial repercussions for past care decisions, might prioritize minimizing payouts over patient needs, potentially leading to denied claims and delayed care.  In other words, do the reviewers have an inherent bias against patients, given that former patients complained against them?

Transparency Concerns: The lack of clear disclosure about these doctors' backgrounds raises concerns about transparency and potential conflicts of interest within the healthcare system.

In essence, this is a poor system for delivering high-quality medical review.

Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.


Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information are privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.
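A minimal simulation can make this feedback loop concrete. The sketch below is my own illustration, not the authors' model: every number in it (arousal scores, sharing probabilities, feed size) is invented for demonstration.

```python
import random

random.seed(0)

def share_prob(arousal):
    # Hypothetical assumption: the chance a user reshares a post
    # rises with its emotional arousal.
    return min(1.0, 0.1 + 0.6 * arousal)

# 200 posts with random emotional-arousal scores in [0, 1).
posts = [{"arousal": random.random(), "shares": 0} for _ in range(200)]

for _ in range(5):  # five feed-ranking/sharing rounds
    # The "algorithm" surfaces the 20 posts with the highest predicted engagement.
    feed = sorted(posts, key=lambda p: share_prob(p["arousal"]), reverse=True)[:20]
    for post in feed:
        if random.random() < share_prob(post["arousal"]):
            post["shares"] += 1

most_shared = sorted(posts, key=lambda p: p["shares"], reverse=True)[:20]
mean_top = sum(p["arousal"] for p in most_shared) / len(most_shared)
mean_all = sum(p["arousal"] for p in posts) / len(posts)
print(f"mean arousal of most-shared posts: {mean_top:.2f}")
print(f"mean arousal of all posts:         {mean_all:.2f}")
```

Because ranking and resharing both favor high-arousal posts, the most-shared posts end up far more emotionally charged than the pool average, which is exactly the amplification loop the authors describe.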

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Saturday, January 6, 2024

Worth the Risk? Greater Acceptance of Instrumental Harm Befalling Men than Women

Graso, M., Reynolds, T. & Aquino, K.
Arch Sex Behav 52, 2433–2445 (2023).


Scientific and organizational interventions often involve trade-offs whereby they benefit some but entail costs to others (i.e., instrumental harm; IH). We hypothesized that the gender of the persons incurring those costs would influence intervention endorsement, such that people would more readily support interventions inflicting IH onto men than onto women. We also hypothesized that women would exhibit greater asymmetries in their acceptance of IH to men versus women. Three experimental studies (two pre-registered) tested these hypotheses. Studies 1 and 2 granted support for these predictions using a variety of interventions and contexts. Study 3 tested a possible boundary condition of these asymmetries using contexts in which women have traditionally been expected to sacrifice more than men: caring for infants, children, the elderly, and the ill. Even in these traditionally female contexts, participants still more readily accepted IH to men than women. Findings indicate people (especially women) are less willing to accept instrumental harm befalling women (vs. men). We discuss the theoretical and practical implications and limitations of our findings.

Here is my summary:

This research investigated the societal acceptance of "instrumental harm" (IH) based on the gender of the person experiencing it. Three studies found that people are more likely to tolerate IH when it happens to men than when it happens to women, a bias especially pronounced among women and among those holding egalitarian or feminist beliefs. Even in caregiving contexts where women have traditionally been expected to sacrifice more than men, IH inflicted on men was still judged more acceptable.

These findings highlight a potential blind spot in our perception of harm and raise concerns about how policies might be influenced by this bias. Further research is needed to understand the underlying reasons for this bias and develop strategies to address it.

Wednesday, December 27, 2023

This algorithm could predict your health, income, and chance of premature death

Holly Barker
Originally published 18 DEC 23

Here is an excerpt:

The researchers trained the model, called “life2vec,” on every individual’s life story between 2008 to 2016, and the model sought patterns in these stories. Next, they used the algorithm to predict whether someone on the Danish national registers had died by 2020.

The model’s predictions were accurate 78% of the time. It identified several factors that favored a greater risk of premature death, including having a low income, having a mental health diagnosis, and being male. The model’s misses were typically caused by accidents or heart attacks, which are difficult to predict.

Although the results are intriguing—if a bit grim—some scientists caution that the patterns might not hold true for non-Danish populations. “It would be fascinating to see the model adapted using cohort data from other countries, potentially unveiling universal patterns, or highlighting unique cultural nuances,” says Youyou Wu, a psychologist at University College London.

Biases in the data could also confound its predictions, she adds. (The overdiagnosis of schizophrenia among Black people could cause algorithms to mistakenly label them at a higher risk of premature death, for example.) That could have ramifications for things such as insurance premiums or hiring decisions, Wu adds.

Here is my summary:

A new algorithm, trained on a mountain of Danish life stories, can peer into your future with unsettling accuracy, predicting your health, your income, and even your odds of an early death. By modeling sequences of life events, like getting a job or falling ill, it raises both possibilities and ethical concerns.

On one hand, imagine the potential for good: nudges towards healthier habits or financial foresight, tailored to your personal narrative. On the other, anxieties around bias and discrimination loom. We must ensure this powerful tool is used wisely, for the benefit of all, lest it exacerbate existing inequalities or create new ones. The algorithm’s gaze into the future, while remarkable, is just that – a glimpse, not a script. 
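The real life2vec is a transformer trained on Denmark's non-public registry data, so nothing below is the actual model. As a rough sketch of the task's shape (classify risk from a coded sequence of life events), here is a toy logistic classifier on synthetic data; every event name and risk weight is invented for illustration.

```python
import math
import random

random.seed(1)
EVENTS = ["low_income", "mh_diagnosis", "male", "promotion", "checkup"]
# Invented ground-truth risk weights used only to generate synthetic labels.
TRUE_W = {"low_income": 1.2, "mh_diagnosis": 1.0, "male": 0.8,
          "promotion": -0.6, "checkup": -0.4}

def make_person():
    # A "life story" is a small set of coded events; the label is noisy.
    seq = random.sample(EVENTS, k=random.randint(1, 4))
    score = sum(TRUE_W[e] for e in seq) - 0.5
    label = int(random.random() < 1 / (1 + math.exp(-score)))
    return seq, label

data = [make_person() for _ in range(2000)]

# Fit a logistic model over event indicators with plain online gradient descent.
w = {e: 0.0 for e in EVENTS}
b = 0.0
for _ in range(200):
    for seq, y in data:
        p = 1 / (1 + math.exp(-(sum(w[e] for e in seq) + b)))
        for e in seq:
            w[e] += 0.01 * (y - p)
        b += 0.01 * (y - p)

correct = sum(
    ((1 / (1 + math.exp(-(sum(w[e] for e in seq) + b)))) > 0.5) == bool(y)
    for seq, y in data
)
print(f"training accuracy: {correct / len(data):.0%}")
```

Even this crude stand-in beats chance because the event sequence carries signal, which is the core idea behind life2vec; the real model, of course, captures far richer ordering and timing information.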

Sunday, December 3, 2023

ChatGPT one year on: who is using it, how and why?

Ghassemi, M., Birhane, A., et al.
Nature 624, 39-41 (2023)
doi: https://doi.org/10.1038/d41586-023-03798-6

Here is an excerpt:

More pressingly, text and image generation are prone to societal biases that cannot be easily fixed. In health care, this was illustrated by Tessa, a rule-based chatbot designed to help people with eating disorders, run by a US non-profit organization. After it was augmented with generative AI, the now-suspended bot gave detrimental advice. In some US hospitals, generative models are being used to manage and generate portions of electronic medical records. However, the large language models (LLMs) that underpin these systems are not giving medical advice and so do not require clearance by the US Food and Drug Administration. This means that it’s effectively up to the hospitals to ensure that LLM use is fair and accurate. This is a huge concern.

The use of generative AI tools, in general and in health settings, needs more research with an eye towards social responsibility rather than efficiency or profit. The tools are flexible and powerful enough to make billing and messaging faster — but a naive deployment will entrench existing equity issues in these areas. Chatbots have been found, for example, to recommend different treatments depending on a patient’s gender, race and ethnicity and socioeconomic status (see J. Kim et al. JAMA Netw. Open 6, e2338050; 2023).

Ultimately, it is important to recognize that generative models echo and extend the data they have been trained on. Making generative AI work to improve health equity, for instance by using empathy training or suggesting edits that decrease biases, is especially important given how susceptible humans are to convincing, and human-like, generated texts. Rather than taking the health-care system we have now and simply speeding it up — with the risk of exacerbating inequalities and throwing in hallucinations — AI needs to target improvement and transformation.

Here is my summary:

The article on ChatGPT's one-year anniversary presents a comprehensive analysis of its usage, exploring the diverse user base, applications, and underlying motivations driving its adoption. It reveals that ChatGPT has found traction across a wide spectrum of users, including writers, developers, students, professionals, and hobbyists. This broad appeal can be attributed to its adaptability in assisting with a myriad of tasks, from generating creative content to aiding in coding challenges and providing language translation support.

The analysis further dissects how users interact with ChatGPT, showcasing distinct patterns of utilization. Some users leverage it for brainstorming ideas, drafting content, or generating creative writing, while others turn to it for programming assistance, using it as a virtual coding companion. Additionally, the article explores the strategies users employ to improve the model's output, such as providing more context or breaking queries into smaller parts. Even so, problems with bias, inaccurate information, and inappropriate use persist.

Tuesday, November 28, 2023

Ethics of psychotherapy rationing: A review of ethical and regulatory documents in Canadian professional psychology

Gower, H. K., & Gaine, G. S. (2023).
Canadian Psychology / Psychologie canadienne. 
Advance online publication.


Ethical and regulatory documents in Canadian professional psychology were reviewed for principles and standards related to the rationing of psychotherapy. Despite Canada’s high per capita health care expenses, mental health in Canada receives relatively low funding. Further, surveys indicated that Canadians have unmet needs for psychotherapy. Effective and ethical rationing of psychological treatment is a necessity, yet the topic of rationing in psychology has received scant attention. The present study involved a qualitative review of codes of ethics, codes of conduct, and standards of practice documents for their inclusion of rationing principles and standards. Findings highlight the strengths and shortcomings of these documents related to guiding psychotherapy rationing. The discussion offers recommendations for revising these ethical and regulatory documents to promote more equitable and cost-effective use of limited psychotherapy resources in Canada.

Impact Statement

Canadian professional psychology regulatory documents contain limited reference to rationing imperatives, despite scarce psychotherapy resources. While the foundation of distributive justice is in place, rationing-specific principles, standards, and practices are required to foster the fair and equitable distribution of psychotherapy by Canadian psychologists.

From the recommendations:

Recommendations for Canadian Psychology Regulatory Documents
  1. Explicitly widen psychologists’ scope of concern to include not only current clients but also waiting clients and those who need treatment but face access barriers.
  2. Acknowledge the scarcity of health care resources (in public and private settings) and the high demand for psychology services (e.g., psychotherapy) and admonish inefficient and cost-ineffective use.
  3. Draw an explicit connection between the general principle of distributive justice and the specific practices related to rationing of psychology resources, including, especially, mitigation of biases likely to weaken ethical decision making.
  4. Encourage the use of outcome monitoring measures to aid relative utility calculations for triage and termination decisions and to ensure efficiency and distributive justice.
  5. Recommend advocacy by psychologists to address barriers to accessing needed services (e.g., psychotherapy), including promoting the cost effectiveness of psychotherapy as well as highlighting systemic barriers related to presenting problem, disability, ethnicity, race, gender, sexuality, or income.

Tuesday, November 21, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).


One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Here is my summary:

The authors argue that many different biases, such as the bias blind spot, hostile media bias, egocentric/ethnocentric bias, and outcome bias, can be traced back to the combination of a fundamental prior belief and humans' tendency toward belief-consistent information processing.

Belief-consistent information processing is the process of attending to, interpreting, and remembering information in a way that is consistent with one's existing beliefs. This process can lead to biases when it results in people ignoring or downplaying information that is inconsistent with their beliefs, and giving undue weight to information that is consistent with their beliefs.
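To see how belief-consistent weighting alone can drive beliefs apart, consider this toy simulation (my own illustration, not a model from the paper): two agents read the identical, balanced stream of evidence, but each weights items that agree with its current belief three times as heavily.

```python
import random

random.seed(2)
# A balanced evidence stream: each item supports (+1) or opposes (-1) a claim.
evidence = [random.choice([+1, -1]) for _ in range(500)]

def update(belief, item, bias_weight=3.0):
    # Belief-consistent processing: items agreeing with the current belief's
    # direction get extra weight (3.0 is an arbitrary illustrative value).
    weight = bias_weight if (item > 0) == (belief > 0) else 1.0
    return belief + 0.02 * weight * item

believer, skeptic = 1.0, -1.0  # opposite prior beliefs about the claim
for item in evidence:
    believer = update(believer, item)
    skeptic = update(skeptic, item)

print(f"after identical evidence: believer = {believer:+.2f}, skeptic = {skeptic:+.2f}")
```

Despite seeing exactly the same balanced evidence, the two agents end up further apart than they began: the same latent process, seeded with different prior beliefs, produces what look like distinct biases.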

The authors propose that different biases can be distinguished by the specific belief that guides information processing. For example, the bias blind spot is characterized by the belief that one is less biased than others, while hostile media bias is characterized by the belief that the media is biased against one's own group. However, the authors also argue that different biases may share the same underlying belief, and differ only in the specific outcome of information processing that is assessed. For example, both the bias blind spot and hostile media bias may involve the belief that one is more objective than others, but the bias blind spot is assessed in the context of self-evaluations, while hostile media bias is assessed in the context of evaluations of others.

The authors' framework has several advantages over existing theoretical explanations of biases. First, it provides a more parsimonious explanation for a wide range of biases. Second, it generates novel hypotheses that can be tested empirically. For example, the authors hypothesize that people who are more likely to believe in one bias will also be more likely to believe in other biases. Third, the framework has implications for interventions to reduce biases. For example, the authors suggest that interventions to reduce biases could focus on helping people to become more aware of their own biases and to develop strategies for resisting the tendency toward belief-consistent information processing.

Sunday, November 19, 2023

AI Will—and Should—Change Medical School, Says Harvard’s Dean for Medical Education

Hswen Y, Abbasi J.
JAMA. Published online October 25, 2023.

Here is an excerpt:

Dr Bibbins-Domingo: When these types of generative AI tools first came into prominence or awareness, educators, whatever level of education they were involved with, had to scramble because their students were using them. They were figuring out how to put up the right types of guardrails, set the right types of rules. Are there rules or danger zones right now that you’re thinking about?

Dr Chang: Absolutely, and I think there’s quite a number of these. This is a focus that we’re embarking on right now because as exciting as the future is and as much potential as these generative AI tools have, there are also dangers and there are also concerns that we have to address.

One of them is helping our students, who like all of us are still new to this within the past year, understand the limitations of these tools. Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence. Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer. And in medicine of course, that’s very, very important. And so that’s one—just the accuracy and the validity of the content that comes out.

As I wrote about in my Viewpoint, the way that these tools work is basically a very fancy form of autocomplete, right? It is essentially using a probabilistic prediction of what the next word is going to be. And so there’s no separate validity or confirmation of the factual material, and that’s something that we need to make sure that our students understand.

The other thing is to address the fact that these tools may inherently be structurally biased. Now, why would that be? Well, as we know, ChatGPT and these other large language models [LLMs] are trained on the world’s internet, so to speak, right? They’re trained on the noncopyrighted corpus of material that’s out there on the web. And to the extent that that corpus of material was generated by human beings who in their postings and their writings exhibit bias in one way or the other, whether intentionally or not, that’s the corpus on which these LLMs are trained. So it only makes sense that when we use these tools, these tools are going to potentially exhibit evidence of bias. And so we need our students to be very aware of that. As we have worked to reduce the effects of systematic bias in our curriculum and in our clinical sphere, we need to recognize that as we introduce this new tool, this will be another potential source of bias.
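Dr Chang's "fancy autocomplete" description can be illustrated with a toy bigram model (my own sketch; real LLMs are vastly more sophisticated, but the core move, picking a statistically likely next word with no check of factual truth, is the same):

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on much of the internet.
corpus = ("the patient has a fever . the patient has a rash . "
          "the patient has a fever .").split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word):
    # Pick the most frequent follower. The "confidence" is just relative
    # frequency, with no notion of whether the completion is factually correct.
    counts = bigrams[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, conf = complete("a")
print(f"after 'a' the model predicts {word!r} with {conf:.0%} confidence")
# -> after 'a' the model predicts 'fever' with 67% confidence
```

The 67% comes purely from word statistics; the model would report it just as confidently if the corpus happened to contain misinformation, which is the point about hallucinations made above.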

Here is my summary:

Bernard Chang, the Dean for Medical Education at Harvard Medical School, argues that artificial intelligence (AI) is poised to transform medical education. He believes AI can improve the way medical students learn and train, and that medical schools should not only embrace AI but also take an active role in shaping its development and use.

Chang identifies several areas where AI could have a significant impact on medical education. First, AI could be used to personalize learning and provide students with more targeted feedback. For example, AI-powered tutors could help students learn complex medical concepts at their own pace, and AI-powered diagnostic tools could help students practice their clinical skills.

Second, AI could be used to automate tasks that are currently performed by human instructors, such as grading exams and providing feedback on student assignments. This would free up instructors to focus on more high-value activities, such as mentoring students and leading discussions.

Third, AI could be used to create new educational experiences that are not possible with traditional methods. For example, AI could be used to create virtual patients that students can interact with to practice their clinical skills. AI could also be used to develop simulations of complex medical procedures that students can practice in a safe environment.

Chang argues that medical schools have a responsibility to prepare students for the future of medicine, which will be increasingly reliant on AI. He writes that medical schools should teach students how to use AI effectively, and how to critically evaluate AI-generated information. Medical schools should also develop new curricula that take into account the potential impact of AI on medical practice.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.


Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.

My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Thursday, October 12, 2023

Patients need doctors who look like them. Can medicine diversify without affirmative action?

Kat Stafford
Originally posted 11 September 23

Here are two excerpts:

But more than two months after the Supreme Court struck down affirmative action in college admissions, concerns have arisen that a path into medicine may become much harder for students of color. Heightening the alarm: the medical field’s reckoning with longstanding health inequities.

Black Americans represent 13% of the U.S. population, yet just 6% of U.S. physicians are Black. Increasing representation among doctors is one solution experts believe could help disrupt health inequities.

The disparities stretch from birth to death, often beginning before Black babies take their first breath, a recent Associated Press series showed. Over and over, patients said their concerns were brushed aside or ignored, in part because of unchecked bias and racism within the medical system and a lack of representative care.

A UCLA study found the percentage of Black doctors had increased just 4% from 1900 to 2018.

But the affirmative action ruling dealt a “serious blow” to the medical field’s goals of improving that figure, the American Medical Association said, by prohibiting medical schools from considering race among many factors in admissions. The ruling, the AMA said, “will reverse gains made in the battle against health inequities.”

The consequences could affect Black health for generations to come, said Dr. Uché Blackstock, a New York emergency room physician and author of “LEGACY: A Black Physician Reckons with Racism in Medicine.”


“As medical professionals, any time we see disparities in care or outcomes of any kind, we have to look at the systems in which we are delivering care and we have to look at ways that we are falling short,” Wysong said.

Without affirmative action as a tool, career programs focused on engaging people of color could grow in importance.

For instance, the Pathways initiative engages students from Black, Latino and Indigenous communities from high school through medical school.

The program starts with building interest in dermatology as a career and continues to scholarships, workshops and mentorship programs. The goal: Increase the number of underrepresented dermatology residents from about 100 in 2022 to 250 by 2027, and grow the share of dermatology faculty who are members of color by 2%.

Tolliver credits her success in becoming a dermatologist in part to a scholarship she received through Ohio State University’s Young Scholars Program, which helps talented, first-generation Ohio students with financial need. The scholarship helped pave the way for medical school, but her involvement in the Pathways residency program also was central.