Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Implicit Bias. Show all posts

Monday, March 13, 2023

Intersectional implicit bias: Evidence for asymmetrically compounding bias and the predominance of target gender

Connor, P., Weeks, M., et al. (2023).
Journal of Personality and Social Psychology,
124(1), 22–48.


Little is known about implicit evaluations of complex, multiply categorizable social targets. Across five studies (N = 5,204), we investigated implicit evaluations of targets varying in race, gender, social class, and age. Overall, the largest and most consistent evaluative bias was pro-women/anti-men bias, followed by smaller but nonetheless consistent pro-upper-class/anti-lower-class biases. By contrast, we observed less consistent effects of targets’ race, no effects of targets’ age, and no consistent interactions between target-level categories. An integrative data analysis highlighted a number of moderating factors but revealed a stable pro-women/anti-men and pro-upper-class/anti-lower-class bias across demographic groups. Overall, these results suggest that implicit biases compound across multiple categories asymmetrically, with a dominant category (here, gender) largely driving evaluations and ancillary categories (here, social class and race) exerting relatively smaller additional effects. We discuss potential implications of this work for understanding how implicit biases operate in real-world social settings.

General Discussion

Implicit bias is central to the study of social cognition. Given that people are multiply categorizable, understanding the influences of such intersectionality upon implicit bias is likely to be vital for understanding its effects in everyday social contexts. In the present research, we examined implicit evaluations of multiply categorizable social targets, testing two competing theories about intersectional intergroup bias. We also developed and tested the reliability of a novel method of measuring and modelling implicit bias at the level of individual targets.

In Study 1 we observed implicit evaluations of Black and White males to be driven solely by targets' social class, with bias favoring upper-class over lower-class targets. In Study 2, we measured implicit evaluations of targets varying in race, gender, social class, and age, and found results to be primarily driven by a specific positive bias favoring upper-class female targets. In Study 3, we used similarly intersectional targets, and explored the impact of portraying targets in full-body versus upper-body photographs on implicit evaluations. Here, we observed effects of targets’ race, with Asian and White targets evaluated more positively than Black targets, and of targets’ social class, with upper-class targets evaluated more positively than lower-class targets (though only when targets were displayed in full-body presentation). Most striking, however, was the dominant effect of target gender, with positive/negative evaluations of female/male targets accounting for the majority of variance in implicit bias.

In Study 4 we tested the generalizability of these results by recruiting representative samples of US adults, and measuring implicit evaluations not just via ST-IATs, but also via EPTs and AMPs. Across all measures, we observed target gender to be the largest driver of implicit evaluations, though its dominance was less pronounced in EPTs and AMPs than in ST-IATs. We also again observed effects of targets’ social class and race, though the effect of race was inconsistent across tasks, with participants displaying anti-Black bias in the ST-IAT, pro-Asian bias in the EPT, and anti-White bias in the AMP. Finally, in Study 5 we conducted an integrative data analysis to test a number of potential moderating factors. Results showed that while all groups of participants displayed pro-female implicit gender bias and pro-upper-class implicit social class bias, both biases were stronger among women than men. Results also showed the effect of race varied across racial groups, with Asians displaying a preference for Asian over White and Black targets, Black participants displaying a preference for Asian and Black targets over White targets, Latinos displaying a preference for Asian over Black targets, and Whites displaying no significant racial bias.

Tuesday, August 23, 2022

Tackling Implicit Bias in Health Care

J. A. Sabin
N Engl J Med 2022; 387:105-107
DOI: 10.1056/NEJMp2201180

Implicit and explicit biases are among many factors that contribute to disparities in health and health care. Explicit biases, the attitudes and assumptions that we acknowledge as part of our personal belief systems, can be assessed directly by means of self-report. Explicit, overtly racist, sexist, and homophobic attitudes often underpin discriminatory actions. Implicit biases, by contrast, are attitudes and beliefs about race, ethnicity, age, ability, gender, or other characteristics that operate outside our conscious awareness and can be measured only indirectly. Implicit biases surreptitiously influence judgment and can, without intent, contribute to discriminatory behavior. A person can hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs.

Moreover, our individual biases operate within larger social, cultural, and economic structures whose biased policies and practices perpetuate systemic racism, sexism, and other forms of discrimination. In medicine, bias-driven discriminatory practices and policies not only negatively affect patient care and the medical training environment, but also limit the diversity of the health care workforce, lead to inequitable distribution of research funding, and can hinder career advancement.

A review of studies involving physicians, nurses, and other medical professionals found that health care providers’ implicit racial bias is associated with diagnostic uncertainty and, for Black patients, negative ratings of their clinical interactions, less patient-centeredness, poor provider communication, undertreatment of pain, views of Black patients as less medically adherent than White patients, and other ill effects.1 These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of U.S. medical students surveyed reported having been exposed to negative comments about Black patients by attending or resident physicians, and those students demonstrated significantly greater implicit racial bias in year 4 than they had in year 1.

A review of the literature on reducing implicit bias, which examined evidence on many approaches and strategies, revealed that methods such as exposure to counterstereotypical exemplars, recognizing and understanding others’ perspectives, and appeals to egalitarian values have not resulted in reduction of implicit biases.2 Indeed, no interventions for reducing implicit biases have been shown to have enduring effects. Therefore, it makes sense for health care organizations to forgo bias-reduction interventions and focus instead on eliminating discriminatory behavior and other harms caused by implicit bias.

Though pervasive, implicit bias is hidden and difficult to recognize, especially in oneself. It can be assumed that we all hold implicit biases, but both individual and organizational actions can combat the harms caused by these attitudes and beliefs. Awareness of bias is one step toward behavior change. There are various ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own mistaken assumptions, and critically reflecting on biased behavior that we engage in or experience. Gonzalez and colleagues offer 12 tips for teaching recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical reflection exercises, and engaging learners in skill-building exercises and activities in which they must embrace their discomfort.

Saturday, August 6, 2022

A General Model of Cognitive Bias in Human Judgment and Systematic Review Specific to Forensic Mental Health

Neal, T. M. S., Lienert, P., Denne, E., & 
Singh, J. P. (2022).  
Law and Human Behavior, 46(2), 99–120.


Cognitive biases can impact experts’ judgments and decisions. We offer a broad descriptive model of how bias affects human judgment. Although studies have explored the role of cognitive biases and debiasing techniques in forensic mental health, we conducted the first systematic review to identify, evaluate, and summarize the findings. Given the exploratory nature of this review, we did not test formal hypotheses. Our general research questions included the proportion of studies focusing on cognitive biases and/or debiasing, the research methods applied, the cognitive biases and debiasing strategies empirically studied in the forensic context, their effects on forensic mental health decisions, and effect sizes.

Public Significance Statement

Evidence of bias in forensic mental health emerged in ways consistent with what we know about human judgment broadly. We know less about how to debias judgments—an important frontier for future research. Better understanding how bias works and developing effective debiasing strategies tailored to the forensic mental health context hold promise for improving quality. Until then, we can use what we know now to limit bias in our work.

From the Discussion section

Is Bias a Problem for the Field of Forensic Mental Health?

Our interpretation of the judgment and decision-making literature more broadly, as well as the results from this systematic review conducted in this specific context, is that bias is an issue that deserves attention in forensic mental health—with some nuance. The overall assertion that bias is worthy of concern in forensic mental health rests both on the broader and the more specific literatures we reference here.

The broader literature is robust, revealing that well-studied biases affect human judgment and social cognition (e.g., Gilovich et al., 2002; Kahneman, 2011; see Figure 1). Although the field is robust in terms of individual studies demonstrating cognitive biases, decision science needs a credible, scientific organization of the various types of cognitive biases that have proliferated to better situate and organize the field. Even in the apparent absence of such an organizational structure, it is clear that biases influence consequential judgments not just for laypeople but for experts too, such as pilots (e.g., Walmsley & Gilbey, 2016), intelligence analysts (e.g., Reyna et al., 2014), doctors (e.g., Drew et al., 2013), and judges and lawyers (e.g., Englich et al., 2006; Girvan et al., 2015; Rachlinski et al., 2009). Given that forensic mental health experts are human, as are these other experts who demonstrate typical biases by virtue of being human, there is no reason to believe that forensic experts have automatic special protection against bias by virtue of their expertise.

Friday, February 12, 2021

Measuring Implicit Intergroup Biases

Lai, C. K., & Wilson, M. 
(2020, December 9).


Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.


Predictive Validity

Implicit intergroup biases are predictive of explicit biases, behavioral outcomes, and regional differences in inequality.

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but the size of the relationship depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (thin people vs. fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than on other topics (Cameron et al., 2012; Greenwald et al., 2009). The strength of the relationship between implicit and explicit intergroup biases is moderated by factors which have been documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, perceived as distinct from other people, has a bipolar structure (i.e., liking for one group implies disliking of the other), and the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).
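The Pearson correlations reported above are the standard way implicit-explicit correspondence is quantified. The sketch below computes r on synthetic data; the sample size, effect size, and seed are invented for illustration and are not from Nosek (2005) or any real IAT dataset.

```python
# Illustrative sketch: computing an implicit-explicit correlation (Pearson r)
# on synthetic data. All numbers here are made up for demonstration.
import math
import random

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Simulate 1,000 participants whose explicit attitude partially tracks
# their implicit (IAT-style) score, plus independent noise.
implicit = [random.gauss(0, 1) for _ in range(1000)]
explicit = [0.4 * d + random.gauss(0, 1) for d in implicit]
r = pearson_r(implicit, explicit)
print(round(r, 2))
```

With the 0.4 weighting chosen here, the expected correlation is moderate (roughly the range the intergroup literature reports); weakening that weight models the lower implicit-explicit relations seen for intergroup topics.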

Note: If you are a healthcare professional, you need to be aware of these biases.

Thursday, January 9, 2020

How implicit bias harms patient care

Jeff Bendix
Originally posted November 25, 2019

Here is an excerpt:

While many people have difficulty acknowledging that their actions are influenced by unconscious biases, the concept is particularly troubling for doctors, who have been trained to view—and treat—patients equally, and the vast majority of whom sincerely believe that they do.

“Doctors have been molded throughout medical school and all our training to be non-prejudiced when it comes to treating patients,” says James Allen, MD, a pulmonologist and medical director of University Hospital East, part of Ohio State University’s Wexner Medical Center. “It’s not only asked of us, it’s demanded of us, so many physicians would like to think they have no biases. But it’s not true. All human beings have biases.”

“Among physicians, there’s a stigma attached to any suggestion of racial bias,” adds Penner. “And were a person to be identified that way, there could be very severe consequences in terms of their career prospects or even maintaining their license.”

Ironically, as Penner and others point out, the conditions under which most doctors practice today (high levels of stress, frequent distractions, and brief visits that allow little time to get to know patients) are the ones most likely to heighten their vulnerability to unintentional biases.

“A doctor under time pressure from a backlog of overdue charting and whatever else they’re dealing with will have a harder time treating all patients with the same level of empathy and concern,” van Ryn says.


Thursday, August 23, 2018

Implicit Bias in Patient Care: An Endemic Blight on Quality Care

JoAnn Grif Alspach
Critical Care Nurse
August 2018 vol. 38 no. 4 12-16

Here is an excerpt:

How Implicit Bias Is Manifested

A systematic review by Hall and colleagues revealed that implicit bias is manifested in 4 key areas: patient-provider interactions, treatment decisions, treatment adherence, and patient health outcomes. How a physician communicates, including verbal cues, body language, and nonverbal behavior (physical proximity, frequency of eye contact) may manifest subconscious bias.7,10 Several investigators found evidence that providers interact more effectively with white than nonwhite patients. Bias may affect the nature and extent of diagnostic assessments and the range and scope of therapies considered. Nonwhite patients receive fewer cardiovascular interventions and kidney transplants. One meta-analysis found that 20 of 25 assumption method studies demonstrated bias either in the diagnosis, treatment recommendations, number of questions asked, or tests ordered. Women are 3 times less likely than men to receive knee arthroplasty despite comparable indications. Bias can detrimentally affect whether patients seek or return for care, follow treatment protocols, and, perhaps cumulatively, can influence outcomes of care. Numerous research studies offer evidence that implicit bias is associated with higher complication rates, greater morbidity, and higher patient mortality.


Monday, February 19, 2018

Culture and Moral Distress: What’s the Connection and Why Does It Matter?

Nancy Berlinger and Annalise Berlinger
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 608-616.


Culture is learned behavior shared among members of a group and from generation to generation within that group. In health care work, references to “culture” may also function as code for ethical uncertainty or moral distress concerning patients, families, or populations. This paper analyzes how culture can be a factor in patient-care situations that produce moral distress. It discusses three common, problematic situations in which assumptions about culture may mask more complex problems concerning family dynamics, structural barriers to health care access, or implicit bias. We offer sets of practical recommendations to encourage learning, critical thinking, and professional reflection among students, clinicians, and clinical educators.

Here is an excerpt:

Clinicians’ shortcuts for identifying “problem” patients or “difficult” families might also reveal implicit biases concerning groups. Health care professionals should understand the difference between cultural understanding that helps them respond to patients’ needs and concerns and implicit bias expressed in “cultural” terms that can perpetuate stereotypes or obscure understanding. A way to identify biased thinking that may reflect institutional culture is to consider these questions about advocacy:

  1. Which patients or families does our system expect to advocate for themselves?
  2. Which patients or families would we perceive or characterize as “angry” or “demanding” if they attempted to advocate for themselves?
  3. Which patients or families do we choose to advocate for, and on what grounds?
  4. What is our basis for each of these judgments?

Monday, February 13, 2017

The "Bad Is Black" Effect: Why People Believe Evildoers Have Darker Skin Than Do-Gooders

Alter, A., Stern, C., Granot, Y., & Balcetis, E.
Pers Soc Psychol Bull. 2016 Dec;42(12):1653-1665.


Across six studies, people used a "bad is black" heuristic in social judgment and assumed that immoral acts were committed by people with darker skin tones, regardless of the racial background of those immoral actors. In archival studies of news articles written about Black and White celebrities in popular culture magazines (Study 1a) and American politicians (Study 1b), the more critical rather than complimentary the stories, the darker the skin tone of the photographs printed with the article. In the remaining four studies, participants associated immoral acts with darker skinned people when examining surveillance footage (Studies 2 and 4), and when matching headshots to good and bad actions (Studies 3 and 5). We additionally found that both race-based (Studies 2, 3, and 5) and shade-based (Studies 4 and 5) associations between badness and darkness determine whether people demonstrate the "bad is black" effect. We discuss implications for social perception and eyewitness identification.


Friday, October 28, 2016

Is “Allison” more likely than “Lakisha” to get a call back from counseling professionals: A racism audit study.

Shin, R. Q., Smith, L. C., Welch, J., & Ezeofor, I. (in press).
The Counseling Psychologist


Using an audit study, we examined racially biased callback responses in the mental health field by leaving voicemails soliciting services with practicing counselors and psychologists (N = 371). To manipulate perceived race, an actor identified herself with either a Black- or White-sounding name. While the difference in callback rate between the two names was not significant, the difference in voice messages from therapists that either promoted or impeded potential services was significant. The caller with the White-sounding name received voice messages that promoted the potential for services at a 12% higher rate than the caller with the Black-sounding name. Limitations, future directions for research, and counseling implications are discussed.
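Audit studies like this one typically test whether a rate difference between two name conditions is statistically significant. A minimal sketch of such a comparison is below, using a two-proportion z-test; the counts are hypothetical, invented for illustration, and are not the actual Shin et al. data.

```python
# Illustrative sketch: a two-proportion z-test of the kind used to compare
# response rates across conditions in an audit study. Counts are invented.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: 120 of 186 service-promoting messages in one
# name condition versus 98 of 185 in the other.
z = two_proportion_z(120, 186, 98, 185)
print(round(z, 2))
```

A |z| above roughly 1.96 corresponds to p < .05 (two-tailed), which is how a rate gap like the 12% difference reported above would be judged significant or not, given the sample sizes involved.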


Monday, August 29, 2016

Implicit bias is a challenge even for judges

Terry Carter
ABA Journal
Originally posted August 5, 2016

Judges are tasked with being the most impartial members of the legal profession. On Friday afternoon, more than 50 of them discussed how this isn’t so easy to do—and perhaps even impossible when it comes to implicit bias.

But working to overcome biases we don’t recognize is as necessary as it is worthwhile.

“We view our job functions through the lens of our experiences, and all of us are impacted by biases and stereotypes and other cognitive functions that enable us to take shortcuts in what we do,” 6th U.S. Circuit Court of Appeals Judge Bernice B. Donald told a gathering of judges, state and federal, from around the country. Donald was on a panel for a program by the ABA’s Judicial Division, titled “Implicit Bias and De-Biasing Strategies: A Workshop for Judges and Lawyers,” at the association’s annual meeting in San Francisco.


Tuesday, November 18, 2014

Who Accepts Responsibility for Their Transgressions?

By Karina Schumann and Carol S. Dweck
Personality and Social Psychology Bulletin,
first published September 24, 2014.
doi: 10.1177/0146167214552789


After committing an offense, transgressors can optimize their chances of reconciling with the victim by accepting responsibility. However, transgressors may be motivated to avoid admitting fault because it can feel threatening to accept blame for harmful behavior. Who, then, is likely to accept responsibility for a transgression? We examined how implicit theories of personality (whether people see personality as malleable, an incremental theory, or fixed, an entity theory) influence transgressors’ likelihood of accepting responsibility. We argue that incremental theorists may feel less threatened by accepting responsibility because they are more likely to view the situation as an opportunity for them to grow as a person and develop their relationship with the victim. We found support for our predictions across four studies using a combination of real-world and hypothetical offenses, and correlational and experimental methods. These studies therefore identify an important individual difference factor that can lead to more effective responses from transgressors.


Friday, December 27, 2013

Jennifer Saul on Implicit Bias

Originally published December 7, 2013

Are we more biased than we imagine? In this episode of the Philosophy Bites podcast Jennifer Saul investigates a range of ways in which we are prone to implicit bias and the philosophical implications of these biases.
