Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, August 23, 2022

Tackling Implicit Bias in Health Care

J. A. Sabin
N Engl J Med 2022; 387:105-107
DOI: 10.1056/NEJMp2201180

Implicit and explicit biases are among many factors that contribute to disparities in health and health care. Explicit biases, the attitudes and assumptions that we acknowledge as part of our personal belief systems, can be assessed directly by means of self-report. Explicit, overtly racist, sexist, and homophobic attitudes often underpin discriminatory actions. Implicit biases, by contrast, are attitudes and beliefs about race, ethnicity, age, ability, gender, or other characteristics that operate outside our conscious awareness and can be measured only indirectly. Implicit biases surreptitiously influence judgment and can, without intent, contribute to discriminatory behavior. A person can hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs.

Moreover, our individual biases operate within larger social, cultural, and economic structures whose biased policies and practices perpetuate systemic racism, sexism, and other forms of discrimination. In medicine, bias-driven discriminatory practices and policies not only negatively affect patient care and the medical training environment, but also limit the diversity of the health care workforce, lead to inequitable distribution of research funding, and can hinder career advancement.

A review of studies involving physicians, nurses, and other medical professionals found that health care providers’ implicit racial bias is associated with diagnostic uncertainty and, for Black patients, negative ratings of their clinical interactions, less patient-centeredness, poor provider communication, undertreatment of pain, views of Black patients as less medically adherent than White patients, and other ill effects.1 These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of U.S. medical students surveyed reported having been exposed to negative comments about Black patients by attending or resident physicians, and those students demonstrated significantly greater implicit racial bias in year 4 than they had in year 1.

A review of the literature on reducing implicit bias, which examined evidence on many approaches and strategies, revealed that methods such as exposure to counterstereotypical exemplars, recognizing and understanding others’ perspectives, and appeals to egalitarian values have not resulted in reduction of implicit biases.2 Indeed, no interventions for reducing implicit biases have been shown to have enduring effects. Therefore, it makes sense for health care organizations to forgo bias-reduction interventions and focus instead on eliminating discriminatory behavior and other harms caused by implicit bias.

Though pervasive, implicit bias is hidden and difficult to recognize, especially in oneself. It can be assumed that we all hold implicit biases, but both individual and organizational actions can combat the harms caused by these attitudes and beliefs. Awareness of bias is one step toward behavior change. There are various ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own mistaken assumptions, and critically reflecting on biased behavior that we engage in or experience. Gonzalez and colleagues offer 12 tips for teaching recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical reflection exercises, and engaging learners in skill-building exercises and activities in which they must embrace their discomfort.
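The Implicit Association Tests mentioned above estimate bias from response latencies rather than self-report. As a rough illustration of the idea (a simplified, editorial sketch of the widely used D-score approach, not code from the article; the data and cutoff below are made up), the score is essentially the difference in mean latency between stereotype-incongruent and stereotype-congruent pairings, scaled by the pooled standard deviation:

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms, too_slow_ms=10_000):
    """Simplified IAT D-score: difference in mean latency between the
    stereotype-incongruent and stereotype-congruent blocks, divided by the
    standard deviation pooled over both blocks."""
    # Drop implausibly slow trials (a common preprocessing step).
    cong = [t for t in congruent_ms if t < too_slow_ms]
    incong = [t for t in incongruent_ms if t < too_slow_ms]
    pooled_sd = statistics.stdev(cong + incong)
    return (statistics.mean(incong) - statistics.mean(cong)) / pooled_sd

# Illustrative latencies (milliseconds) for one hypothetical respondent.
congruent = [612, 587, 640, 598, 575, 630, 605, 590]
incongruent = [733, 702, 760, 745, 690, 780, 720, 710]

# Larger positive D = faster responding on stereotype-consistent pairings,
# read as a stronger implicit association.
print(f"D = {iat_d_score(congruent, incongruent):.2f}")
```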

Monday, August 22, 2022

Meta-Analysis of Inequality Aversion Estimates

Nunnari, S., & Pozzi, M. (2022).
SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.4169385

Abstract

We conduct a large-scale meta-analysis of empirical estimates of the inequality aversion coefficients in the Fehr and Schmidt (1999) model of outcome-based other-regarding preferences, systematically accumulating estimates of the disadvantageous inequality (envy) and advantageous inequality (guilt) parameters reported in the experimental literature. Both frequentist and Bayesian analyses support the hypothesis of inequality concerns, with a mean envy coefficient of approximately 0.43 and a mean guilt coefficient of approximately 0.29.

Conclusion

In this paper, we reported the results of a meta-analysis of empirical estimates of the inequality aversion coefficients in models of outcome-based other-regarding preferences à la Fehr and Schmidt (1999). We conducted both a frequentist analysis (using a multi-level random-effects model) and a Bayesian analysis (using a Bayesian hierarchical model) to provide a “weighted average” for α and β. The results from the two approaches are nearly identical and support the hypothesis of inequality concerns. From the frequentist analysis, we learn that the mean envy coefficient is 0.425 with a 95% confidence interval of [0.244, 0.606], while the mean guilt coefficient is 0.291 with a 95% confidence interval of [0.218, 0.363]. This means that, on average, an individual is willing to spend €0.41 to increase others’ earnings by €1 when ahead, and €0.74 to decrease others’ earnings by €1 when behind. The theoretical assumptions α ≥ β and 0 ≤ β < 1 are upheld in our empirical analysis, but we cannot conclude that the disadvantageous inequality coefficient is statistically greater than the coefficient for advantageous inequality. We also observe no correlation between the two parameters.
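For context, the block below writes out the two-player Fehr–Schmidt utility function whose α (envy) and β (guilt) parameters the meta-analysis aggregates, together with a back-of-the-envelope calculation for the “ahead” case. This is an editorial sketch of the usual textbook derivation, not material from the paper itself.

```latex
% Two-player Fehr--Schmidt utility: \alpha penalizes disadvantageous inequality
% (envy), \beta penalizes advantageous inequality (guilt).
\[
U_i(x_i, x_j) = x_i - \alpha \,\max(x_j - x_i,\, 0) - \beta \,\max(x_i - x_j,\, 0),
\qquad \alpha \ge \beta, \quad 0 \le \beta < 1 .
\]
% "Ahead" case: paying c to raise the other's earnings by 1 shrinks the
% advantageous gap by (1 + c), so the trade is accepted whenever
\[
-c + \beta\,(1 + c) \ge 0
\quad\Longleftrightarrow\quad
c \le \frac{\beta}{1-\beta} = \frac{0.291}{1 - 0.291} \approx 0.41 ,
\]
% which reproduces the EUR 0.41 figure above; the EUR 0.74 figure quoted for the
% "behind" case corresponds numerically to \alpha/(1-\alpha) = 0.425/0.575.
```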

Sunday, August 21, 2022

Medial and orbital frontal cortex in decision-making and flexible behavior

Klein-Flügge, M. C., Bongioanni, A., & 
Rushworth, M. F. (2022).
Neuron.
https://doi.org/10.1016/j.neuron.2022.05.022

Summary

The medial frontal cortex and adjacent orbitofrontal cortex have been the focus of investigations of decision-making, behavioral flexibility, and social behavior. We review studies conducted in humans, macaques, and rodents and argue that several regions with different functional roles can be identified in the dorsal anterior cingulate cortex, perigenual anterior cingulate cortex, anterior medial frontal cortex, ventromedial prefrontal cortex, and medial and lateral parts of the orbitofrontal cortex. There is increasing evidence that the manner in which these areas represent the value of the environment and of specific choices differs from that in subcortical brain regions and is more complex than previously thought. Although activity in some regions reflects distributions of reward and opportunities across the environment, in other cases, activity reflects the structural relationships between features of the environment that animals can use to infer what decision to take even if they have not encountered identical opportunities in the past.

From the Conclusions

Neural systems that represent the value of the environment exist in many vertebrates. An extended subcortical circuit spanning the striatum, midbrain, and brainstem nuclei of mammals corresponds to these ancient systems. In addition, however, mammals possess several frontal cortical regions concerned with guidance of decision-making and adaptive, flexible behavior. Although these frontal systems interact extensively with these subcortical circuits, they make specific contributions to behavior and also influence behavior via other cortical routes. Some areas such as the ACC, which is present in a broad range of mammals, represent the distribution of opportunities in an environment over space and time, whereas other brain regions such as amFC and dmPFC have roles in representing structural associations and causal links between environmental features, including aspects of the social environment (Figure 8). Although the origins of these areas and their functions are traceable to rodents, they are especially prominent in primates. They make it possible not just to select choices on the basis of past experience of identical situations, but to make inferences to guide decisions in new scenarios.

Saturday, August 20, 2022

Truth by Repetition … without repetition: Testing the effect of instructed repetition on truth judgments

Mattavelli, S., Corneille, O., & Unkelbach, C.
Journal of Experimental Psychology: Learning, Memory, and Cognition
June 2022

Abstract

Past research indicates that people judge repeated statements as more true than new ones. An experiential consequence of repetition that may underlie this “truth effect” is processing fluency: processing statements feels easier following their repetition. In three preregistered experiments (N=684), we examined the effect of merely instructed (i.e., not experienced) repetition on truth judgments. Experiments 1-2 instructed participants that some statements were present (vs. absent) in an exposure phase allegedly undergone by other individuals. We then asked them to rate those statements as they believed the exposed individuals would have rated them. Overall, participants rated repeated statements as more true than new statements. The instruction-based repetition effects were significant but also significantly weaker than those elicited by the experience of repetition (Experiments 1 & 2). Additionally, Experiment 2 clarified that adding a repetition status tag in the experienced repetition condition did not impact truth judgments. Experiment 3 further showed that the instruction-based effect was still detectable when participants provided truth judgments for themselves rather than estimating other people’s judgments. We discuss the mechanisms that can explain these effects and their implications for advancing our understanding of the truth effect.

(Beginning of the) General Discussion 

Deciding whether information is true or false is a challenging task. Extensive research has shown that one key variable people often use to judge the truth of a statement is repetition (e.g., Hasher et al., 1977): repeated statements are judged more true than new ones (see Dechêne et al., 2010). Virtually all explanations of this truth effect refer to the processing consequences of repetition: relative to new statements, repeated statements are better recognized, feel more familiar, and are processed more fluently (see Unkelbach et al., 2019). However, in many communication situations, people learn that a statement has been repeated (e.g., that it occurred frequently) without prior exposure to the statement itself. Here, we asked whether repetition can be used as a cue for truth without prior exposure and, thus, in the absence of experiential consequences of repetition such as fluency.

Conclusion 

This work represents the first attempt to assess the impact of instructed repetition on truth judgments. We found that the truth effect was stronger when repetition was experienced rather than merely instructed in three experiments. However, we provided initial evidence that a component of the effect is unrelated to the experience of repetition. A truth effect was still detectable in the absence of any internal cue (i.e., fluency) induced by the experienced repetition of the statement and, therefore, should be conditional upon learning history or naïve beliefs. This finding paves the way for new research avenues interested in isolating the unique contribution of known repetition and experienced fluency on truth judgments.


This research has multiple applications to psychotherapy, including how patients come to know which information about themselves and others is true, and how much of that sense of truth stems from repetition versus internal cues, beliefs, or feelings. Human beings are meaning makers who try to work out how the world functions, partly from the meanings they project onto others.

Friday, August 19, 2022

Too cynical to reconnect: Cynicism moderates the effect of social exclusion on prosociality through empathy

B. K. C. Choy, K. Eom, & N. P. Li
Personality and Individual Differences
Volume 178, August 2021, 110871

Abstract

Extant findings are mixed on whether social exclusion impacts prosociality. We propose one factor that may underlie the mixed results: cynicism. Specifically, cynicism may moderate the exclusion-prosociality link by influencing interpersonal empathy. Compared to less cynical individuals, we expected highly cynical individuals who were excluded to experience less empathy and, consequently, to show less prosocial behavior. In an online ball-tossing game, participants were randomly assigned to an exclusion or inclusion condition. Consistent with our predictions, the effect of social exclusion on prosociality through empathy was contingent on cynicism, such that only less-cynical individuals responded to exclusion with greater empathy, which, in turn, was associated with higher levels of prosocial behavior. We further showed this effect to hold for cynicism, but not for other similar traits typically characterized by high disagreeableness. Findings contribute to the social exclusion literature by suggesting a key variable that may moderate social exclusion's impact on resultant empathy and prosocial behavior, and they are consistent with the perspective that people who are excluded try not only to become included again but also to establish alliances characterized by reciprocity.

From the Discussion

While others have proposed that empathy may be reflexively inhibited upon exclusion (DeWall & Baumeister, 2006; Twenge et al., 2007), our findings indicate that this process of inhibition—at least for empathy—may be more flexible than previously thought. If reflexive, individuals would have shown a similar level of empathy regardless of cynicism. That highly- and less-cynical individuals displayed different levels of empathy indicates that some other processes are in play. Our interpretation is that the process through which empathy is exhibited or inhibited may depend on one’s appraisals of the physical and social situation. 

Importantly, unlike cynicism, other similarly disagreeable dispositional traits such as Machiavellianism, psychopathy, and SDO (Social Dominance Orientation) did not modulate the empathy-mediated link between social exclusion and prosociality. This suggests that cynicism is conceptually different from other traits of a seemingly negative nature. Indeed, whereas cynics may hold a negative view of the intentions of others around them, Machiavellians are characterized by a negative view of others’ competence and a pragmatic and strategic approach to social interactions (Jones, 2016). Similarly, whereas cynics view others’ emotions as not genuine, psychopathic individuals are further distinguished by their high levels of callousness and impulsivity (Paulhus, 2014). Likewise, whereas cynics may view the world as inherently competitive, they may not display the same preference for hierarchy that high-SDO individuals do (Ho et al., 2015). Thus, despite the similarities between these traits, our findings affirm their substantive differences from cynicism.
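The design reported above is, in statistical terms, a first-stage moderated mediation: exclusion → empathy → prosociality, with cynicism moderating the first path. As a minimal sketch of how such a model is commonly estimated (the variable names and simulated data below are hypothetical illustrations, not the authors' analysis), one can fit the mediator and outcome regressions and compute conditional indirect effects at low and high cynicism:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Hypothetical data: exclusion is a 0/1 condition, cynicism a standardized trait score.
cynicism = rng.normal(0, 1, n)
exclusion = rng.integers(0, 2, n)
empathy = 0.5 * exclusion - 0.4 * exclusion * cynicism + rng.normal(0, 1, n)
prosocial = 0.6 * empathy + rng.normal(0, 1, n)
df = pd.DataFrame(dict(exclusion=exclusion, cynicism=cynicism,
                       empathy=empathy, prosocial=prosocial))

# Path a (moderated): empathy regressed on exclusion, cynicism, and their interaction.
mediator_model = smf.ols("empathy ~ exclusion * cynicism", data=df).fit()
# Path b: prosociality regressed on empathy, controlling for exclusion.
outcome_model = smf.ols("prosocial ~ empathy + exclusion", data=df).fit()

b = outcome_model.params["empathy"]
for level in (-1, 1):  # cynicism at -1 SD and +1 SD
    a = (mediator_model.params["exclusion"]
         + mediator_model.params["exclusion:cynicism"] * level)
    print(f"conditional indirect effect at cynicism {level:+d} SD: {a * b:.3f}")
```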

Thursday, August 18, 2022

Dunning–Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence

Pennycook, G., Ross, R.M., Koehler, D.J. et al. 
Psychon Bull Rev 24, 1774–1784 (2017). 
https://doi.org/10.3758/s13423-017-1242-7

Abstract

The Dunning–Kruger effect refers to the observation that the incompetent are often ill-suited to recognize their incompetence. Here we investigated potential Dunning–Kruger effects in high-level reasoning and, in particular, focused on the relative effectiveness of metacognitive monitoring among particularly biased reasoners. Participants who made the greatest numbers of errors on the cognitive reflection test (CRT) overestimated their performance on this test by a factor of more than 3. Overestimation decreased as CRT performance increased, and those who scored particularly high underestimated their performance. Evidence for this type of systematic miscalibration was also found on a self-report measure of analytic-thinking disposition. Namely, genuinely nonanalytic participants (on the basis of CRT performance) overreported their “need for cognition” (NC), indicating that they were dispositionally analytic when their objective performance indicated otherwise. Furthermore, estimated CRT performance was just as strong a predictor of NC as was actual CRT performance. Our results provide evidence for Dunning–Kruger effects both in estimated performance on the CRT and in self-reported analytic-thinking disposition. These findings indicate that part of the reason why people are biased is that they are either unaware of or indifferent to their own bias.

General discussion

Our results provide empirical support for Dunning–Kruger effects in both estimates of reasoning performance and self-reported thinking disposition. Particularly intuitive individuals greatly overestimated their performance on the CRT—a tendency that diminished and eventually reversed among increasingly analytic individuals. Moreover, self-reported analytic-thinking disposition—as measured by the Ability and Engagement subscales of the NC scale—correlated at least as strongly with estimated CRT performance as with actual CRT performance. In addition, an analysis using an additional performance-based measure of analytic thinking—the heuristics-and-biases battery—revealed a systematic miscalibration of self-reported NC, wherein relatively intuitive individuals report that they are more analytic than is justified by their objective performance. Together, these findings indicate that participants who are low in analytic thinking (so-called “intuitive thinkers”) are at least somewhat unaware of (or unresponsive to) their propensity to rely on intuition in lieu of analytic thought during decision making. This conclusion is consistent with previous research suggesting that the propensity to think analytically facilitates metacognitive monitoring during reasoning (Pennycook et al., 2015b; Thompson & Johnson, 2014). Those who are genuinely analytic are aware of the strengths and weaknesses of their reasoning, whereas those who are genuinely nonanalytic are perhaps best described as “happy fools” (De Neys et al., 2013).
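To make the miscalibration concrete, the short sketch below computes the kind of quantity the paper describes—mean self-estimated versus actual CRT performance within each actual-score group—using made-up numbers (the data and column names are illustrative, not the study's):

```python
import pandas as pd

# Hypothetical data: number of CRT items answered correctly (0-3) and the number
# each participant estimated they answered correctly.
df = pd.DataFrame({
    "crt_actual":    [0, 0, 0, 1, 1, 1, 2, 2, 3, 3],
    "crt_estimated": [2, 3, 2, 2, 3, 2, 2, 3, 3, 2],
})

calib = (df.groupby("crt_actual")["crt_estimated"]
           .mean()
           .rename("mean_estimate")
           .to_frame())
# Overestimation = mean self-estimate minus actual score within each group; a
# Dunning-Kruger pattern shows large positive values at low actual scores that
# shrink (or reverse) as actual performance rises.
calib["overestimation"] = calib["mean_estimate"] - calib.index.to_numpy()
print(calib)
```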

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted 16 JUL 22

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., 
& Young, L. (2022, May 27). 
https://doi.org/10.31234/osf.io/hsjta

Abstract

Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we experimentally show that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism for the way that the observability of actions impacts virtue discounting.

We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.
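For readers unfamiliar with the mediation vocabulary, the “72.6%” is a proportion-mediated statistic. In the standard single-mediator decomposition (a generic formula, not necessarily the authors’ exact estimator), it is the indirect effect divided by the total effect:

```latex
% Single-mediator decomposition: observability (X) -> inferred motivation (M)
% -> judged moral goodness (Y); c is the total effect of X on Y, c' the direct
% effect controlling for M, and a*b the indirect (mediated) path.
\[
c = c' + ab,
\qquad
\text{proportion mediated} = \frac{ab}{c} = \frac{ab}{ab + c'} \approx 0.726 .
\]
```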

Monday, August 15, 2022

Modular Morals: Mapping the organisation of the moral brain

Wilkinson, J., Curry, O. S., et al.
OSF Home
Last Updated: 2022-07-12

Abstract

Is morality the product of multiple domain-specific psychological mechanisms, or one domain-general mechanism? Previous research suggests that morality consists of a range of solutions to the problems of cooperation recurrent in human social life. This theory of ‘morality as cooperation’ suggests that there are (at least) seven specific moral domains: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. However, it is unclear how these types of morality are implemented at the neuroanatomical level. The possibilities are that morality is (1) the product of multiple distinct domain-specific adaptations for cooperation, (2) the product of a single domain-general adaptation which learns a range of moral rules, or (3) the product of some combination of domain-specific and domain-general adaptations. To distinguish between these possibilities, we first conducted an anatomical likelihood estimation meta-analysis of previous studies investigating the relationship between these seven moral domains and neuroanatomy. This meta-analysis provided evidence for a combination of specific and general adaptations. Next, we investigated the relationship between the seven types of morality – as measured by the Morality as Cooperation Questionnaire (Relevance) – and grey matter volume in a large neuroimaging (n=607) sample. No associations between moral values and grey matter volume survived whole-brain exploratory testing. We conclude that whatever combination of mechanisms are responsible for morality, either they are not neuroanatomically localised, or else their localisation is not manifested in grey matter volume. Future research should employ phylogenetically informed a priori predictions, as well as alternative measures of morality and of brain function.
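As a sketch of what the whole-brain exploratory testing of questionnaire–grey-matter associations involves in outline (the region count, simulated data, and FDR correction below are editorial assumptions, not the study's actual pipeline), one can run a mass-univariate association between each moral-domain score and regional grey matter volume and then correct for multiple comparisons:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_subjects, n_regions = 607, 100  # sample size mirrors the paper; region count is arbitrary

# Hypothetical inputs: one moral-domain relevance score per subject and regional
# grey matter volumes (assumed already adjusted for age, sex, total brain volume).
domain_score = rng.normal(0, 1, n_subjects)
gmv = rng.normal(0, 1, (n_subjects, n_regions))

# Mass-univariate testing: correlate the score with grey matter volume in each region.
pvals = np.array([stats.pearsonr(domain_score, gmv[:, r])[1] for r in range(n_regions)])

# Correct for the number of tests (FDR here; a voxel-wise analysis would use its
# own whole-brain correction, e.g., family-wise error at the cluster level).
rejected, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"regions surviving correction: {rejected.sum()} of {n_regions}")
```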