Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Biases.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Tuesday, December 6, 2022

Countering cognitive biases on experts’ objectivity in court

Kathryn A. LaFortune
Monitor on Psychology
Vol. 53 No. 6
Print version: page 47

Mental health professionals’ opinions can be extremely influential in legal proceedings. Yet, current research is inconclusive about the effects of various cognitive biases on experts’ objectivity when making forensic mental health judgments and which biases most influence these decisions, according to a 2022 study in Law and Human Behavior by psychologists Tess Neal, Pascal Lienert, Emily Denne, and Jay Singh (Vol. 46, No. 2, 2022). The study also pointed to the need for more research on which debiasing strategies effectively counter bias in forensic mental health decisions and whether there should be specific policies and procedures to address these unique aspects of forensic work in mental health.

In the study, researchers conducted a systematic review of the relevant literature on forensic mental health decision-making. Most of the available studies they reviewed did not define “bias” in the context of forensic mental health judgments. The study also noted that only a few forms of bias have been explored as they pertain specifically to forensic mental health professionals’ opinions. Adversarial allegiance, confirmation bias, hindsight bias, and bias blind spot have not been rigorously studied for potential negative effects on forensic mental health expert opinions across different contexts.

The importance of addressing these concerns is heightened when considering APA’s Ethics Code provisions that require psychologists to decline a professional role if bias may diminish their objectivity (see Ethical Principles of Psychologists and Code of Conduct, Section 3.06). Similarly, the Specialty Guidelines for Forensic Psychologists advises forensic practitioners to decline participation in cases when potential biases may impact their impartiality or to take steps to correct or limit the effects of the bias (Section 2.07). That said, unlike in other professions where tasks are often repetitive, decision-making in the field of forensic psychology is impacted by the unique nature of the various referrals that forensic psychologists receive, making it even more difficult to expect them to consider and correct how their culture, attitudes, values, beliefs, and biases might affect their work. They engage in greater subjectivity in selecting assessment tools from a large array of available tests, none of which are uniformly adopted in cases, in part because of the wide range of questions experts often must answer to assist the court and the current lack of standardized methods. Neither do experts typically receive immediate feedback on their opinions. This study also noted that the only debiasing strategy shown to be effective for forensic psychologists was to “consider the opposite,” in which experts ask themselves why their opinions might be wrong and what alternatives they may have considered.

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem” — a hypothetical social scenario, formulated long before AI came into being, in which a decision has to be made whether to alter the route of an out-of-control streetcar heading towards a disaster zone. The decision that needs to be made — in a split second — is whether to switch from the original track where the streetcar may kill several people tied to the track, to an alternative track where, presumably, a single person would die.

While there are many other analogies that can be made about difficult decisions, the trolley problem is regarded as the pinnacle exhibition of ethical and moral decision making. Can it be applied to AI systems to measure whether AI is ready for the real world, in which machines would think independently and make the same justifiable ethical and moral decisions that humans would make?

Trolley problems in AI come in all shapes and sizes, and decisions don’t necessarily need to be so deadly — though the decisions AI renders could mean trouble for a business, individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment” during a stay in an Airbnb-rented house in upstate New Hampshire. Despite amazing preview pictures and positive reviews, the place was poorly maintained, a dump with condemned houses adjacent to it. The author was going to give the place a low one-star rating and a negative review, to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, inquiring if the author and his family were comfortable and if they had everything they needed. During the conversation, the host offered to pick up some fresh fruit from a nearby farmers market. She also said that because she did not have a car, she would walk a mile to a friend’s place, and the friend would then drive her to the market. She also described her hardships over the past two years: rentals slumped because of Covid, and she was caring for someone sick full time.

Upon learning this, the author elected not to post the negative review. While the initial decision — to write a negative review — was based on facts, the decision not to post the review was purely a subjective human decision. In this case, the trolley problem was concern for the welfare of the elderly homeowner superseding consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Monday, September 19, 2022

The impact of economic inequality on conspiracy beliefs

Salvador Casara, B. G., Suitner, C., & Jetten, J. (2022).
Journal of Experimental Social Psychology, 98, 104245.
https://doi.org/10.1016/j.jesp.2021.104245

Abstract

Previous literature highlights the crucial role of economic inequality in triggering a range of negative societal outcomes. However, the relationship between economic inequality and the proliferation of conspiracy beliefs remains unexplored. Here, we explore the endorsement of conspiracy beliefs as an outcome of objective country-level (Study 1a, 1b, 1c), perceived (Study 2), and manipulated economic inequality (Studies 3a, 3b, 4a, 4b). In the correlational studies, both objective and perceived economic inequality were associated with greater conspiracy beliefs. In the experiments, participants in the high (compared to the low) inequality condition were more likely to endorse conspiratorial narratives. This effect was fully mediated by anomie (Studies 3a, 3b) suggesting that inequality enhances the perception that society is breaking down (anomie), which in turn increases conspiratorial thinking, possibly in an attempt to regain some sense of order and control. Furthermore, the link between economic inequality and conspiracy beliefs was stronger when participants endorsed a conspiracy worldview (Studies 4a, 4b). Moreover, conspiracy beliefs mediated the effect of the economic inequality manipulation on willingness to engage in collective action aimed at addressing economic inequality. The results show that economic inequality and conspiracy beliefs go hand in hand: economic inequality can cause conspiratorial thinking and conspiracy beliefs can motivate collective action against economic inequality.
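
A minimal Python sketch of the mediation logic described in the abstract (inequality leading to anomie, which in turn leads to conspiracy beliefs), using simulated data and ordinary least-squares regressions. The variable names and effect sizes are invented for illustration only; this is not the authors' data or analysis code.

import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Simulated data consistent with the mediation story (all effects invented):
inequality = rng.normal(size=n)                        # manipulated/perceived inequality
anomie = 0.5 * inequality + rng.normal(size=n)         # mediator
conspiracy = 0.4 * anomie + 0.05 * inequality + rng.normal(size=n)  # outcome

def ols_slopes(y, predictors):
    """OLS coefficients for y ~ predictors (with intercept); returns slopes only."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total_effect, = ols_slopes(conspiracy, [inequality])
a_path, = ols_slopes(anomie, [inequality])
direct_effect, b_path = ols_slopes(conspiracy, [inequality, anomie])

indirect_effect = a_path * b_path
print(f"Total effect:    {total_effect:.3f}")
print(f"Direct effect:   {direct_effect:.3f}")
print(f"Indirect effect: {indirect_effect:.3f} (via anomie)")
# Full mediation corresponds to the direct effect shrinking toward zero
# once the mediator (anomie) is included in the model.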

From the General Discussion

It is also important to consider whether economic inequality triggers the endorsement of general or more specific conspiracy beliefs. Data from Studies 3a and 3b showed that the manipulation of economic inequality affects the endorsement of a wide range of conspiracy beliefs— general conspiracy beliefs as well as conspiracies that relate to the specific fictional society. In Studies 4a and 4b, we found that inequality enhanced the belief in conspiracies perpetrated by different groups in the specific fictional society (i.e., politicians, scientists, multinational companies, and pharmaceutical industries) while it did not affect participants’ conspiracy worldview. Future research should focus on the impact of economic inequality on the endorsement of specific versus more general conspiracy theories. It may well be the case that the relation between economic inequality and conspiracy belief endorsement is stronger when participants consider specific conspiracy beliefs that blame an outgroup for heightened anomie that results from economic inequality. Such conspiracy beliefs best serve the function of mobilizing collective ingroup action that might hold the promise of providing people with a sense of collective agency (or control; see Bukowski et al., 2017).

These results have important implications. First, those who are prone to believe in conspiracy theories are sometimes viewed as driven by irrationality — a vision that is indeed supported by a vast literature about the negative consequences of conspiracy beliefs (e.g., Jolley & Douglas, 2014; Lewandowsky et al., 2013; Van der Linden, 2015). Other findings show that conspiracy beliefs are associated with dispositional constructs that are prodromal of mental disease, such as schizotypy and delusional thinking (Barron et al., 2018; Darwin et al., 2011). However, factors that trigger conspiracy beliefs are not always irrational and they may be driven by anomie-prompted socio-structural perceptions about societies, such as economic inequality. 

Thursday, September 8, 2022

Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues

Light, N. et al. 
Science Advances, 20 Jul 2022
Vol 8, Issue 29
DOI: 10.1126/sciadv.abo0038

Abstract

Public attitudes that are in opposition to scientific consensus can be disastrous and include rejection of vaccines and opposition to climate change mitigation policies. Five studies examine the interrelationships between opposition to expert consensus on controversial scientific issues, how much people actually know about these issues, and how much they think they know. Across seven critical issues that enjoy substantial scientific consensus, as well as attitudes toward COVID-19 vaccines and mitigation measures like mask wearing and social distancing, results indicate that those with the highest levels of opposition have the lowest levels of objective knowledge but the highest levels of subjective knowledge. Implications for scientists, policymakers, and science communicators are discussed.

Discussion

Results from five studies show that the people who disagree most with the scientific consensus know less about the relevant issues, but they think they know more. These results suggest that this phenomenon is fairly general, although the relationships were weaker for some more polarized issues, particularly climate change. It is important to note that we document larger mismatches between subjective and objective knowledge among participants who are more opposed to the scientific consensus. Thus, although broadly consistent with the Dunning-Kruger effect and other research on knowledge miscalibration, our findings represent a pattern of relationships that goes beyond overconfidence among the least knowledgeable. However, the data are correlational, and the normal caveats apply.

A strength of these studies is the consistency of the main result across the overall models in studies 1 to 3 and specific (but different) instantiations of anti-consensus attitudes about COVID-19 in studies 4 and 5. Additional strengths are that study 5 is a conceptual replication of study 4 (and studies 1 to 3 more generally) using different measures and operationalizations of the main constructs, conducted by an initially independent group of researchers (with each group unaware of the research of the other during study development and data collection). The final two studies were also collected approximately 2 months apart, in July and September 2020, respectively. These two collection periods reflect the dynamic nature of the COVID-19 pandemic in the United States, with cases in July trending upward and cases in September flat or trending downward. The consistency of our effects across these 2 months suggests that the pattern of results is fairly robust.

One possible interpretation of these relationships is that the people who appear to be overconfident in their knowledge and extreme in their opposition to the consensus are actually reporting their sense of understanding for a set of incorrect alternative facts not those of the scientific community. After all, nonscientific explanations and theories tend to be much simpler and less mechanistic than scientific ones.  As a result, participants could be reporting higher levels of understanding for what are, in fact, simpler interpretations. However, we believe that several elements of this research speak against this interpretation fully explaining the results. First, the battery of objective knowledge questions is sufficiently broad, simple, and removed (at first glance) from the corresponding scientific issues. For example, not knowing that “the skin is the largest organ in the human body” does not suggest that participants hold alternative views about how the human body works; it suggests the lack of real knowledge about the body. We also believe that it does not cue participants to the fact that the question is related to vaccination. 

Thursday, August 18, 2022

Dunning–Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence

Pennycook, G., Ross, R.M., Koehler, D.J. et al. 
Psychon Bull Rev 24, 1774–1784 (2017). 
https://doi.org/10.3758/s13423-017-1242-7

Abstract

The Dunning–Kruger effect refers to the observation that the incompetent are often ill-suited to recognize their incompetence. Here we investigated potential Dunning–Kruger effects in high-level reasoning and, in particular, focused on the relative effectiveness of metacognitive monitoring among particularly biased reasoners. Participants who made the greatest numbers of errors on the cognitive reflection test (CRT) overestimated their performance on this test by a factor of more than 3. Overestimation decreased as CRT performance increased, and those who scored particularly high underestimated their performance. Evidence for this type of systematic miscalibration was also found on a self-report measure of analytic-thinking disposition. Namely, genuinely nonanalytic participants (on the basis of CRT performance) overreported their “need for cognition” (NC), indicating that they were dispositionally analytic when their objective performance indicated otherwise. Furthermore, estimated CRT performance was just as strong a predictor of NC as was actual CRT performance. Our results provide evidence for Dunning–Kruger effects both in estimated performance on the CRT and in self-reported analytic-thinking disposition. These findings indicate that part of the reason why people are biased is that they are either unaware of or indifferent to their own bias.

General discussion

Our results provide empirical support for Dunning–Kruger effects in both estimates of reasoning performance and self-reported thinking disposition. Particularly intuitive individuals greatly overestimated their performance on the CRT—a tendency that diminished and eventually reversed among increasingly analytic individuals. Moreover, self-reported analytic-thinking disposition—as measured by the Ability and Engagement subscales of the NC scale—was just as strongly (if not more strongly) correlated with estimated CRT performance as with actual CRT performance. In addition, an analysis using an additional performance-based measure of analytic thinking—the heuristics-and-biases battery—revealed a systematic miscalibration of self-reported NC, wherein relatively intuitive individuals report that they are more analytic than is justified by their objective performance. Together, these findings indicate that participants who are low in analytic thinking (so-called “intuitive thinkers”) are at least somewhat unaware of (or unresponsive to) their propensity to rely on intuition in lieu of analytic thought during decision making. This conclusion is consistent with previous research that has suggested that the propensity to think analytically facilitates metacognitive monitoring during reasoning (Pennycook et al., 2015b; Thompson & Johnson, 2014). Those who are genuinely analytic are aware of the strengths and weaknesses of their reasoning, whereas those who are genuinely nonanalytic are perhaps best described as “happy fools” (De Neys et al., 2013).
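
To make this kind of miscalibration pattern concrete, here is a small Python sketch that compares estimated with actual scores by performance group. The numbers are simulated and purely illustrative, not the study's data, and the grouping cutoffs are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(7)
n = 500

# Simulated CRT-style scores on a 0-7 scale (purely illustrative).
actual = rng.integers(0, 8, size=n)
# Low scorers tend to overestimate; high scorers slightly underestimate.
estimated = np.clip(actual + np.round(rng.normal(loc=(4 - actual) * 0.6, scale=1.0)), 0, 7)

for lo, hi, label in [(0, 2, "low scorers"), (3, 4, "middle"), (5, 7, "high scorers")]:
    mask = (actual >= lo) & (actual <= hi)
    overestimate = (estimated[mask] - actual[mask]).mean()
    print(f"{label:12s}: mean (estimated - actual) = {overestimate:+.2f}")
# A Dunning-Kruger-style pattern shows up as large positive miscalibration
# among low scorers that shrinks (or reverses) among high scorers.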

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted 16 JUL 22

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to such prompts, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Monday, July 18, 2022

The One That Got Away: Overestimation of Forgone Alternatives as a Hidden Source of Regret

Feiler, D., & Müller-Trede, J. (2022).
Psychological Science, 33(2), 314–324.
https://doi.org/10.1177/09567976211032657

Abstract

Past research has established that observing the outcomes of forgone alternatives is an important driver of regret. In this research, we predicted and empirically corroborated a seemingly opposite result: Participants in our studies were more likely to experience regret when they did not observe a forgone outcome than when it was revealed. Our prediction drew on two theoretical observations. First, feelings of regret frequently stem from comparing a chosen option with one’s belief about what the forgone alternative would have been. Second, when there are many alternatives to choose from under uncertainty, the perceived attractiveness of the almost-chosen alternative tends to exceed its reality. In four preregistered studies (Ns = 800, 599, 150, and 197 adults), we found that participants predictably overestimated the forgone path, and this overestimation caused undue regret. We discuss the psychological implications of this hidden source of regret and reconcile the ostensible contradiction with past research.

Statement of Relevance

Reflecting on our past decisions can often make us feel regret. Previous research suggests that feelings of regret stem from comparing the outcome of our chosen path with that of the unchosen path.  We present a seemingly contradictory finding: Participants in our studies were more likely to experience regret when they did not observe the forgone outcome than when they saw it. This effect arises because when there are many paths to choose from, and uncertainty exists about how good each would be, people tend to overestimate the almost-chosen path. An idealized view of the path not taken then becomes an unfair standard of comparison for the chosen path, which inflates feelings of regret. Excessive regret has been found to be associated with depression and anxiety, and our work suggests that there may be a hidden source of undue regret—overestimation of forgone paths—that may contribute to these problems.

The ending...

Finally, is overestimating the paths we do not take causing us too much regret? Although regret can have benefits for experiential learning, it is an inherently negative emotion and has been found to be associated with depression and excessive anxiety (Kocovski et al., 2005; Markman & Miller, 2006; Roese et al., 2009). Because the regret in our studies was driven by biased beliefs, it may be excessive—after all, better-calibrated beliefs about forgone alternatives would cause less regret. Whether calibrating beliefs about forgone alternatives could also help in alleviating regret’s harmful psychological consequences is an important question for future research.


Important implications for psychotherapy....

Thursday, July 14, 2022

What nudge theory got wrong

Tim Harford
The Financial Times
Originally posted 

Here is an excerpt:

Chater and Loewenstein argue that behavioural scientists naturally fall into the habit of seeing problems in the same way. Why don’t people have enough retirement savings? Because they are impatient and find it hard to save rather than spend. Why are so many greenhouse gases being emitted? Because it’s complex and tedious to switch to a green electricity tariff. If your problem is basically that fallible individuals are making bad choices, behavioural science is an excellent solution.

If, however, the real problem is not individual but systemic, then nudges are at best limited, and at worst, a harmful diversion. Historians such as Finis Dunaway now argue that the Crying Indian campaign was a deliberate attempt by corporate interests to change the subject. Is behavioural public policy, accidentally or deliberately, a similar distraction?

A look at climate change policy suggests it might be. Behavioural scientists themselves are clear enough that nudging is no real substitute for a carbon price — Thaler and Sunstein say as much in Nudge. Politicians, by contrast, have preferred to bypass the carbon price and move straight to the pain-free nudging.

Nudge enthusiast David Cameron, in a speech given shortly before he became prime minister, declared that “the best way to get someone to cut their electricity bill” was to cleverly reformat the bill itself. This is politics as the art of avoiding difficult decisions. No behavioural scientist would suggest that it was close to sufficient. Yet they must be careful not to become enablers of the One Weird Trick approach to making policy.

-------

Behavioural science has a laudable focus on rigorous evidence, yet even this can backfire. It is much easier to produce a quick randomised trial of bill reformatting than it is to evaluate anything systemic. These small quick wins are only worth having if they lead us towards, rather than away from, more difficult victories.

Another problem is that empirically tested, behaviourally rigorous bad policy can be bad policy nonetheless. For example, it has become fashionable to argue that people should be placed on an organ donor registry by default, because this dramatically expands the number of people registered as donors. But, as Thaler and Sunstein themselves keep having to explain, this is a bad idea. Most organ donation happens only after consultation with a grieving family — and default-bloated donor registries do not help families work out what their loved one might have wanted.


Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldeway
TechCrunch
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators that worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.
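
As a rough illustration of how such a pattern might be detected, the Python sketch below counts how often annotations begin with a phrase that also appears in the annotation instructions. The example instructions, questions, and four-token pattern rule are all made up for this sketch; the study's actual method may differ.

from collections import Counter

# Hypothetical annotation instructions and crowd-written questions
# (illustrative only; not the real Quoref data or instructions).
instruction_examples = [
    "What is the name of the person who found the treasure?",
    "What is the name of the city the author moved to?",
    "Who discovered the hidden room?",
]

annotations = [
    "What is the name of the dog that saved the child?",
    "What is the name of Maria's brother?",
    "Who wrote the letter?",
    "What is the name of the ship's captain?",
]

def leading_ngram(text, n=4):
    """First n lowercase tokens of a string, used as a crude pattern key."""
    return " ".join(text.lower().split()[:n])

instruction_patterns = {leading_ngram(example) for example in instruction_examples}
counts = Counter(leading_ngram(annotation) for annotation in annotations)

for pattern, count in counts.most_common():
    share = count / len(annotations)
    flag = "  <- also appears in the instructions" if pattern in instruction_patterns else ""
    print(f"{share:5.1%}  {pattern!r}{flag}")
# A large share of annotations starting with a phrase copied from the
# instructions is the kind of "instruction bias" the researchers describe.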

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.



Sunday, June 19, 2022

Anti-Black Racism as a Chronic Condition

Nneka Sederstrom and Tamika Lasege,
In A Critical Moment in Bioethics: Reckoning with Anti-Black Racism
through Intergenerational Dialogue, ed. Faith E. Fletcher et al.,
Special Report, Hastings Center Report 52, no. 2 (2022): S24-S29.

Abstract

Because America has a foundation of anti-Black racism, being born Black in this nation yields an identity that breeds the consequences of a chronic condition. This article highlights several ways in which medicine and clinical ethics, despite the former's emphasis on doing no harm and the latter's emphasis on nonmaleficence, fail to address or acknowledge some of the key ways in which physicians can—and do—harm patients of color. To understand harm in a way that can provide real substance for ethical standards in the practice of medicine, physicians need to think about how treatment decisions are constrained by a patient's race. The color of one's skin can and does negatively affect the quality of a person's diagnosis, promoted care plan, and prognosis. Yet racism in medicine and bioethics persist—because a racist system serves the interests of the dominant caste, White people. As correctives to this system, the authors propose several antiracist commitments physicians or ethicists can make.

(cut)

Here are some commitments to add to a newly revised Hippocratic oath: We shall stop denying that racism exists in medicine. We shall face the reality that we fail to train and equip our clinicians with the ability to effectively make informed clinical decisions using the reality of how race impacts health outcomes. We shall address the lack of the declaration of racism as a bioethics priority and work to train ethicists on how to engage in antiracism work. We shall own the effects of racism at every level in health care and the academy. Attempting to talk about everything except racism is another form of denial, privilege, and power that sustains racism. We will not have conversations about disproportionally high rates of “minority” housing insecurity, food scarcity, noncompliance with treatment plans, “drug-seeking behavior,” complex social needs, or “disruptive behavior” or rely on any other terms that are disguised proxies for racism without explicitly naming racism. As ethicists, we will not engage in conversations around goal setting, value judgments, benefits and risks of interventions, autonomy and capacity, or any other elements around the care of patients without naming racism.

So where do we go from here? How do we address the need to decolonize medicine and bioethics? When do we stop being inactive and start being proactive? It starts upstream with improving the medical education and bioethics curricula to accurately and thoroughly inform students on the social and biological sciences of human beings who are not White in America. Then, and only then, will we breed a generation of race-conscious clinicians and ethicists who can understand and interpret the historic inequities in our system and ultimately be capable of providing medical care and ethical analysis that reflect the diversity of our country. Clinical ethics program development must include antiracism training to develop clinical ethicists who have the skills to recognize and address racism at the bedside in clinical ethics consultation. It requires changing the faces in the field and addressing the extreme lack of racial diversity in bioethics. Increasing the number of clinicians of color in all professions within medicine, but especially the numbers of physicians, advance practice providers, and clinical ethicists, is imperative to the goal of improving patient outcomes for Black and brown populations.

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in Computing Systems (CHI '22),
April 29-May 5, 2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust in, and ascribed more responsibility to, human experts, but showed higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, the AI's lack of responsibility can be a problem when AI is implemented for ethical decision making. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for decisions that the AI proposed when outcomes are negative.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and the respective experts), or the reliance they displayed. A large part of the discussion on the use of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. an AI expert.

One conclusion from this finding that the control conditions of AI may be less relevant than expected is that the focus of human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value guided the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), such that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed to counteract human biases in ethical decision making (e.g., by pointing to the possibility that human deciders focus solely on utility maximization and thereby neglect fundamental rights of individuals) such that those biases can become part of the deliberation process.

Tuesday, June 14, 2022

Minority salience and the overestimation of individuals from minority groups in perception and memory

R. Kadosh, A. Y. Sklar, et al. 
PNAS (2022).
Vol 119 (12) 1-10.

Abstract

Our cognitive system is tuned toward spotting the uncommon and unexpected. We propose that individuals coming from minority groups are, by definition, just that—uncommon and often unexpected. Consequently, they are psychologically salient in perception, memory, and visual awareness. This minority salience creates a tendency to overestimate the prevalence of minorities, leading to an erroneous picture of our social environments—an illusion of diversity. In 12 experiments with 942 participants, we found evidence that the presence of minority group members is indeed overestimated in memory and perception and that masked images of minority group members are prioritized for visual awareness. These findings were consistent when participants were members of both the majority group and the minority group. Moreover, this overestimated prevalence of minorities led to decreased support for diversity-promoting policies. We discuss the theoretical implications of the illusion of diversity and how it may inform more equitable and inclusive decision-making.

Significance

Our minds are tuned to the uncommon or unexpected in our environment. In most environments, members of minority groups are just that—uncommon. Therefore, the cognitive system is tuned to spotting their presence. Our results indicate that individuals from minority groups are salient in perception, memory, and visual awareness. As a result, we consistently overestimate their presence—leading to an illusion of diversity: the environment seems to be more diverse than it actually is, decreasing our support for diversity-promoting measures. As we try to make equitable decisions, it is important that private individuals and decision-makers alike become aware of this biased perception. While these sorts of biases can be counteracted, one must first be aware of the bias.

Discussion

Taken together, our results from 12 experiments and 942 participants indicate that minority salience and overestimation are robust phenomena. We consistently overestimate the prevalence of individuals from minority groups and underestimate the prevalence of members from the majority group, thus perceiving our social environments as more diverse than they truly are. Our experiments also indicate that this effect may be found at the level of priority for visual awareness and that it is social in nature: our social knowledge, our representation of the overall composition of our social environment, shapes this effect. Importantly, this illusion of diversity is consequential in that it leads to less support for measures to increase diversity.

Sunday, June 5, 2022

The death penalty: The past and uncertain future of executions in America

C. Geidner, J. Lambert & K. Philo
Grid News
Originally posted 28 APR 22

Overview

South Carolina may soon carry out the United States’ first executions by firing squad in more than a decade. State officials have said that they plan to execute Richard Moore and Brad Sigmon using guns, the first such use of a firing squad since Ronnie Gardner was shot to death by the state of Utah on June 18, 2010.

Last week, nine days before Moore was to be executed, South Carolina’s Supreme Court put the execution on hold, but there’s no way of knowing how long that will last. Days later, the court also put Sigmon’s execution — scheduled for May — on hold. Although the court did not explain its reasoning, both men have an ongoing challenge to the state’s execution protocol, including its planned use of a firing squad.

How did we get here?

More than 45 years after the Supreme Court allowed executions to resume in the United States after a four-year hiatus, America is in a monthlong period in which five states planned to carry out six executions — the most in several years.

The situation offers a window into changing attitudes toward the death penalty and the complex brew of factors that have made these executions harder to carry out but also harder to challenge in courts. And the individual stories behind some of these current cases serve as a reminder of the well-documented racial bias in the way death sentences are handed down.

The death penalty’s popularity with the public has diminished in recent decades, and the overall number of new death sentences and executions has dropped significantly.

That’s due in part to the increased difficulty of carrying out lethal injection executions after death penalty opponents made it substantially harder for states to obtain the necessary drugs. States responded in part by adopting untried drug combinations. A series of botched executions followed — including the longest execution in U.S. history, when Arizona spent nearly two hours trying to kill Joseph Wood by using 15 doses of its execution drugs on the man before he died.

During that same time, the Supreme Court has made it more difficult to challenge any method of execution, setting a high bar for a method to be disallowed and by requiring challengers to identify an alternative method of execution.

Robert Dunham, the executive director of the Death Penalty Information Center, a nonpartisan organization that maintains a comprehensive database of U.S. executions, told Grid that part of the current influx of execution dates is a result of most states halting executions during the first year of the pandemic, before a covid vaccine was available.

This past week, Texas carried out its first execution of the year when it executed 78-year-old Carl Buntion. Tennessee also had planned an execution for last week, but it was called off with an announcement that highlighted two key elements of the modern death penalty: secrecy and errors. Hours before the state was slated to execute Oscar Franklin Smith by lethal injection, Gov. Bill Lee (R), citing “an oversight in preparation for lethal injection,” announced a reprieve. The execution will not happen before June, but state officials have not yet said anything more about what led to the last-minute reprieve.

Monday, May 23, 2022

Recognizing and Dismantling Raciolinguistic Hierarchies in Latinx Health

Ortega, P., et al.
AMA J Ethics. 2022;24(4):E296-304.
doi: 10.1001/amajethics.2022.296.

Abstract

Latinx individuals represent a linguistically and racially diverse, growing US patient population. Raciolinguistics considers intersections of language and race, prioritizes lived experiences of non-English speakers, and can help clinicians more deftly conceptualize heterogeneity and complexity in Latinx health experiences. This article discusses how raciolinguistic hierarchies (ie, practices of attaching social value to some languages but not others) can undermine the quality of Latinx patients’ health experiences. This article also offers language-appropriate clinical and educational strategies for promoting health equity.

Raciolinguistics

Hispanic/Latinx (hereafter, Latinx) individuals in the United States represent a culturally, racially, and linguistically diverse and rapidly growing population. Attempting to categorize all Latinx individuals in a single homogeneous group may result in inappropriate stereotyping,1 inaccurate counting,2, 3 ineffective health interventions that insufficiently target at-risk subgroups,4 and suboptimal health communication.5 A more helpful approach is to use raciolinguistics to conceptualize the heterogeneous, complex Latinx experience as it relates to health. Raciolinguistics is the study of the historical and contemporary co-naturalization of race and language and their intertwining in the identities of individuals and communities. As an emerging field that grapples with the intersectionality of language and race, raciolinguistics provides a unique perspective on the lived experiences of people who speak non-English languages and people of color.6 As such, understanding raciolinguistics is relevant to providing language-concordant care7 to patients with limited English proficiency (LEP), who have been historically marginalized by structural barriers, racism, and other forms of discrimination in health care.

In this manuscript, we explore how raciolinguistics can help clinicians to appropriately conceptualize the heterogeneous, complex Latinx experience as it relates to health care. We then use the raciolinguistic perspective to inform strategies to dismantle structural barriers to health equity for Latinx patients pertaining to (1) Latinx patients’ health care experiences and (2) medical education.

(cut)

Conclusions

A raciolinguistic perspective can inform how health care practices and medical education should be critically examined to support Latinx populations comprising heterogeneous communities and complex individuals with varying and intersecting cultural, social, linguistic, racial, ancestral, spiritual, and other characteristics. Future studies should explore the outcomes of raciolinguistic reforms of health services and educational interventions across the health professions to ensure effectiveness in improving health care for Latinx patients.

Tuesday, May 17, 2022

Why it’s so damn hard to make AI fair and unbiased

Sigal Samuel
Vox.com
Originally posted 19 APR 2022

Here is an excerpt:

So what do big players in the tech space mean, really, when they say they care about making AI that’s fair and unbiased? Major organizations like Google, Microsoft, even the Department of Defense periodically release value statements signaling their commitment to these goals. But they tend to elide a fundamental reality: Even AI developers with the best intentions may face inherent trade-offs, where maximizing one type of fairness necessarily means sacrificing another.

The public can’t afford to ignore that conundrum. It’s a trap door beneath the technologies that are shaping our everyday lives, from lending algorithms to facial recognition. And there’s currently a policy vacuum when it comes to how companies should handle issues around fairness and bias.

“There are industries that are held accountable,” such as the pharmaceutical industry, said Timnit Gebru, a leading AI ethics researcher who was reportedly pushed out of Google in 2020 and who has since started a new institute for AI research. “Before you go to market, you have to prove to us that you don’t do X, Y, Z. There’s no such thing for these [tech] companies. So they can just put it out there.”

That makes it all the more important to understand — and potentially regulate — the algorithms that affect our lives. So let’s walk through three real-world examples to illustrate why fairness trade-offs arise, and then explore some possible solutions.

How would you decide who should get a loan?

Here’s another thought experiment. Let’s say you’re a bank officer, and part of your job is to give out loans. You use an algorithm to help you figure out whom you should loan money to, based on a predictive model — chiefly taking into account their FICO credit score — about how likely they are to repay. Most people with a FICO score above 600 get a loan; most of those below that score don’t.

One type of fairness, termed procedural fairness, would hold that an algorithm is fair if the procedure it uses to make decisions is fair. That means it would judge all applicants based on the same relevant facts, like their payment history; given the same set of facts, everyone will get the same treatment regardless of individual traits like race. By that measure, your algorithm is doing just fine.

But let’s say members of one racial group are statistically much more likely to have a FICO score above 600 and members of another are much less likely — a disparity that can have its roots in historical and policy inequities like redlining that your algorithm does nothing to take into account.

Another conception of fairness, known as distributive fairness, says that an algorithm is fair if it leads to fair outcomes. By this measure, your algorithm is failing, because its recommendations have a disparate impact on one racial group versus another.
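
To make the trade-off concrete, here is a minimal Python sketch under invented assumptions: a single score threshold is procedurally fair because everyone is judged by the same rule, yet approval rates can still differ sharply between groups whose score distributions differ, which is what a distributive-fairness test would flag. The score distributions, group labels, and cutoff below are illustrative, not real lending data.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FICO-like scores for two groups whose distributions differ
# (illustrative numbers only, not real data).
group_a = rng.normal(loc=640, scale=60, size=10_000)
group_b = rng.normal(loc=590, scale=60, size=10_000)

THRESHOLD = 600  # the same rule applied to everyone: procedural fairness

def approval_rate(scores, threshold=THRESHOLD):
    """Share of applicants at or above the cutoff."""
    return float(np.mean(scores >= threshold))

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

print(f"Group A approval rate: {rate_a:.1%}")
print(f"Group B approval rate: {rate_b:.1%}")
print(f"Disparate impact ratio (B/A): {rate_b / rate_a:.2f}")
# The procedure is identical for both groups, but the outcomes are not:
# this gap is what a distributive-fairness criterion would flag.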

Wednesday, May 11, 2022

Bias in mental health diagnosis gets in the way of treatment

Howard N. Garb
psyche.co
Originally posted 2 MAR 22

Here is an excerpt:

What about race-related bias? 

Research conducted in the US indicates that race bias is a serious problem for the diagnosis of adult mental disorders – including for the diagnosis of PTSD, depression and schizophrenia. Preliminary data also suggest that eating disorders are underdiagnosed in Black teens compared with white and Hispanic teens.

The misdiagnosis of PTSD can have significant economic consequences, in addition to its implications for treatment. In order for a US military veteran to receive disability compensation for PTSD from the Veterans Benefits Administration, a clinician has to diagnose the veteran. To learn if race bias is present in this process, a research team compared its own systematic diagnoses of veterans with diagnoses made by clinicians during disability exams. Though most clinicians will make accurate diagnoses, the research diagnoses can be considered more accurate, as the mental health professionals who made them were trained to adhere to diagnostic criteria and use extensive information. When veterans received a research diagnosis of PTSD, they should have also gotten a clinician’s diagnosis of PTSD – but this occurred only about 70 per cent of the time.

More troubling is that, in cases where research diagnoses of PTSD were made, Black veterans were less likely than white veterans to receive a clinician’s diagnosis of PTSD during their disability exams. There was one set of cases where bias was not evident, however. In roughly 25 per cent of the evaluations, clinicians administered a formal PTSD symptom checklist or a psychological test to help them make a diagnosis – and if this additional information was collected, race bias was not observed. This is an important finding. Clinicians will sometimes form a first impression of a patient’s condition and then ask questions that can confirm – but not refute – their subjective impression. By obtaining good-quality objective information, clinicians might be less inclined to depend on their subjective impressions alone.

Race bias has also been found for other forms of mental illness. Historically, research indicated that Black patients and sometimes Hispanic patients were more likely than white patients to be given incorrect diagnoses of schizophrenia, while white patients were more often given correct diagnoses of major depression and bipolar disorder. During the past 20 years, this appears to have changed somewhat, with the most accurate diagnoses being made for Latino patients, the least accurate for Black patients, and the results for white patients somewhere in between.

Wednesday, February 23, 2022

I See Color

Khama Ennis
On The Flip Side
Original date: February 13, 2020

9 minutes worth watching: Patient biases versus professional obligations

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000) and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation), this motivation to assess moral character may lead to an overreliance on heuristic cues.

Friday, December 31, 2021

Dear White People: Here Are 5 Uncomfortable Truths Black Colleagues Need You To Know

Dana Brownlee
Forbes.com
Originally posted 16 June 2020

While no one has a precise prescription for how to eradicate racial injustice in the workplace, I firmly believe that a critical first step is embracing the difficult conversations and uncomfortable truths that we’ve become too accustomed to avoiding. The baseline uncomfortable truth is that blacks and whites in corporate America often maintain their own subcultures – including very different informal conversations in the workplace - with surprisingly little overlap at times. To be perfectly honest, as a black woman who has worked in and around corporate America for nearly 30 years, I’ve typically only been privy to the black side of the conversation, but I think in this moment where everyone is looking for opportunities to either teach, learn or grow, it’s instructive if not necessary to break down the traditional siloes and speak the unspeakable. So in this vein I’m sharing five critical “truths” that I feel many black people in corporate settings would vehemently discuss in “private” but not necessarily assert in “public.”

Here are the 5, plus a bonus.

Truth #1 - Racism doesn’t just show up in its most extreme form. There is indeed a continuum (of racist thoughts and behaviors), and you may be on it.

Truth #2 – Even if you personally haven’t offended anyone (that you know of), you may indeed be part of the problem.

Truth #3 – Every black person on your team is not your “friend.”

Truth #4 – Gender and race discrimination are not “essentially the same.”

Truth #5 – Even though there may be one or two black faces in leadership, your organization may indeed have a rampant racial injustice problem.

Bonus Truth #6: You can absolutely be part of the solution.

As workplaces tackle racism with a renewed sense of urgency amidst the worldwide Black Lives Matter protests, it’s imperative that they approach the problem of racism as they would any other serious business problem – methodically, intensely and with a sense of urgency and conviction.