Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Debiasing.

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails to capture or respond to the intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem” — a hypothetical scenario, formulated long before AI came into being, in which a split-second decision must be made about an out-of-control streetcar heading toward disaster: leave it on the original track, where it may kill several people tied to the rails, or switch it to an alternative track where, presumably, a single person would die.

While many other analogies can be made about difficult decisions, the trolley problem is widely regarded as the paradigm case of ethical and moral decision-making. Can it be applied to AI systems as a measure of whether AI is ready for the real world, in which machines think independently and make the same justifiable ethical and moral decisions that humans would make?

Trolley problems in AI come in all shapes and sizes, and the decisions need not be so deadly — though the decisions AI renders could mean trouble for a business, an individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment” during a stay at an Airbnb-rented house in upstate New Hampshire. Despite appealing preview pictures and positive reviews, the place was poorly maintained, a dump with condemned houses adjacent to it. The author planned to give the place a one-star rating and a negative review to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, asking whether the author and his family were comfortable and had everything they needed. During the conversation, the host offered to pick up some fresh fruit from a nearby farmers market. Because she doesn’t have a car, she said, she would walk a mile to a friend’s place, and the friend would then drive her to the market. She also described her hardships over the past two years, as rentals slumped due to Covid, and explained that she was caring full time for someone who was sick.

Upon learning this, the author elected not to post the negative review. While the initial decision — to write a negative review — was based on facts, the decision not to post it was a purely subjective human one. In this case, the trolley problem was whether concern for the welfare of the elderly homeowner should supersede consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Saturday, August 6, 2022

A General Model of Cognitive Bias in Human Judgment and Systematic Review Specific to Forensic Mental Health

Neal, T. M. S., Lienert, P., Denne, E., & Singh, J. P. (2022).
Law and Human Behavior, 46(2), 99–120.
https://doi.org/10.1037/lhb0000482

Abstract

Cognitive biases can impact experts’ judgments and decisions. We offer a broad descriptive model of how bias affects human judgment. Although studies have explored the role of cognitive biases and debiasing techniques in forensic mental health, ours is the first systematic review to identify, evaluate, and summarize the findings. Hypotheses: Given the exploratory nature of this review, we did not test formal hypotheses. General research questions included the proportion of studies focusing on cognitive biases and/or debiasing, the research methods applied, the cognitive biases and debiasing strategies studied empirically in the forensic context, their effects on forensic mental health decisions, and effect sizes.

Public Significance Statement

Evidence of bias in forensic mental health emerged in ways consistent with what we know about human judgment broadly. We know less about how to debias judgments—an important frontier for future research. Better understanding how bias works and developing effective debiasing strategies tailored to the forensic mental health context hold promise for improving quality. Until then, we can use what we know now to limit bias in our work.

From the Discussion section

Is Bias a Problem for the Field of Forensic Mental Health?

Our interpretation of the judgment and decision-making literature more broadly, as well as the results from this systematic review conducted in this specific context, is that bias is an issue that deserves attention in forensic mental health—with some nuance. The overall assertion that bias is worthy of concern in forensic mental health rests both on the broader and the more specific literatures we reference here.

The broader literature is robust, revealing that well-studied biases affect human judgment and social cognition (e.g., Gilovich et al., 2002; Kahneman, 2011; see Figure 1). Although the field is robust in terms of individual studies demonstrating cognitive biases, decision science needs a credible, scientific organization of the various types of cognitive biases that have proliferated to better situate and organize the field. Even in the apparent absence of such an organizational structure, it is clear that biases influence consequential judgments not just for laypeople but for experts too, such as pilots (e.g., Walmsley & Gilbey, 2016), intelligence analysts (e.g., Reyna et al., 2014), doctors (e.g., Drew et al., 2013), and judges and lawyers (e.g., Englich et al., 2006; Girvan et al., 2015; Rachlinski et al., 2009). Given that forensic mental health experts are human, as are these other experts who demonstrate typical biases by virtue of being human, there is no reason to believe that forensic experts have automatic special protection against bias by virtue of their expertise.

Friday, September 24, 2021

Hanlon’s Razor

N. Ballantyne and P. H. Ditto
Midwest Studies in Philosophy
August 2021

Abstract

“Never attribute to malice that which is adequately explained by stupidity” – so says Hanlon’s Razor. This principle is designed to curb the human tendency toward explaining other people’s behavior by moralizing it. In this article, we ask whether Hanlon’s Razor is good or bad advice. After offering a nuanced interpretation of the principle, we critically evaluate two strategies purporting to show it is good advice. Our discussion highlights important, unsettled questions about an idea that has the potential to infuse greater humility and civility into discourse and debate.

From the Conclusion

Is Hanlon’s Razor good or bad advice? In this essay, we criticized two proposals in favor of the Razor.  One sees the benefits of the principle in terms of making us more accurate. The other sees benefits in terms of making us more charitable. Our discussion has been preliminary, but we hope careful empirical investigation can illuminate when and why the Razor is beneficial, if it is. For the time being, what else can we say about the Razor?

The Razor attempts to address the problem of detecting facts that explain opponents’ mistakes. Why do our opponents screw up? For hypermoralists, detecting stupidity in the noise of malice can be difficult: we are too eager to attribute bad motives and unsavory character to people who disagree with us. When we try to explain their mistakes, we are subject to two distinct errors:

Misidentifying-stupidity error: attributing an error to malice that is due to stupidity

Misidentifying-malice error: attributing an error to stupidity that is due to malice 

The idea driving the Razor is simple enough. People make misidentifying-stupidity errors too frequently and they should minimize those errors by risking misidentifying-malice errors. The Razor attempts to adjust our criterion for detecting the source of opponents’ mistakes. People should see stupidity more often in their opponents, even if that means they sometimes see stupidity where there is in fact malice. 
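The criterion-shifting idea can be expressed as a small signal-detection-style sketch. The Python snippet below is purely illustrative and is not from the article; the evidence scores, threshold values, and function names are hypothetical assumptions used only to show how raising the criterion for attributing malice trades one kind of error for the other.

# Illustrative sketch (not from the article): Hanlon's Razor as a shift in the
# criterion for attributing an opponent's mistake to malice. All values are hypothetical.
from typing import Optional

def classify_mistake(evidence_of_malice: float, criterion: float) -> str:
    """Attribute a mistake to malice only if the evidence clears the criterion."""
    return "malice" if evidence_of_malice >= criterion else "stupidity"

def error_type(attribution: str, true_cause: str) -> Optional[str]:
    """Name the error using the article's two categories, or None if the attribution is correct."""
    if attribution == "malice" and true_cause == "stupidity":
        return "misidentifying-stupidity error"  # saw malice where there was only stupidity
    if attribution == "stupidity" and true_cause == "malice":
        return "misidentifying-malice error"     # excused what was actually malice
    return None

# Hypothetical cases: (evidence of malice, true cause of the mistake).
cases = [(0.3, "stupidity"), (0.6, "stupidity"), (0.8, "malice")]
for criterion in (0.4, 0.9):  # hypermoralist criterion vs. Razor-adjusted criterion
    errors = [error_type(classify_mistake(evidence, criterion), cause) for evidence, cause in cases]
    print(f"criterion={criterion}: {errors}")

With the low criterion the only mistake is a misidentifying-stupidity error; with the raised criterion that error disappears but a misidentifying-malice error appears instead, which is exactly the trade the Razor recommends.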

Tuesday, May 18, 2021

Moderators of The Liking Bias in Judgments of Moral Character

Bocian, K., Baryla, W., & Wojciszke, B. (2021).
Personality and Social Psychology Bulletin.

Abstract 

Previous research found evidence for a liking bias in moral character judgments: judgments of liked people are higher than those of disliked or neutral ones. The present article sought to identify conditions that moderate this effect. In Study 1 (N = 792), the impact of the liking bias on moral character judgments was strongly attenuated when participants were educated that attitudes bias moral judgments. In Study 2 (N = 376), the influence of liking on moral character attributions was eliminated when participants were accountable for the justification of their moral judgments. Overall, these results suggest that even though liking biases moral character attributions, this bias might be reduced or eliminated when deeper information processing is required to generate judgments of others’ moral character.

Keywords: moral judgments, moral character, attitudes, liking bias, accountability

General Discussion

In this research, we sought to replicate past results demonstrating the influence of liking on moral character judgments, and we investigated conditions that could limit this influence. We demonstrated that liking elicited by similarity (Study 1) and mimicry (Study 2) biases perceptions of another person’s moral character. Thus, we corroborated previous findings by Bocian et al. (2018), who found that attitudes bias moral judgments. More importantly, we identified conditions that moderate the liking bias. Specifically, in Study 1, we found evidence that forewarning participants that liking can bias moral character judgments weakened the liking bias twofold. In Study 2, we demonstrated that the liking bias was eliminated when we made participants accountable for their moral decisions.

By systematically examining the conditions that reduce the influence of liking on moral character attributions, we built on and extended past work in the area of moral cognition and bias reduction. First, while past studies have focused on the impact of accountability on the fundamental attribution error (Tetlock, 1985), overconfidence (Tetlock & Kim, 1987), or order of information (Schadewald & Limberg, 1992), we examined the effectiveness of accountability in debiasing moral judgments. Thus, we demonstrated that biased moral judgments can be effectively corrected when people are obliged to justify their judgments to others. Second, we showed that educating people that attitudes might bias their moral judgments helped them, to some extent, to debias their moral character judgments. We thus extended past research on the effectiveness of forewarning people of biases in social judgment and decision-making (Axt et al., 2018; Hershberger et al., 1997) to biases in moral judgments.

Sunday, April 25, 2021

Training for Wisdom: The Distanced-Self-Reflection Diary Method

Grossmann, I., et al. (2021).
Psychological Science, 32(3), 381–394.
https://doi.org/10.1177/0956797620969170

Abstract

Two pre-registered longitudinal experiments (Study 1: Canadians/Study 2: Americans and Canadians; N=555) tested the utility of illeism—a practice of referring to oneself in the third person—during diary-reflection for the trainability of wisdom-related characteristics in everyday life: emotional complexity (Study 1) and wise reasoning (intellectual humility, open-mindedness about how situations could unfold, consideration of and attempts to integrate diverse viewpoints; Studies 1-2). In a month-long experiment, instruction to engage in third- (vs. first-) person diary-reflections on most significant daily experiences resulted in growth in wise reasoning and emotional complexity assessed in laboratory sessions after vs. before the intervention. Additionally, third- (vs. first-) person participants showed alignment between forecasted and month-later experienced feelings toward close others in challenging situations. Study 2 replicated the third-person self-reflections effect on wise reasoning (vs. first-person- and no-pronoun-controls) in a week-long intervention. The present research demonstrates a path to evidence-based training of wisdom-related processes.

General Discussion

Two interventions demonstrated the effectiveness of distanced self-reflection for promoting wiser reasoning about interpersonal challenges, relative to control conditions. The effect of using distanced self-reflection on wise reasoning was in part statistically accounted for by a corresponding broadening of people’s habitually narrow self-focus into a more expansive sense of self (Aron & Aron, 1997). Distanced self-reflection effects were particularly pronounced for intellectual humility and social-cognitive aspects of wise reasoning (i.e., acknowledgement of others’ perspectives, search for conflict resolution). This project provides the first evidence that wisdom-related cognitive processes can be fostered in daily life. The results suggest that distanced self-reflections in daily diaries may cultivate wiser reasoning about challenging social interactions by promoting spontaneous self-distancing (Ayduk & Kross, 2010).

Saturday, April 24, 2021

Bias Blind Spot: Structure, Measurement, and Consequences

Irene Scopelliti, et al.
Management Science, 61(10), 2468–2486.

Abstract

People exhibit a bias blind spot: they are less likely to detect bias in themselves than in others. We report the development and validation of an instrument to measure individual differences in the propensity to exhibit the bias blind spot that is unidimensional, internally consistent, has high test-retest reliability, and is discriminated from measures of intelligence, decision-making ability, and personality traits related to self-esteem, self-enhancement, and self-presentation. The scale is predictive of the extent to which people judge their abilities to be better than average for easy tasks and worse than average for difficult tasks, ignore the advice of others, and are responsive to an intervention designed to mitigate a different judgmental bias. These results suggest that the bias blind spot is a distinct metabias resulting from naïve realism rather than other forms of egocentric cognition, and has unique effects on judgment and behavior.

Conclusion

We find that bias blind spot is a latent factor in self-assessments of relative vulnerability to bias. This meta-bias affected the majority of participants in our samples but exhibited considerable variance across participants. We present a concise, reliable, and valid measure of individual differences in bias blind spot that has the ability to predict related biases in self-assessment, advice taking, and responsiveness to bias-reduction training. Given the influence of bias blind spot on consequential judgments and decisions, as well as receptivity to training, this measure may prove useful across a broad range of domains such as personnel assessment, information analysis, negotiation, consumer decision making, and education.

Thursday, March 11, 2021

Decision making can be improved through observational learning

Yoon, H., Scopelliti, I., & Morewedge, C. (2021).
Organizational Behavior and Human Decision Processes, 162, 155–188.

Abstract

Observational learning can debias judgment and decision making. One-shot observational learning-based training interventions (akin to “hot seating”) can produce reductions in cognitive biases in the laboratory (i.e., anchoring, representativeness, and social projection), and successfully teach a decision rule that increases advice taking in a weight on advice paradigm (i.e., the averaging principle). These interventions improve judgment, rule learning, and advice taking more than practice. We find observational learning-based interventions can be as effective as information-based interventions. Their effects are additive for advice taking, and for accuracy when advice is algorithmically optimized. As found in the organizational learning literature, explicit knowledge transferred through information appears to reduce the stickiness of tacit knowledge transferred through observational learning. Moreover, observational learning appears to be a unique debiasing training strategy, an addition to the four proposed by Fischhoff (1982). We also report new scales measuring individual differences in anchoring, representativeness heuristics, and social projection.
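For readers unfamiliar with the weight-on-advice paradigm mentioned in the abstract, the sketch below reconstructs the conventional weight-on-advice (WOA) measure from the advice-taking literature and the averaging principle as a decision rule. It is an assumption-laden illustration rather than code from the paper; the variable names and sample estimates are made up.

# Illustrative reconstruction of the standard weight-on-advice (WOA) measure and the
# averaging principle; not taken from the paper, and the sample numbers are invented.

def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial).
    0 means the advice was ignored, 1 means it was adopted outright,
    and 0.5 corresponds to simple averaging."""
    if advice == initial:
        raise ValueError("WOA is undefined when the advice equals the initial estimate.")
    return (final - initial) / (advice - initial)

def averaging_rule(initial: float, advice: float) -> float:
    """The averaging principle: weight one's own estimate and the advice equally."""
    return (initial + advice) / 2.0

initial_estimate, advisor_estimate = 120.0, 180.0
revised_estimate = averaging_rule(initial_estimate, advisor_estimate)  # 150.0
print(revised_estimate, weight_on_advice(initial_estimate, advisor_estimate, revised_estimate))  # 150.0 0.5

A judge who fully discounts advice ends with a WOA near 0; following the averaging rule moves WOA to 0.5, which is the kind of increased advice taking the training interventions described above aim to produce.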

Highlights

• Observational learning training interventions improved judgment and decision making.

• OL interventions reduced anchoring bias, representativeness, and social projection.

• Observational learning training interventions increased advice taking.

• Observational learning and information complementarily taught a decision rule.

• We provide new bias scales for anchoring, representativeness, and social projection.

Sunday, November 1, 2020

Believing in Overcoming Cognitive Biases

T. S. Doherty & A. E. Carroll
AMA J Ethics. 2020;22(9):E773-778. 
doi: 10.1001/amajethics.2020.773.

Abstract

Like all humans, health professionals are subject to cognitive biases that can render diagnoses and treatment decisions vulnerable to error. Learning effective debiasing strategies and cultivating awareness of confirmation, anchoring, and outcomes biases and the affect heuristic, among others, and their effects on clinical decision making should be prioritized in all stages of education.

Here is an excerpt:

The practice of reflection reinforces behaviors that reduce bias in complex situations. A 2016 systematic review of cognitive intervention studies found that guided reflection interventions were associated with the most consistent success in improving diagnostic reasoning. A guided reflection intervention involves searching for and being open to alternative diagnoses and willingness to engage in thoughtful and effortful reasoning and reflection on one’s own conclusions, all with supportive feedback or challenge from a mentor.

The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes. These strategies involve conscious consideration of alternative diagnoses other than those that come intuitively. One example involves reading radiographs in the emergency department. According to studies, a common pitfall among inexperienced clinicians in such a situation is to call off the search once a positive finding has been noticed, which often leads to other abnormalities (eg, second fractures) being overlooked. Thus, the forcing strategy in this situation would be to continue a search even after an initial fracture has been detected.

Thursday, October 20, 2016

Cognitive biases can affect moral intuitions about cognitive enhancement

Lucius Caviola, Adriano Mannino, Julian Savulescu and Nadira Faulmüller
Frontiers in Systems Neuroscience. 2014; 8: 195.
Published online 2014 Oct 15.

Abstract

Research into cognitive biases that impair human judgment has mostly been applied to the area of economic decision-making. Ethical decision-making has been comparatively neglected. Since ethical decisions often involve very high individual as well as collective stakes, analyzing how cognitive biases affect them can be expected to yield important results. In this theoretical article, we consider the ethical debate about cognitive enhancement (CE) and suggest a number of cognitive biases that are likely to affect moral intuitions and judgments about CE: status quo bias, loss aversion, risk aversion, omission bias, scope insensitivity, nature bias, and optimistic bias. We find that there are more well-documented biases that are likely to cause irrational aversion to CE than biases in the opposite direction. This suggests that common attitudes about CE are predominantly negatively biased. Within this new perspective, we hope that subsequent research will be able to elaborate this hypothesis and develop effective de-biasing techniques that can help increase the rationality of the public CE debate and thus improve our ethical decision-making.


Wednesday, July 22, 2015

Bias Blind Spot: Structure, Measurement, and Consequences

Irene Scopelliti, Carey K. Morewedge, Erin McCormick, H. Lauren Min, Sophie Lebrecht, Karim S. Kassam (2015)
Bias Blind Spot: Structure, Measurement, and Consequences. Management Science
Published online in Articles in Advance 24 Apr 2015
http://dx.doi.org/10.1287/mnsc.2014.2096

Abstract

People exhibit a bias blind spot: they are less likely to detect bias in themselves than in others. We report the development and validation of an instrument to measure individual differences in the propensity to exhibit the bias blind spot that is unidimensional, internally consistent, has high test-retest reliability, and is discriminated from measures of intelligence, decision-making ability, and personality traits related to self-esteem, self-enhancement, and self-presentation. The scale is predictive of the extent to which people judge their abilities to be better than average for easy tasks and worse than average for difficult tasks, ignore the advice of others, and are responsive to an intervention designed to mitigate a different judgmental bias. These results suggest that the bias blind spot is a distinct metabias resulting from naïve realism rather than other forms of egocentric cognition, and has unique effects on judgment and behavior.
