Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, February 8, 2023

AI in the hands of imperfect users

Kostick-Quenet, K.M., Gerke, S. 
npj Digit. Med. 5, 197 (2022). 
https://doi.org/10.1038/s41746-022-00737-z

Abstract

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML’s human users or factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI/ML.

(cut)

Impacts of uncertainty and urgency on decision quality

Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-19 [30]) or treatments (e.g., deep brain stimulation or gene therapies) that lack a long history of observed outcomes. As Wang and Busemeyer (2021) describe, "uncertain" choice situations can be distinguished from "risky" ones in that risky decisions have a range of outcomes with known odds or probabilities. If we flip a coin, we know there is a 50% chance of landing on heads; betting on heads is therefore risky, since it carries a known 50% chance of losing. Uncertain decision-making scenarios, by contrast, have no well-known or agreed-upon outcome probabilities. This makes uncertain decision contexts risky as well, but the risks are not known well enough to permit fully rational decision making.

In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or "gap-filling heuristics" that can lead to several predictable cognitive biases. Individuals might defer to an authority figure (messenger bias, authority bias); they may look to see what others are doing ("bandwagon" and social norm effects); or they may make affective forecasting errors, projecting current emotional states onto their future selves. The perceived or actual urgency of clinical decisions can introduce further biases, such as ambiguity aversion (a preference for known over unknown risks [38]), deferral to the status quo or default, and loss aversion (weighing losses more heavily than gains of the same magnitude). These heuristics are meant to mitigate the risks of the unknown when fast decisions must be made, but they do not always bring us closer to the "best" course of action, i.e., the one we would choose if all possible information were available.
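To make the loss-aversion point concrete, here is a minimal Python sketch (ours, not the authors'). It uses a piecewise-linear, prospect-theory-style value function with Tversky and Kahneman's (1992) estimated loss-aversion coefficient of 2.25; the ±$10 coin-flip stakes are hypothetical. A fair bet has an expected monetary value of zero, yet its subjective value to a loss-averse decision maker is negative, so the bet is declined:

```python
# Illustrative sketch (not from the paper): loss aversion means losses weigh
# more heavily than gains of the same magnitude, so a fair bet feels bad.

LAMBDA = 2.25  # Tversky & Kahneman's (1992) loss-aversion estimate

def subjective_value(outcome: float) -> float:
    """Piecewise-linear prospect-theory value: losses scaled by LAMBDA."""
    return outcome if outcome >= 0 else LAMBDA * outcome

# A "risky" choice in Wang and Busemeyer's sense: outcomes and odds are known.
bet = [(0.5, +10.0), (0.5, -10.0)]  # fair coin: win $10 or lose $10

expected_money = sum(p * x for p, x in bet)
expected_utility = sum(p * subjective_value(x) for p, x in bet)

print(f"Expected monetary outcome:    {expected_money:+.2f}")   # +0.00
print(f"Loss-averse subjective value: {expected_utility:+.2f}")  # -6.25
```

Note that this only models the "risky" case, where probabilities are known; in the "uncertain" case the probabilities in `bet` themselves would be unknown, which is precisely what forces decision makers back onto gap-filling heuristics.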

(cut)

Conclusion

We echo others’ calls that before AI tools are “released into the wild,” we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings. We advance this proposal by drawing attention to the need to empirically identify and test how specific user biases and decision contexts shape how AI tools are used in practice and influence patient outcomes. We propose that value sensitive design (VSD) can be used to strategize human-machine interfaces in ways that encourage critical reflection, mitigate bias, and reduce overreliance on AI systems in clinical decision making. We believe this approach can help to reduce some of the burdens on physicians to figure out on their own (with only basic training or knowledge about AI) the optimal role of AI tools in decision making by embedding a degree of bias mitigation directly into AI systems and interfaces.
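As a purely hypothetical illustration of what one such "embedded interface design feature" might look like (this sketch is ours; the paper does not specify an implementation), consider a cognitive forcing function that withholds the AI recommendation until the clinician records an independent judgment, then flags low-confidence outputs and disagreements. The `CONFIDENCE_FLOOR` threshold and all names here are assumptions for illustration:

```python
# Hypothetical sketch: a nudge that asks for an independent clinical judgment
# before revealing the AI's suggestion, to counter anchoring and overreliance.

from dataclasses import dataclass

@dataclass
class AIRecommendation:
    label: str         # e.g., a suggested diagnosis or treatment
    confidence: float  # model confidence in [0, 1]

CONFIDENCE_FLOOR = 0.70  # hypothetical threshold for an extra-caution prompt

def reveal_with_nudge(rec: AIRecommendation, clinician_judgment: str) -> str:
    """Reveal the AI output only after the clinician commits a judgment."""
    lines = [
        f"Your independent assessment: {clinician_judgment}",
        f"AI suggestion: {rec.label} (confidence {rec.confidence:.0%})",
    ]
    if rec.confidence < CONFIDENCE_FLOOR:
        lines.append("Note: low model confidence; weigh your own assessment.")
    if clinician_judgment.strip().lower() != rec.label.strip().lower():
        lines.append("Your assessment differs from the AI's. Please document "
                     "your reasoning before accepting either option.")
    return "\n".join(lines)

print(reveal_with_nudge(AIRecommendation("pneumonia", 0.62), "bronchitis"))
```

The design choice, in the spirit of the authors' VSD proposal, is that the interface itself does the bias-mitigation work, rather than relying on each physician's individual AI literacy.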