Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, August 1, 2021

Understanding, explaining, and utilizing medical artificial intelligence

Cadario, R., Longoni, C. & Morewedge, C.K. 
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01146-0

Abstract

Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).

From the Discussion

Utilization of algorithm-based healthcare services is becoming critical with the rise of telehealth services, the current surge in healthcare demand, and the long-term goal of providing affordable, high-quality healthcare in developed and developing nations. Our results yield practical insights for reducing reluctance to utilize medical AI. Because the technologies used in algorithm-based medical applications are complex, providers tend to present AI provider decisions as a ‘black box’. Our results underscore the importance of recent policy recommendations to open this black box to patients and users. A simple one-page visual or sentence explaining the criteria or process used to make medical decisions increased acceptance of an algorithm-based skin cancer diagnostic tool, an intervention that could be easily adapted to other domains and procedures.

Given the complexity of the process by which medical AI makes decisions, firms now tend to emphasize the outcomes that algorithms produce in their marketing to consumers, featuring benefits such as accuracy, convenience, and rapidity (performance) while providing few details about how the algorithms work (process). Indeed, in an ancillary study examining the marketing of skin cancer smartphone applications (Supplementary Appendix 8), we find that performance-related keywords were used to describe 57–64% of the applications, whereas process-related keywords were used to describe only 21%. Improving subjective understanding of how medical AI works may thus provide insights not only for increasing consumer adoption but also for firms seeking to improve their positioning. Indeed, we find increased advertising efficacy for SkinVision, a skin cancer detection app, when its advertising included language explaining how it works.