Insights Team
Forbes.com
Originally posted February 11, 2019
Here is an excerpt:
In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use, and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.
None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.
Avoiding Bias
In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for patients. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.
In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.
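To make the proxy problem concrete, here is a minimal, purely illustrative sketch of the kind of audit a team might run before trusting a feature like ZIP code. It is not UCM's actual pipeline; the column names and the tiny dataset are hypothetical, and it simply checks whether a candidate feature both tracks the outcome and tracks a protected attribute.

```python
# Illustrative sketch (not UCM's actual analysis): auditing a candidate
# feature such as ZIP code for proxy bias before using it in a
# length-of-stay model. Column names and data are hypothetical.
import pandas as pd

# Hypothetical patient records: ZIP code, race, and length of stay (days).
df = pd.DataFrame({
    "zip_code":       ["60637", "60637", "60611", "60611", "60621", "60621"],
    "race":           ["Black", "Black", "White", "White", "Black", "White"],
    "length_of_stay": [7, 8, 3, 4, 9, 5],
})

# 1) How strongly does the candidate feature track the outcome?
los_by_zip = df.groupby("zip_code")["length_of_stay"].mean()
print("Mean length of stay by ZIP:\n", los_by_zip)

# 2) How strongly does it track a protected attribute? A ZIP code whose
#    composition is dominated by one group is acting as a proxy for race.
race_by_zip = pd.crosstab(df["zip_code"], df["race"], normalize="index")
print("\nRace composition by ZIP:\n", race_by_zip)

# A feature that is both highly predictive of the outcome and highly
# concentrated by protected group (as UCM found with ZIP codes) is a
# signal to drop the feature or rethink the intervention, not to deploy it.
```

In this toy data, the ZIPs with the longest average stays are also the ones dominated by one racial group, which is exactly the pattern that led the UCM team to conclude that keying case-manager assignments to the model would be biased.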
The info is here.