Sharon Begley
statnews.com
Originally posted 17 June 20
Here is an excerpt:
All 13 of the algorithms Jones and his colleagues examined offered rationales for including race in a way that, presumably unintentionally, made Black and, in some cases, Latinx patients less likely to receive appropriate care. But when you trace those rationales back to their origins, Jones said, “you find outdated science or biased data,” such as simplistically concluding that poor outcomes for Black patients are due to race.
Typically, developers based their algorithms on studies showing a correlation between race and some medical outcome, assuming race explained, or was even the cause of, a poorer outcome (from a vaginal birth after a cesarean, for example). They generally did not examine whether factors that typically accompany race in the U.S., such as access to primary care, socioeconomic status, or discrimination, might be the true drivers of the correlation.
“Modern tools of epidemiology and statistics could sort that out,” Jones said, “and show that much of what passes for race is actually about class and poverty.”
Including race in a clinical algorithm can sometimes be appropriate, Powers cautioned: “It could lead to better patient care or even be a tool for addressing inequities.” But it might also exacerbate inequities. Figuring out the algorithms’ consequences “requires taking a close look at how the algorithm was trained, the data used to make predictions, the accuracy of those predictions, and how the algorithm is used in practice,” Powers said. “Unfortunately, we don’t have these answers for many of the algorithms.”