Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science, 25 Oct 2019: Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score indicating they do not need to be enrolled in a "high-risk management program," the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs, on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, its outputs will continue to be racially biased by design, even though the tool does not explicitly take race into account. For this reason, Obermeyer et al. engage the literature on "problem formulation," which illustrates that depending on how one defines the problem to be solved, whether to lower health care costs or to increase access to care, the outcomes will vary considerably.
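To make the problem-formulation point concrete, here is a minimal sketch in Python using entirely synthetic data. It is not the Obermeyer et al. analysis or the vendor's algorithm; the patient generator, the 0.7 "access gap" factor, and the top-decile enrollment cutoff are all hypothetical choices for illustration. The only point it demonstrates is that ranking the same patients by realized cost versus by illness burden produces very different enrollment pools, without the score ever seeing race.

# Hypothetical illustration only -- synthetic data, not the study's code or data.
import random

random.seed(0)

def make_patient(group):
    # Illness burden is drawn identically for both groups, but realized cost
    # is suppressed for Black patients (an assumed access-to-care gap).
    burden = random.gauss(3.0, 1.5)               # active chronic conditions (need)
    access = 0.7 if group == "Black" else 1.0     # illustrative assumption
    cost = max(0.0, burden * access + random.gauss(0.0, 0.5))
    return {"group": group, "burden": burden, "cost": cost}

patients = [make_patient(g) for g in ("Black", "White") for _ in range(5000)]

def enrolled_by(label, cutoff_quantile=0.9):
    # Enroll roughly the top decile of patients when ranked by the given label.
    scores = sorted(p[label] for p in patients)
    cutoff = scores[int(cutoff_quantile * len(scores))]
    return [p for p in patients if p[label] >= cutoff]

for label in ("cost", "burden"):
    chosen = enrolled_by(label)
    black_share = sum(p["group"] == "Black" for p in chosen) / len(chosen)
    print(f"ranking by {label!r}: {black_share:.0%} of enrollees are Black")

In this toy setup, ranking by cost enrolls far fewer Black patients than ranking by illness burden, which is the "biased by design" mechanism the excerpt describes: when spending is the prediction target, unequal spending translates directly into unequal enrollment.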