Katie Palmer
STATNews.com
Originally posted 11 Sept 24
Here is an excerpt:
In the past four years, clinical medicine has been forced to reckon with the role of race in simpler iterations of these algorithms. Common calculators, used by doctors to inform care decisions, sometimes adjust their predictions depending on a patient’s race — perpetuating the false idea that race is a biological construct, not a social one.
Machine learning techniques could chart a path forward. They could allow clinical researchers to crunch reams of real-world patient records to deliver more nuanced predictions about health risks, obviating the need to rely on race as a crude — and sometimes harmful — proxy. But what happens, Gallifant asked his table of students, if that real-world data is tainted, unreliable? What happens to patients if researchers train their high-powered algorithms on data from biased tools like the pulse oximeter?
Over the weekend, Celi’s team of volunteer clinicians and data scientists explained, they’d go hunting for that embedded bias in a massive open-source clinical dataset, the first step to make sure it doesn’t influence clinical algorithms that impact patient care. The pulse oximeter continued to make the rounds to a student named Ady Suy — who, some day, wants to care for people whose concerns might be ignored, as a nurse or a pediatrician. “I’ve known people that didn’t get the care that they needed,” she said. “And I just really want to change that.”
At Brown and in events like this around the world, Celi and his team have been priming medicine’s next cohort of researchers and clinicians to cross-examine the data they intend to use. As scientists and regulators sound alarm bells about the risks of novel artificial intelligence, Celi believes the most alarming thing about AI isn’t its newness: It’s that it repeats an age-old mistake in medicine, continuing to use flawed, incomplete data to make decisions about patients.
“The data that we use to build AI reflects everything about the systems that we would like to disrupt,” said Celi: “Both the good and the bad.” And without action, AI stands to cement bias into the health care system at disquieting speed and scale.
Here are some thoughts:
At a recent event at Brown University, physician and data scientist Leo Celi led a workshop aimed at educating high school students and medical trainees about the biases present in medical data, particularly the pulse oximeter, which can overestimate blood oxygen levels in patients with darker skin tones. Celi emphasized the importance of addressing these biases as machine learning algorithms increasingly influence patient care decisions. The workshop involved hands-on activities in which participants combed a large open-source clinical dataset for embedded biases that could distort algorithmic predictions. Celi and his team stressed that future researchers must critically examine the data they use, because flawed data can perpetuate existing inequities in health care. The event underscored the urgent need for diverse perspectives in AI development to ensure algorithms are fair and equitable, and for better data collection methods that represent marginalized groups.
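To make that bias-hunting step concrete: one check that has become standard in this line of work is to pair pulse-oximeter readings (SpO2) with arterial blood gas measurements (SaO2) and ask how often the oximeter says a patient looks fine while the blood gas shows dangerously low oxygen, so-called hidden hypoxemia, broken out by race. The sketch below is only an illustration under assumed column names and thresholds; it is not the workshop's actual code or dataset schema.

```python
# Illustrative bias audit: does "hidden hypoxemia" (oximeter reads >= 92% while
# arterial SaO2 is < 88%) occur more often in some racial groups than others?
# The CSV file and the column names ("race", "spo2", "sao2") are assumptions.
import pandas as pd

df = pd.read_csv("paired_spo2_sao2.csv")  # one row per paired SpO2/SaO2 measurement

df["hidden_hypoxemia"] = (df["spo2"] >= 92) & (df["sao2"] < 88)

# If these rates differ systematically by race, any model trained on SpO2
# inherits the oximeter's blind spot.
print(df.groupby("race")["hidden_hypoxemia"].mean().sort_values(ascending=False))
```

A gap like this is exactly what Celi's team is asking students to look for before the data ever reaches a clinical algorithm.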
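The excerpt's point about race-adjusted calculators can also be made concrete. One widely cited example, not named in the article, is the 2009 CKD-EPI equation for estimating kidney function, which multiplied its result by a fixed coefficient when a patient was recorded as Black; a race-free version replaced it in 2021. The sketch below is my own illustration of that structure, not code from the article or the workshop.

```python
# Illustrative only: the 2009 CKD-EPI creatinine equation, which included a
# race coefficient. The variable names and this implementation are mine.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) under the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race "correction": same labs, ~16% higher estimated kidney function
    return egfr

# Identical labs, different race field, different estimate; the higher number can
# delay nephrology referral or transplant listing for Black patients.
print(egfr_ckd_epi_2009(1.2, 55, female=False, black=False))
print(egfr_ckd_epi_2009(1.2, 55, female=False, black=True))
```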