Katie Palmer
STATnews.com
Originally posted 3 March 2025
More than a decade ago, Ken Mandl was on a call with a pharmaceutical company and the leader of a social network for people with diabetes. The drug maker was hoping to use the platform to encourage its members to get a certain lab test.
The test could determine a patient's need for a helpful drug. But in that moment, said Mandl, director of the computational health informatics program at Boston Children's Hospital, "I could see this focus on a biomarker as a way to increase sales of the product." To describe the phenomenon, he coined the term "biomarkup": the way commercial interests can influence the creation, adoption, and interpretation of seemingly objective measures of medical status.
These days, Mandl has been thinking about how the next generation of quantified outputs in health could be gamed: artificial intelligence tools.
"It is easy to imagine a new generation of AI-based revenue cycle management model tools that achieve higher reimbursements by nudging clinicians toward more lucrative care pathways," Mandl wrote in a recent perspective in NEJM AI. "AI-based decision support interventions are vulnerable across their entire development life cycle and could be manipulated to favor specific products or services."
Here are some thoughts:
Dr. Ken Mandl raises a critical concern about the potential for "biomarkup" in the age of artificial intelligence within healthcare. This concept, initially describing how commercial interests can manipulate seemingly objective medical measures, now extends to AI tools. Mandl warns that AI-driven systems, designed for tasks like revenue cycle management or clinical decision support, could be subtly manipulated to prioritize financial gain over patient well-being. This manipulation might involve nudging clinicians toward more lucrative care pathways or tuning algorithms to generate more referrals, particularly in fee-for-service models.

The issue is exacerbated in direct-to-consumer healthcare, where profit motives may be even stronger and regulatory oversight potentially weaker. The ease with which financial outcomes can be measured, compared to patient outcomes, further compounds the problem, creating a risk of AI implementation being driven primarily by return on investment.

Mandl emphasizes the urgent need for transparency in AI decision frameworks, ethical development practices, and careful regulatory oversight to safeguard patient interests and ensure that AI serves its intended purpose of improving healthcare, not just increasing profits.