Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, November 2, 2024

Medical AI Caught Telling Dangerous Lie About Patient's Medical Record

Victor Tangermann
Futurism.com
Originally posted 28 Sept 24

Even OpenAI's latest AI model is capable of making idiotic mistakes: after billions of dollars in development, it still can't reliably tell how many times the letter "r" appears in the word "strawberry."

And while "hallucinations" — a conveniently anthropomorphizing word used by AI companies to denote bullshit dreamed up by their AI chatbots — aren't a huge deal when, say, a student gets caught with wrong answers in their assignment, the stakes are a lot higher when it comes to medical advice.

A communications platform called MyChart handles hundreds of thousands of messages exchanged between doctors and patients each day, and the company recently added a new AI-powered feature that automatically drafts replies to patients' questions on behalf of doctors and their assistants.

As the New York Times reports, roughly 15,000 doctors are already making use of the feature, despite the possibility of the AI introducing potentially dangerous errors.

Case in point: UNC Health family medicine doctor Vinay Reddy told the NYT that an AI-generated draft message reassured one of his patients that she had received a hepatitis B vaccine, even though the system never had access to her vaccination records.

Worse yet, the new MyChart tool isn't required to divulge that a given response was written by an AI, which could make it nearly impossible for patients to realize that they were given medical advice by an algorithm.


Here are some thoughts:

The integration of artificial intelligence (AI) into medical communication has raised significant concerns about patient safety and trust. Despite billions of dollars invested in AI development, even the most advanced models, such as OpenAI's GPT-4, can make critical errors. A notable example is MyChart, a communications platform through which hundreds of thousands of messages are exchanged between doctors and patients each day. MyChart's AI-powered feature automatically drafts replies to patients' questions on behalf of doctors and assistants, and approximately 15,000 doctors are already using it.

However, this technology poses significant risks. The AI tool can introduce potentially dangerous errors, such as misinformation about a patient's vaccinations or medical history. In one case, a patient was incorrectly reassured that she had received a hepatitis B vaccine, even though the AI had no access to her vaccination records. Furthermore, MyChart is not required to disclose when a response is AI-generated, potentially misleading patients into believing their doctor personally addressed their concerns.

Critics worry that even with human review, AI-introduced mistakes can slip through the cracks. Research supports these concerns: one study found "hallucinations" in seven of 116 AI-generated draft messages, and another found that GPT-4 repeatedly made errors when responding to patient messages. The absence of federal rules requiring AI-generated messages to be labeled as such exacerbates these concerns, undermining transparency and patient trust.