Colasacco, C. J., & Born, H. L. (2024). JAMA Otolaryngology–Head & Neck Surgery, 150(6), 457.
Despite the many potential benefits of artificial intelligence (AI) use, examples from various fields of study have demonstrated that it is not an infallible technology. Our recent experience with AI chatbot tools is a cautionary one that should not be overlooked by medical practitioners who use AI for practice guidance. By sharing this experience, we aim to remind clinicians to use AI in conjunction with human expertise, as well as to highlight the assistance AI can provide with creative tasks.
The senior author, an otolaryngologist (H.B.; henceforth, Dr Born), was recently using the chatbot Bard (Google LLC; date of use: November 2023)1 to assist in preparing a brief presentation on the frontiers of airway care in laryngology for a continuing medical education course. After helping Dr Born gather information for an outline, the chatbot was asked to provide citations for the information it had supplied. To check the accuracy of the results, Dr Born searched Google Scholar, PubMed, and the cited journal’s website; however, she was unable to locate the first citation provided: “Telemedicine for the management of airway stenosis. Expert Rev Respir Med. 2022;16(10):1485-1493,” and its digital object identifier (DOI) produced an error message, “DOI not found.”
Because Dr Born could not find the article, she asked the chatbot to summarize its content, which the chatbot did quickly. Dr Born then asked the chatbot to provide the internet link to the article, and again the same citation was provided. Still unable to find the article online, Dr Born asked, “Does this paper actually exist?” The chatbot responded, “I apologize for the mistake. I have double-checked, and the paper ‘Telemedicine for the management of airway stenosis by Thomas et al (2022)’ does not exist. I am still under development, and I am always learning. I will try my best to avoid making similar mistakes in the future.” The chatbot’s acknowledgment of its imperfection is commendable, but the reliability of answers provided by this rapidly emerging technology remains open to question.
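The DOI lookup that exposed the fabricated reference can also be automated. What follows is a minimal sketch, assuming Python and the public Crossref REST API (api.crossref.org); the function name and the placeholder DOI are illustrative only and do not appear in the original account, which relied on manual searching of Google Scholar, PubMed, and the journal’s website.

```python
# Minimal sketch: check whether a cited DOI resolves to a real record in
# Crossref. A 200 response means Crossref has metadata for the DOI; a 404
# means no such record exists (the programmatic analogue of the manual
# "DOI not found" error described above).
import urllib.error
import urllib.parse
import urllib.request


def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False if not."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Crossref answers 404 for unknown DOIs
            return False
        raise  # other failures (rate limiting, outage) should surface


if __name__ == "__main__":
    # Hypothetical placeholder DOI; the article does not print the
    # fabricated DOI the chatbot supplied.
    print(doi_exists("10.1080/17476348.2022.0000000"))  # expected: False
```

A check like this confirms only that a DOI resolves; it does not verify that the title, authors, and journal in the citation match the resolved record, so manual inspection of the returned metadata remains necessary.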