Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, May 25, 2024

AI Chatbots Will Never Stop Hallucinating

Lauren Leffer
Scientific American
Originally published 5 April 2024

Here is an excerpt:

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many conflicts related to AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

But today’s LLMs were never designed to be purely accurate. They were created to create—to generate—says Subbarao Kambhampati, a computer science professor who researches artificial intelligence at Arizona State University. “The reality is: there’s no way to guarantee the factuality of what is generated,” he explains, adding that all computer-generated “creativity is hallucination, to some extent.”


Here is my summary:

AI chatbots such as ChatGPT and Bing's AI assistant frequently "hallucinate": they generate false or misleading information and present it as fact. This is a growing problem as more people turn to these tools for information, research, and decision-making.

Hallucinations occur because large language models are trained to predict the most likely next word or phrase, not to reason about truth or accuracy. They produce plausible-sounding responses even when the content is entirely made up.
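To make that mechanism concrete, here is a minimal, purely illustrative Python sketch. The toy probability table stands in for a trained model (real LLMs predict over tens of thousands of tokens with a neural network), but the generation loop has the same shape: pick a likely next word, append it, repeat. Nothing in the loop ever checks whether the resulting sentence is true.

```python
import random

# Toy "language model": for each context word, a distribution over next words.
# A real LLM learns probabilities like these from huge text corpora; it never
# learns a separate notion of truth.
NEXT_WORD_PROBS = {
    "<start>":  {"The": 1.0},
    "The":      {"study": 0.6, "drug": 0.4},
    "study":    {"found": 0.9, "showed": 0.1},
    "drug":     {"cures": 0.7, "treats": 0.3},
    "found":    {"that": 1.0},
    "cures":    {"insomnia": 0.5, "anxiety": 0.5},
    "treats":   {"anxiety": 1.0},
    "that":     {"patients": 1.0},
    "patients": {"improved": 0.8, "recovered": 0.2},
}

def generate(max_words: int = 8) -> str:
    """Sample a fluent-sounding sentence one word at a time."""
    context, words = "<start>", []
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(context)
        if not dist:
            break
        # The next word is chosen purely by how probable it is --
        # plausibility, not factual accuracy, drives the choice.
        context = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(context)
    return " ".join(words)

print(generate())  # e.g. "The drug cures insomnia" -- fluent, confident, unverified
```

The fluent but unverified output is the point: likelihood under the training data, not factual accuracy, is the only thing being optimized.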

This issue is inherent to how current large language models work and is not easily fixable. Researchers are exploring ways to improve accuracy and reliability, for example by pairing models with external fact-checking or retrieval systems, but there will likely always be some rate of hallucination.
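The "fact-checking systems" the excerpt calls for can take many forms: retrieval against vetted documents, citation verification, or human review. The sketch below is a hypothetical illustration of the general shape only; call_chatbot and lookup_trusted_source are placeholder names, not real APIs, and would need to be replaced with an actual model call and an actual knowledge base.

```python
from dataclasses import dataclass, field

@dataclass
class CheckedAnswer:
    text: str
    supported: bool
    evidence: list[str] = field(default_factory=list)

def call_chatbot(question: str) -> str:
    """Placeholder for a real LLM API call (hypothetical)."""
    return "Drug X cures insomnia in 90% of patients."

def lookup_trusted_source(claim: str) -> list[str]:
    """Placeholder for retrieval from a vetted knowledge base (hypothetical).

    Returns passages that support the claim, or an empty list if none do.
    """
    return []  # in this toy run, nothing supports the claim

def answer_with_fact_check(question: str) -> CheckedAnswer:
    """Generate an answer, then refuse to present it as fact unless supported."""
    draft = call_chatbot(question)
    evidence = lookup_trusted_source(draft)
    if evidence:
        return CheckedAnswer(draft, supported=True, evidence=evidence)
    # No supporting evidence: pass the draft on clearly labeled as unverified.
    return CheckedAnswer(
        text="Unverified claim (no trusted source found): " + draft,
        supported=False,
    )

result = answer_with_fact_check("Does drug X cure insomnia?")
print(result.supported, "->", result.text)
```

The design point is simply that the model's draft never reaches the user unexamined, which is what "leaving no chatbot unsupervised" amounts to in practice.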

Hallucinations can have serious consequences when people rely on chatbots for sensitive information related to health, finance, or other high-stakes domains. Experts warn that these tools should not be left unsupervised where factual accuracy is critical.