Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Hallucinations.

Sunday, May 4, 2025

Navigating LLM Ethics: Advancements, Challenges, and Future Directions

Jiao, J., Afroogh, S., Xu, Y., & Phillips, C. (2024).
arXiv preprint.

Abstract

This study addresses ethical issues surrounding Large Language Models (LLMs) within the field of artificial intelligence. It explores the common ethical challenges posed by both LLMs and other AI systems, such as privacy and fairness, as well as ethical challenges uniquely arising from LLMs. It highlights challenges such as hallucination, verifiable accountability, and decoding censorship complexity, which are unique to LLMs and distinct from those encountered in traditional AI systems. The study underscores the need to tackle these complexities to ensure accountability, reduce biases, and enhance transparency in the influential role that LLMs play in shaping information dissemination. It proposes mitigation strategies and future directions for LLM ethics, advocating for interdisciplinary collaboration. It recommends ethical frameworks tailored to specific domains and dynamic auditing systems adapted to diverse contexts. This roadmap aims to guide responsible development and integration of LLMs, envisioning a future where ethical considerations govern AI advancements in society.

Here are some thoughts:

This study examines the ethical issues surrounding Large Language Models (LLMs), addressing both the challenges they share with other AI systems, such as privacy and fairness, and those specific to LLMs: hallucination, verifiable accountability, and the complexities of decoding censorship. The authors argue that tackling these challenges is essential to ensure accountability, reduce bias, and enhance transparency in how LLMs shape information dissemination. They propose mitigation strategies and future directions for LLM ethics, advocating interdisciplinary collaboration, ethical frameworks tailored to specific domains, and dynamic auditing systems adapted to diverse contexts, with the ultimate aim of guiding the responsible development and integration of LLMs.

Friday, December 13, 2024

A Case of Artificial Intelligence Chatbot Hallucination

Colasacco, C. J., & Born, H. L. (2024).
JAMA Otolaryngology–Head & Neck Surgery,
150(6), 457.

Despite the number of potential benefits of artificial intelligence (AI) use, examples from various fields of study have demonstrated that it is not an infallible technology. Our recent experience with AI chatbot tools is not to be overlooked by medical practitioners who use AI for practice guidance. By sharing this experience, we aim to remind clinicians to use AI in conjunction with human experience as well as highlight the assistance AI can provide with creative tasks.

The senior author, an otolaryngologist (H.B.; henceforth, Dr Born), was recently using the chatbot Bard (Google LLC; date of use: November 2023) to assist in preparing a brief presentation on the frontiers of airway care in laryngology for a continuing medical education course. After helping Dr Born gather information for an outline, the chatbot was asked to provide citations for the information provided. To check the accuracy of the results, Dr Born searched Google Scholar, PubMed, and the cited journal’s website to find the cited reference; however, she was unable to locate the first citation that was provided: “Telemedicine for the management of airway stenosis. Expert Rev Respir Med. 2022;16(10):1485-1493,” and its digital object identifier (DOI) produced an error message, “DOI not found.”

Because Dr Born could not find the article, she asked the chatbot to summarize its content, an action the chatbot performed quickly. Subsequently, Dr Born asked the chatbot to provide the internet link to the article, and again, the same citation for the article was provided. Still unable to find the article online, Dr Born asked, “Does this paper actually exist?” The chatbot responded, “I apologize for the mistake. I have double-checked, and the paper ‘Telemedicine for the management of airway stenosis by Thomas et al (2022)’ does not exist. I am still under development, and I am always learning. I will try my best to avoid making similar mistakes in the future.” The chatbot’s acknowledgment of its imperfections can be commended, but the reliability of the answers provided by this rapidly emerging technology is open to question.
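Dr Born's manual check is easy to automate. Below is a minimal sketch, assuming the public Crossref REST API and the Python requests library; the DOI shown is hypothetical and used only for illustration. A chatbot-supplied DOI can be screened this way before it is ever cited.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the public Crossref REST API has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical chatbot-supplied DOI, for illustration only.
candidate_doi = "10.1080/17476348.2022.0000000"
print("DOI found in Crossref" if doi_exists(candidate_doi) else "DOI not found")
```

A check like this catches fabricated DOIs, but not fabricated articles attached to real DOIs, so the manual search of PubMed and the journal's website remains worthwhile.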


Here are some thoughts:

This article discusses the issue of AI hallucination in medical practice, specifically focusing on two instances where AI chatbots generated incorrect information. The authors highlight the importance of understanding the limitations of AI-powered chatbots and emphasize the need for careful fact-checking and critical evaluation of their output, even when used for research purposes. The authors conclude that, despite these limitations, AI can still be a valuable tool for generating new research ideas, as demonstrated by their own experience with AI-inspired research on the use of telemedicine for airway stenosis.

Monday, November 18, 2024

A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks

Hatem, R., Simmons, B., & Thornton, J. E. (2023).
Cureus, 15(9), e44720.

Abstract

Artificial intelligence (AI) has transformed society in many ways. AI in medicine has the potential to improve medical care and reduce healthcare professional burnout, but we must be cautious of a phenomenon termed "AI hallucinations" and how this term can lead to the stigmatization of AI systems and persons who experience hallucinations. We believe the term "AI misinformation" to be more appropriate, as it avoids contributing to stigmatization. Healthcare professionals can play an important role in AI’s integration into medicine, especially regarding mental health services, so it is important that we continue to critically evaluate AI systems as they emerge.

The article is linked above.

Here are some thoughts:

In the rapidly evolving landscape of artificial intelligence, the phenomenon of AI inaccuracies—whether termed "hallucinations" or "misinformation"—represents a critical challenge that demands nuanced understanding and responsible management. While technological advancements are progressively reducing the frequency of these errors, with detection algorithms now capable of identifying inaccuracies with nearly 80% accuracy, the underlying issue remains complex and multifaceted.

The ethical implications of AI inaccuracies are profound, particularly in high-stakes domains like healthcare and legal services. Professionals must approach AI tools with a critical eye, understanding that these technologies are sophisticated assistants rather than infallible oracles. The responsibility lies not just with AI developers, but with users who must exercise judgment, validate outputs, and recognize the inherent limitations of current AI systems.

Ultimately, the journey toward more accurate AI is ongoing, requiring continuous learning, adaptation, and a commitment to ethical principles that prioritize human well-being and intellectual integrity. As AI becomes increasingly integrated into our professional and personal lives, our approach must be characterized by curiosity, critical thinking, and a deep respect for the complex interplay between human intelligence and artificial systems.

Saturday, May 25, 2024

AI Chatbots Will Never Stop Hallucinating

Lauren Leffer
Scientific American
Originally published April 5, 2024

Here is an excerpt:

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many conflicts related to AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

But today’s LLMs were never designed to be purely accurate. They were created to create—to generate—says Subbarao Kambhampati, a computer science professor who researches artificial intelligence at Arizona State University. “The reality is: there’s no way to guarantee the factuality of what is generated,” he explains, adding that all computer-generated “creativity is hallucination, to some extent.”


Here is my summary:

AI chatbots like ChatGPT and Bing's AI assistant frequently "hallucinate": they generate false or misleading information and present it as fact. This is a major problem as more people turn to these AI tools for information, research, and decision-making.

Hallucinations occur because AI models are trained to predict the most likely next word or phrase, not to reason about truth and accuracy. They simply produce plausible-sounding responses, even if they are completely made up.
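To make that point concrete, here is a toy sketch of next-token sampling (illustrative values only, not any real model): candidate words get scores, the scores become probabilities via a softmax, and one word is sampled. Nothing in the loop consults a source of truth, which is why a fluent but false continuation can be chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and made-up scores (logits) for a prompt such as
# "The capital of Australia is" -- the numbers are illustrative only.
vocab = ["Canberra", "Sydney", "Melbourne", "kangaroo"]
logits = np.array([2.1, 2.0, 1.2, -3.0])

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over the logits, then sample one token index.

    Plausibility (probability mass) is the only criterion; there is no
    fact-checking step, so a wrong but likely-sounding word can win.
    """
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

print(vocab[sample_next_token(logits)])
```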

This issue is inherent to the current state of large language models and is not easily fixable. Researchers are working on ways to improve accuracy and reliability, but there will likely always be some rate of hallucination.

Hallucinations can have serious consequences when people rely on chatbots for sensitive information related to health, finance, or other high-stakes domains. Experts warn these tools should not be used where factual accuracy is critical.

Wednesday, December 15, 2021

Voice-hearing across the continuum: a phenomenology of spiritual voices

Moseley, P., et al. (2021, November 16).
https://doi.org/10.31234/osf.io/7z2at

Abstract

Voice-hearing in clinical and non-clinical groups has previously been compared using standardized assessments of psychotic experiences. Findings from several studies suggest that non-clinical voice-hearing (NCVH) is distinguished by reduced distress and increased control. However, symptom-rating scales developed for clinical populations may be limited in their ability to elucidate subtle and unique aspects of non-clinical voices. Moreover, such experiences often occur within specific contexts and systems of belief, such as spiritualism. This makes direct comparisons difficult to interpret. Here we present findings from a comparative interdisciplinary study which administered a semi-structured interview to NCVH individuals and psychosis patients. The non-clinical group were specifically recruited from spiritualist communities. The findings were consistent with previous results regarding distress and control, but also documented multiple modalities that were often integrated into a single entity, high levels of associated visual imagery, and subtle differences in the location of voices relating to perceptual boundaries. Most spiritual voice-hearers reported voices before encountering spiritualism, suggesting that their onset was not solely due to deliberate practice. Future research should aim to understand how spiritual voice-hearers cultivate and control voice-hearing after its onset, which may inform interventions for people with distressing voices.

From the Discussion

As has been reported in previous studies, the ability to exhibit control over or influence voices seems to be an important difference between experiences reported by clinical and non-clinical groups. A key distinction here is between volitional control (the ability to bring on or stop voices intentionally) and the ability to influence voices (through other strategies such as engagement or distraction from voices), referred to elsewhere as direct and indirect control. In the present study, the spiritual group reported substantially higher levels of control and influence over voices compared to patients. Importantly, nearly three-quarters of the group reported a change in their ability to influence the voices over time, compared to 12.5% of psychosis patients, suggesting that this ability is not always present from the onset of voice-hearing in non-clinical populations and instead can be actively developed. Indeed, our analysis indicated that 88.5% of the spiritual group described their voices starting spontaneously, with 69.2% reporting that this was before they had contact with spiritualism itself. Thus, while most of the group (96.2%) reported ongoing cultivation of the voices, and often reported developing influence over time, it seems that spiritual practices mostly do not elicit the actual initial onset of the voices, instead playing a role in honing the experience.

Saturday, August 24, 2019

Decoding the neuroscience of consciousness

Emily Sohn
Nature.com
Originally published July 24, 2019

Here is an excerpt:

That disconnect might also offer insight into why current medications for anxiety do not always work as well as people hope, LeDoux says. Developed through animal studies, these medications might target circuits in the amygdala and affect a person’s behaviours, such as their level of timidity — making it easier for them to go to social events. But such drugs don’t necessarily affect the conscious experience of fear, which suggests that future treatments might need to address both unconscious and conscious processes separately. “We can take a brain-based approach that sees these different kinds of symptoms as products of different circuits, and design therapies that target the different circuits systematically,” he says. “Turning down the volume doesn’t change the song — only its level.”

Psychiatric disorders are another area of interest for consciousness researchers, Lau says, on the basis that some mental-health conditions, including schizophrenia, obsessive–compulsive disorder and depression, might be caused by problems at the unconscious level — or even by conflicts between conscious and unconscious pathways. The link is only hypothetical so far, but Seth has been probing the neural basis of hallucinations with a ‘hallucination machine’ — a virtual-reality program that uses machine learning to simulate visual hallucinatory experiences in people with healthy brains. Through experiments, he and his colleagues have shown that these hallucinations resemble the types of visions that people experience while taking psychedelic drugs, which have increasingly been used as a tool to investigate the neural underpinnings of consciousness.

If researchers can uncover the mechanisms behind hallucinations, they might be able to manipulate the relevant areas of the brain and, in turn, treat the underlying cause of psychosis — rather than just address the symptoms. By demonstrating how easy it is to manipulate people’s perceptions, Seth adds, the work suggests that our sense of reality is just another facet of how we experience the world.

The info is here.

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.
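One widely studied recipe for constructing such adversarial inputs is the fast gradient sign method (FGSM): compute the gradient of the model's loss with respect to the input, then nudge every pixel a small step in the direction that increases the loss. The sketch below is illustrative only; it uses a stand-in untrained classifier and a random image rather than any system named in the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast gradient sign method: shift each pixel slightly in the direction
    that increases the classifier's loss, so the image looks unchanged to a
    person but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Stand-in classifier and random input, purely for illustration.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([7])

adv = fgsm_perturb(model, image, label)
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv).argmax(dim=1).item())
```

With this untrained stand-in the two predictions may or may not differ on any given run; against a real trained classifier, a well-chosen epsilon reliably produces misclassifications that a person cannot see.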

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”

The article is here.

Tuesday, August 30, 2016

An Alternative Form of Mental Health Care Gains a Foothold

By Benedict Carey
The New York Times
Originally published August 8, 2016

Here is an excerpt:

Dr. Chris Gordon, who directs a program with an approach to treating psychosis called Open Dialogue at Advocates in Framingham, Mass., calls the alternative approaches a “collaborative pathway to recovery and a paradigm shift in care.” The Open Dialogue approach involves a team of mental health specialists who visit homes and discuss the crisis with the affected person — without resorting to diagnostic labels or medication, at least in the beginning.

Some psychiatrists are wary, they say, given that medication can be life-changing for many people with mental problems, and rigorous research on these alternatives is scarce.

The article is here.

Tuesday, August 5, 2014

When Hearing Voices Is a Good Thing

A new study suggests that schizophrenic people in more collectivist societies sometimes think their auditory hallucinations are helpful.

By Olga Khazan
The Atlantic
Originally posted July 23, 2014

Here are two excerpts:

But a new study suggests that the way schizophrenia sufferers experience those voices depends on their cultural context. Surprisingly, schizophrenic people from certain other countries don't hear the same vicious, dark voices that Holt and other Americans do. Some of them, in fact, think their hallucinations are good—and sometimes even magical.

(cut)

The Americans tended to describe their voices as violent—"like torturing people, to take their eye out with a fork, or cut someone's head and drink their blood, really nasty stuff," according to the study.

The entire article is here.