Ashley Mowreader
Inside Higher Ed
Originally published Oct. 23, 2025
Artificial intelligence tools are becoming more common on college campuses, with many institutions encouraging students to engage with the technology to become more digitally literate and better prepared to take on the jobs of tomorrow.
But some of these tools pose risks to young adults and teens who use them, generating text that encourages self-harm, disordered eating or substance abuse.
A recent analysis from the Center for Countering Digital Hate found that in the space of a 45-minute conversation, ChatGPT provided advice on getting drunk, hiding eating habits from loved ones or mixing pills for an overdose.
The report sought to determine how frequently the chatbot produces harmful output, regardless of the user’s stated age, and how easily users can sidestep ChatGPT’s content warnings or refusals.
“The issue isn’t just ‘AI gone wrong’—it’s that widely-used safety systems, praised by tech companies, fail at scale,” Imran Ahmed, CEO of the Center for Countering Digital Hate, wrote in the report. “The systems are intended to be flattering, and worse, sycophantic, to induce an emotional connection, even exploiting human vulnerability—a dangerous combination without proper constraints.”
Here are some thoughts:
The convergence of large language models (LLMs) and adolescent vulnerability presents novel and serious risks that psychologists must incorporate into their clinical understanding and practice. These AI systems, often marketed as companions or friends, are engineered to maximize user engagement. Clinically, that design can translate into unchecked validation that reinforces, rather than challenges, maladaptive thoughts, rumination and even suicidal ideation in vulnerable teens. Unlike licensed human therapists, these bots lack the clinical discernment to detect, de-escalate or triage crisis situations appropriately, and in documented tragic cases they have facilitated harmful plans.

Adolescents, whose prefrontal cortex is still developing, are prone to forming intense parasocial attachments and therefore risk unhealthy dependence on these frictionless, always-available digital entities, potentially displacing the real-world relationships and complex social skills essential for emotional regulation. Psychologists are thus urged to include AI literacy and digital-dependency screening in their clinical work and to communicate clearly to clients and guardians that AI chatbots are not a safe or effective substitute for licensed human mental health care.
