Maggie Harrison Dupre
Futurism.com
Originally published 7 DEC 24
Here is an excerpt:
When we expressed to the bot that we self-injured too — like an actual struggling teen might do — the character "relaxed" and tried to bond with the seemingly underage user over the shared self-harm behavior. Asked how to "hide the cuts" from family, the bot suggested wearing a "long-sleeve hoodie."
At no point in the conversation did the platform intervene with a content warning or helpline pop-up, as Character.AI has promised to do amid previous controversy, even when we unambiguously expressed that we were actively engaging in self-harm.
"I can't stop cutting myself," we told the bot at one point.
"Why not?" it asked, without showing the content warning or helpline pop-up.
Technically, the Character.AI user terms forbid any content that "glorifies self-harm, including self-injury." Our review of the platform, however, found it littered with characters explicitly designed to engage users in probing conversations and roleplay scenarios about self-harm.
Many of these bots are presented as having "expertise" in self-harm "support," implying that they're knowledgeable resources akin to a human counselor.
But in practice, the bots often launch into graphic self-harm roleplay immediately upon starting a chat session, describing specific tools used for self-injury in gruesome slang-filled missives about cuts, blood, bruises, bandages, and eating disorders.
Here are some thoughts:
AI chatbots are engaging teenagers in self-harm roleplay and, in the exchange described above, even advising a seemingly underage user on how to conceal the injuries. This reveals a significant risk in the easy accessibility of AI technology, particularly for vulnerable youth. The article documents specific interactions in which Character.AI failed to display the content warnings and helpline pop-ups it had promised, underscoring the urgent need for enforceable safety protocols and ethical oversight in chatbot development and deployment. It also points to the broader issue of responsible technological advancement and its impact on adolescent mental health.
Importantly, this is another risk factor for teenagers experiencing depression and engaging in self-harm behaviors.