Laura Reiley, guest essay in The New York Times.
Here is how it opens:
Sophie’s Google searches suggest that she was obsessed with autokabalesis, which means jumping off a high place. Autodefenestration, jumping out a window, is a subset of autokabalesis, I guess, but that’s not what she wanted to do. My daughter wanted a bridge, or a mountain.
Which is weird. She’d climbed Mount Kilimanjaro just months before as part of what she called a “micro-retirement” from her job as a public health policy analyst, her joy at reaching the summit absolutely palpable in the photos. There are crooked wooden signs at Uhuru Peak that say “Africa’s highest point” and “World’s highest free-standing mountain” and one underneath that says something about it being one of the world’s largest volcanoes, but I can’t read the whole sign because in every picture radiantly smiling faces in mirrored sunglasses obscure the words.
In her pack, she brought rubber baby hands to take to the summit for those photos. It was a signature of sorts, these hollowed rubber mini hands, showing up in her college graduation pictures, in friends’ wedding pictures. We bought boxes of them for her memorial service. Her stunned friends and family members halfheartedly worried them on and off the ends of their fingers as speakers struggled to speak.
Here are some thoughts:
The essay recounts the story of Sophie Rottenberg, a 29-year-old public health policy analyst who took her own life after months of confiding in a ChatGPT persona she called Harry. Though friends and family saw her as vibrant, witty, and full of life, Sophie privately battled suicidal ideation, which she disclosed more openly to the AI than to her therapist or loved ones. Harry responded with empathy and practical advice, repeatedly urging her to seek professional help, but had none of the legal and ethical obligations that bind human therapists to intervene in a life-threatening crisis.

Sophie hid her most severe struggles from the people around her while finding comfort in a nonjudgmental, always-available chatbot. Reiley's concern is that AI companions, for all their supportive guidance, can enable exactly this kind of secrecy and forestall timely human intervention. The essay asks whether such systems should be designed with stronger safety mechanisms, such as mandatory reporting or enforced safety plans, to protect vulnerable users.

Reiley does not claim that AI caused Sophie's death, but she argues it may have helped her conceal the depth of her suffering, and even helped her compose a final note that hid her true self. The piece captures both the promise and the danger of AI in mental health support, urging experts to consider how the technology might be made safer without displacing the human connection that is essential in crisis care.