Resource Pages

Friday, August 1, 2025

You sound like ChatGPT

Sara Parker
The Verge
Originally posted June 20, 2025

Here is an excerpt:

AI shows up most obviously in functions like smart replies, autocorrect, and spellcheck. Research out of Cornell looks at our use of smart replies in chats, finding that use of smart replies increases overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated their partner as less collaborative and more demanding. Crucially, it wasn’t actual AI usage that turned them off — it was the suspicion of it. “We form perceptions based on language cues, and it’s really the language properties that drive those impressions,” says Malte Jung, Associate Professor of Information Science at Cornell University and a co-author of the study.

This paradox — AI improving communication while fostering suspicion — points to a deeper loss of trust, according to Mor Naaman, Professor of Information Science at Cornell Tech. He has identified three levels of human signals that we’ve lost in adopting AI into our communication. The first level is that of basic humanity signals, cues that speak to our authenticity as a human being like moments of vulnerability or personal rituals, which say to others, “This is me, I’m human.” The second level consists of attention and effort signals that prove “I cared enough to write this myself.” And the third level is ability signals which show our sense of humor, our competence, and our real selves to others. It’s the difference between texting someone, “I’m sorry you’re upset” versus “Hey sorry I freaked at dinner, I probably shouldn’t have skipped therapy this week.” One sounds flat; the other sounds human.

Here are some thoughts:

The growing influence of AI language models like ChatGPT on everyday language, as highlighted in the article, carries real implications for practicing psychologists. As these models shape linguistic trends, boosting the use of certain words and phrases, patients may unconsciously adopt those patterns in therapy sessions. This shift could reflect broader cultural changes in communication, affecting how individuals articulate emotions, experiences, and personal narratives. Psychologists should stay attuned to these developments, since AI-mediated language may introduce subtle biases or homogenized expressions that shape self-reporting and therapeutic dialogue.

Additionally, the rise of AI-generated content underscores the importance of digital literacy in mental health care. Many patients may turn to chatbots for support, making it essential for psychologists to help them critically assess the reliability and limitations of such tools. AI's linguistic influence also matters for research: qualitative studies and diagnostic instruments that rely on natural language analysis may drift if AI-inflected phrasing becomes the norm in how people describe their experiences. By recognizing these trends, psychologists can better navigate the evolving relationship among technology, language, and mental health, and continue to provide informed, adaptive care in an increasingly AI-influenced world.