Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Speech Analysis.

Thursday, September 26, 2024

Decoding loneliness: Can explainable AI help in understanding language differences in lonely older adults?

Wang, N., et al. (2024).
Psychiatry Research, 339, 116078.

Abstract

Study objectives
Loneliness impacts the health of many older adults, yet effective and targeted interventions are lacking. Compared to surveys, speech data can capture the personalized experience of loneliness. In this proof-of-concept study, we used Natural Language Processing to extract novel linguistic features and AI approaches to identify linguistic features that distinguish lonely adults from non-lonely adults.

Methods
Participants completed UCLA loneliness scales and semi-structured interviews (sections: social relationships, loneliness, successful aging, meaning/purpose in life, wisdom, technology and successful aging). We used the Linguistic Inquiry and Word Count (LIWC-22) program to analyze linguistic features and built a classifier to predict loneliness. Each interview section was analyzed using an explainable AI (XAI) model to classify loneliness.
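For readers curious what such a pipeline looks like in code, here is a minimal sketch. It is an illustration only, not the study's pipeline: the authors used the licensed LIWC-22 program and a dedicated explainable AI (XAI) model, whereas the sketch below substitutes hypothetical word-category lists and plain logistic-regression coefficients for the explanation step.

    # Illustrative sketch only: hypothetical stand-ins for LIWC-22 categories,
    # with logistic-regression coefficients standing in for the XAI component.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical word-category lists (LIWC-22's dictionaries are licensed).
    CATEGORIES = {
        "i_pronouns": {"i", "me", "my", "mine", "myself"},
        "we_pronouns": {"we", "us", "our", "ours"},
        "fillers": {"um", "uh", "like"},
        "negative_emotion": {"sad", "alone", "lonely", "hurt"},
    }

    def liwc_style_features(transcript):
        """Fraction of words in the transcript falling into each category."""
        words = transcript.lower().split()
        total = max(len(words), 1)
        return np.array([sum(w in ws for w in words) / total
                         for ws in CATEGORIES.values()])

    # Toy interview snippets with binary labels (1 = lonely).
    data = [
        ("um i feel sad and alone most days", 1),
        ("we see our family every week and we enjoy it", 0),
        ("my days are lonely and i hurt uh i really do", 1),
        ("our friends visit us and we go out together", 0),
    ]
    X = np.vstack([liwc_style_features(t) for t, _ in data])
    y = np.array([label for _, label in data])

    clf = LogisticRegression().fit(X, y)

    # Crude "explanation": which categories push predictions toward lonely.
    for name, coef in zip(CATEGORIES, clf.coef_[0]):
        print(f"{name}: {coef:+.3f}")

In the paper itself, the XAI model plays the role of the coefficient read-out above, attributing the classification to specific interview sections and linguistic features.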

Results
The sample included 97 older adults (age 66–101 years, 65% women). The model had high accuracy (Accuracy: 0.889, AUC: 0.8), precision (F1: 0.8), and recall (1.0). The sections on social relationships and loneliness were most important for classifying loneliness. Social themes, conversational fillers, and pronoun usage were important features for classifying loneliness.
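For readers less familiar with these metrics, here is how such numbers are computed from a classifier's outputs; the labels and scores below are made up for illustration, not the study's data.

    from sklearn.metrics import accuracy_score, roc_auc_score, f1_score, recall_score

    y_true  = [1, 1, 1, 0, 0, 0, 1, 0]                     # 1 = lonely
    y_score = [0.9, 0.8, 0.7, 0.4, 0.2, 0.75, 0.85, 0.3]   # model probabilities
    y_pred  = [int(s >= 0.5) for s in y_score]             # threshold at 0.5

    print("Accuracy:", accuracy_score(y_true, y_pred))     # 0.875
    print("AUC:     ", roc_auc_score(y_true, y_score))     # 0.9375
    print("F1:      ", f1_score(y_true, y_pred))           # ~0.889
    print("Recall:  ", recall_score(y_true, y_pred))       # 1.0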

Conclusions
XAI approaches can be used to detect loneliness through the analyses of unstructured speech and to better understand the experience of loneliness.
------------

Here are some thoughts. AI has the potential to be a helpful tool for mental health professionals.

Researchers have demonstrated a promising new way to detect loneliness using artificial intelligence (AI). A recent study published in Psychiatry Research shows that AI can identify loneliness by analyzing unstructured speech. This approach could make it easier to recognize and address loneliness, particularly among older adults.

The analysis showed that lonely individuals frequently referenced social status, religion, and expressed more negative emotions. In contrast, non-lonely individuals focused on social connections, family, and lifestyle. Additionally, lonely individuals used more first-person singular pronouns, indicating a self-focused perspective, whereas non-lonely individuals used more first-person plural pronouns, suggesting a sense of inclusion and connection.

Furthermore, the study found that conversational fillers, non-fluencies, and internet slang were more prevalent in the speech of lonely individuals. Lonely individuals also used more causation conjunctions, indicating a tendency to provide detailed explanations of their experiences. These findings suggest that the way people communicate may reflect their feelings about social relationships.
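These cues can be approximated with simple word counts. A minimal sketch, again assuming hypothetical word lists rather than the licensed LIWC-22 dictionaries:

    # Per-transcript counts of the cues discussed above (illustrative lists).
    FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
    FIRST_PLURAL   = {"we", "us", "our", "ours", "ourselves"}
    FILLERS        = {"um", "uh", "er", "hmm"}
    CAUSAL         = {"because", "since", "therefore", "so"}

    def cue_counts(transcript):
        words = transcript.lower().split()
        return {
            "first_singular": sum(w in FIRST_SINGULAR for w in words),
            "first_plural":   sum(w in FIRST_PLURAL for w in words),
            "fillers":        sum(w in FILLERS for w in words),
            "causal":         sum(w in CAUSAL for w in words),
        }

    print(cue_counts("um i stay home because my friends moved so i am alone"))
    # {'first_singular': 3, 'first_plural': 0, 'fillers': 1, 'causal': 2}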

The AI model offers a scalable and less intrusive method for assessing loneliness, which can significantly impact mental and physical health, particularly in older adults. While the study has limitations, including a relatively small sample size, the researchers aim to expand their work to more diverse populations and explore how to better assess loneliness.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.
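Cox's point can be made concrete with a little base-rate arithmetic. A quick sketch with illustrative numbers, assuming the 70% figure means both 70% sensitivity and 70% specificity and that the inferred trait occurs in 5% of the population:

    # Why "70% accurate" can still be mostly wrong for a rare trait.
    # All numbers here are illustrative assumptions, not from the article.
    sensitivity, specificity, base_rate = 0.70, 0.70, 0.05

    true_pos  = sensitivity * base_rate              # 0.035
    false_pos = (1 - specificity) * (1 - base_rate)  # 0.285
    precision = true_pos / (true_pos + false_pos)

    print(f"Share of positive calls that are correct: {precision:.0%}")  # ~11%

Under those assumptions, roughly nine out of ten positive inferences would be wrong, which is why modest headline accuracies are far weaker than they sound.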

The info is here.