Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Apps. Show all posts

Thursday, April 11, 2024

FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?

Frank Vinluan
Originally posted 1 April 24

A software app that modifies behavior through a series of lessons and exercises has received FDA clearance for treating patients with major depressive disorder, making it the first prescription digital therapeutic for this indication.

The product, known as CT-152 during its development by partners Otsuka Pharmaceutical and Click Therapeutics, will be commercialized under the brand name Rejoyn.

Rejoyn is an alternative way to offer cognitive behavioral therapy, a type of talk therapy in which a patient works with a clinician in a series of in-person sessions. In Rejoyn, the cognitive behavioral therapy lessons, exercises, and reminders are digitized. The treatment is intended for use three times weekly for six weeks, though lessons may be revisited for an additional four weeks. The app was initially developed by Click Therapeutics, a startup that develops apps that use exercises and tasks to retrain and rewire the brain. In 2019, Otsuka and Click announced a collaboration in which the Japanese pharma company would fully fund development of the depression app.

Here is a quick summary:

Rejoyn is the first prescription digital therapeutic (PDT) authorized by the FDA for the adjunctive treatment of major depressive disorder (MDD) symptoms in adults. 

Rejoyn is a 6-week remote treatment program that combines clinically validated cognitive emotional training exercises and brief therapeutic lessons to help enhance cognitive control of emotions. The app aims to improve connections in the brain regions affected by depression, allowing the areas responsible for processing and regulating emotions to work better together and reduce MDD symptoms. 

The FDA clearance for Rejoyn was based on data from a 13-week pivotal clinical trial that compared the app to a sham control app in 386 participants aged 22-64 with MDD who were taking antidepressants. The study found that Rejoyn users showed a statistically significant improvement in depression symptom severity compared to the control group, as measured by clinician-reported and patient-reported scales. No adverse effects were observed during the trial. 
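The headline result rests on a standard two-group comparison of symptom-change scores between the app and the sham control. As a rough illustration of the statistics involved (with made-up numbers, not trial data), Welch's t-test on change-from-baseline scores looks like this:

```python
import math

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples of change scores."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical change-from-baseline depression scores (negative = improvement)
app_group = [-8, -9, -7, -10, -8]
sham_group = [-2, -3, -1, -2, -2]

t = welch_t(app_group, sham_group)
print(round(abs(t), 2))  # prints 10.67; a large |t| suggests a real group difference
```

In the actual trial, the analogous comparison would be run on clinician-reported and patient-reported scale scores across all 386 participants, with a pre-registered analysis plan rather than a toy calculation.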

Rejoyn is expected to be available for download on iOS and Android devices in the second half of 2024. It represents a novel, clinically validated digital therapeutic option that can be used as an adjunct to traditional MDD treatments under the guidance of healthcare providers.

Sunday, October 29, 2023

We Can't Compete With AI Girlfriends

Freya India
Originally published 14 September 23

Here is an excerpt:

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

Now add to all that a world of fully customisable AI girlfriends, each with flawless avatar faces and cartoonish body proportions. Eva AI’s Dream Girl Builder, for example, allows users to personalise every feature of their virtual girlfriend, from face style to butt size. Which could clearly be unhealthy for men who already have warped expectations. But it’s also unhealthy for a generation of girls already hating how they look, suffering with facial and body dysmorphia, and seeking cosmetic surgery in record numbers. Already many girls feel as if they are in constant competition with hyper-sexualised Instagram influencers and infinitely accessible porn stars. Now the next generation will grow up not just with all that but knowing the boys they like can build and sext their ideal woman, and feeling as if they must constantly modify themselves to compete. I find that tragic.


The article discusses the growing trend of AI girlfriends and the potential dangers associated with their proliferation. It mentions that various startups are creating romantic chatbots capable of explicit conversations and sexual content, with millions of users downloading such apps. While much of the concern focuses on the impact on men, the article also highlights the negative consequences this trend may have on women, particularly in terms of unrealistic beauty standards and emotional expectations. The author expresses concerns about young girls feeling pressured to compete with AI girlfriends and the potential harm to self-esteem and body image. The article raises questions about the impact of AI girlfriends on real relationships and emotional intimacy, particularly among younger generations. It concludes with a glimmer of hope that people may eventually reject the artificial in favor of authentic human interactions.

The article raises valid concerns about the proliferation of AI girlfriends and their potential societal impacts. It is indeed troubling to think about the unrealistic beauty and emotional standards that these apps may reinforce, especially among young girls and women. The pressure to conform to these virtual ideals can undoubtedly have damaging effects on self-esteem and mental well-being.

The article also highlights concerns about the potential substitution of real emotional intimacy with AI companions, particularly among a generation that is already grappling with social anxieties and less real-world human interaction. This raises important questions about the long-term consequences of such technologies on relationships and societal dynamics.

However, the article's glimmer of optimism suggests that people may eventually realize the value of authentic, imperfect human interactions. This point is essential, as it underscores the potential for a societal shift away from excessive reliance on AI and towards more genuine connections.

In conclusion, while AI girlfriends may offer convenience and instant gratification, they also pose significant risks to societal norms and emotional well-being. It is crucial for individuals and society as a whole to remain mindful of these potential consequences and prioritize real human connections and authenticity.

Friday, May 5, 2023

Is the world ready for ChatGPT therapists?

Ian Graber-Stiehl
Originally posted 3 May 23

Since 2015, Koko, a mobile mental-health app, has tried to provide crowdsourced support for people in need. Text the app to say that you’re feeling guilty about a work issue, and an empathetic response will come through in a few minutes — clumsy perhaps, but unmistakably human — to suggest some positive coping strategies.

The app might also invite you to respond to another person’s plight while you wait. To help with this task, an assistant called Kokobot can suggest some basic starters, such as “I’ve been there”.

But last October, some Koko app users were given the option to receive much-more-complete suggestions from Kokobot. These suggestions were preceded by a disclaimer, says Koko co-founder Rob Morris, who is based in Monterey, California: “I’m just a robot, but here’s an idea of how I might respond.” Users were able to edit or tailor the response in any way they felt was appropriate before they sent it.

What they didn’t know at the time was that the replies were written by GPT-3, the powerful artificial-intelligence (AI) tool that can process and produce natural text, thanks to a massive written-word training set. When Morris eventually tweeted about the experiment, he was surprised by the criticism he received. “I had no idea I would create such a fervour of discussion,” he says.


Automated therapist

Koko is far from the first platform to implement AI in a mental-health setting. Broadly, machine-learning-based AI has been implemented or investigated in the mental-health space in three roles.

The first has been the use of AI to analyse therapeutic interventions, to fine-tune them down the line. Two high-profile examples, ieso and Lyssn, train their natural-language-processing AI on therapy-session transcripts. Lyssn, a program developed by scientists at the University of Washington in Seattle, analyses dialogue against 55 metrics, from providers’ expressions of empathy to the employment of CBT interventions. ieso, a provider of text-based therapy based in Cambridge, UK, has analysed more than half a million therapy sessions, tracking the outcomes to determine the most effective interventions. Both essentially give digital therapists notes on how they’ve done, but each service aims to provide a real-time tool eventually: part advising assistant, part grading supervisor.
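Lyssn's actual models are proprietary, but the general idea of scoring a transcript against behavioral metrics can be caricatured in a few lines. This sketch uses a hypothetical keyword list, nothing like Lyssn's 55 trained metrics, to score the fraction of therapist utterances that contain an empathy marker:

```python
# Hypothetical empathy markers; real systems use trained NLP models,
# not keyword lists.
EMPATHY_MARKERS = ("i hear you", "that sounds", "i understand", "it makes sense")

def empathy_score(utterances):
    """Fraction of therapist utterances containing at least one empathy marker."""
    if not utterances:
        return 0.0
    hits = sum(
        any(marker in u.lower() for marker in EMPATHY_MARKERS)
        for u in utterances
    )
    return hits / len(utterances)

session = [
    "That sounds really difficult.",
    "Let's schedule your next visit.",
    "I hear you, and it makes sense to feel that way.",
    "Try the breathing exercise this week.",
]
print(empathy_score(session))  # 0.5: two of four utterances contain a marker
```

A production system would score each of the 55 metrics with its own model and aggregate them into the "grading supervisor" feedback the article describes.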

The second role for AI has been in diagnosis. A number of platforms, such as the REACH VET program for US military veterans, scan a person’s medical records for red flags that might indicate issues such as self-harm or suicidal ideation. This diagnostic work, says Torous, is probably the most immediately promising application of AI in mental health, although he notes that most of the nascent platforms require much more evaluation. Some have struggled. Earlier this year, MindStrong, a nearly decade-old app that initially aimed to leverage AI to identify early markers of depression, collapsed despite early investor excitement and a high-profile scientist co-founder, Tom Insel, the former director of the US National Institute of Mental Health.

Sunday, March 12, 2023

Growth of AI in mental health raises fears of its ability to run wild

Sabrina Moreno
Originally posted 9 March 23

Here's how it begins:

The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.

Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.

  • Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
  • The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.

What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.

  • The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what we tell doctors.
  • It's also predicting opioid addiction risk, detecting mental health disorders like depression and could soon design drugs to treat opioid use disorder.

Driving the news: The fear is now concentrated around whether the technology is beginning to cross a line and make clinical decisions, and what the Food and Drug Administration is doing to prevent safety risks to patients.

  • Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
  • Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.

Tuesday, April 14, 2020

New Data Rules Could Empower Patients but Undermine Their Privacy

Natasha Singer
The New York Times
Originally posted 9 March 20

Here is an excerpt:

The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.

Giving people access to their medical records via mobile apps is a major milestone for patient rights, even as it may heighten risks to patient privacy.

Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.

Although Americans have had the legal right to obtain a copy of their personal health information for two decades, many people face obstacles in getting that data from providers.

Some physicians still require patients to pick up computer disks — or even photocopies — of their records in person. Some medical centers use online portals that offer access to basic health data, like immunizations, but often do not include information like doctors’ consultation notes that might help patients better understand their conditions and track their progress.

The new rules are intended to shift that power imbalance toward the patient.

The info is here.

Thursday, January 23, 2020

Colleges want freshmen to use mental health apps. But are they risking students’ privacy?

Deanna Paul
The Washington Post
Originally posted 2 Jan 20

Here are two excerpts:

TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.

The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three out of 5 U.S. college students experience overwhelming anxiety, and 2 in 5 students reported debilitating depression, according to a 2018 survey from the American College Health Association.

Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.


“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”

Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.

“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.

The info is here.

Tuesday, January 7, 2020

Can Artificial Intelligence Increase Our Morality?

Matthew Hutson
Originally posted 9 Dec 19

Here is an excerpt:

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I raised the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, democratic—and so China’s social credit system feels Orwellian, but many in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.

The info is here.

Thursday, September 13, 2018

Meet the Chatbots Providing Mental Health Care

Daniela Hernandez
Wall Street Journal
Originally published Aug. 9, 2018

Here is an excerpt:

Wysa Ltd., a London- and Bangalore-based startup, is testing a free chatbot to teach adolescents emotional resilience, said co-founder Ramakant Vempati.  In the app, a chubby penguin named Wysa helps users evaluate the sources of their stress and provides tips on how to stay positive, like thinking of a loved one or spending time outside.  The company said its 400,000 users, most of whom are under 35, have had more than 20 million conversations with the bot.

Wysa is a wellness app, not a medical intervention, Vempati said, but it relies on cognitive behavioral therapy, mindfulness techniques and meditations that are “known to work in a self-help context.”  If a user expresses thoughts of self-harm, Wysa reminds them that it’s just a bot and provides contact information for crisis hotlines.  Alternatively, for $30 a month, users can access unlimited chat sessions with a human “coach.”  Other therapy apps, such as Talkspace, offer similar low-cost services with licensed professionals.

Chatbots have potential, said Beth Jaworski, a mobile apps specialist at the National Center for PTSD in Menlo Park, Calif.  But definitive research on whether they can help patients with more serious conditions, like major depression, still hasn’t been done, in part because the technology is so new, she said.  Clinicians also worry about privacy.  Mental health information is sensitive data; turning it over to companies could have unforeseen consequences.

The article is here.

Thursday, November 30, 2017

Artificial Intelligence & Mental Health

Smriti Joshi
Chatbot News Daily
Originally posted

Here is an excerpt:

There are many barriers to getting quality mental healthcare, from searching for a provider who practices in a user's geographical location to screening multiple potential therapists in order to find someone you feel comfortable speaking with. The stigma associated with seeking mental health treatment often leaves people silently suffering from a psychological issue. These barriers stop many people from finding help, and AI is being looked at as a potential tool to bridge this gap between service providers and service users.

Imagine how many people would benefit if artificial intelligence could bring quality and affordable mental health support to anyone with an internet connection. A psychiatrist or psychologist examines a person’s tone, word choice, and phrase length, all crucial cues to understanding what’s going on in someone’s mind. Researchers are now applying machine learning to diagnose people with mental disorders. Harvard University and University of Vermont researchers are working on integrating machine learning tools and Instagram to improve depression screening. Using color analysis, metadata, and algorithmic face detection, they were able to reach 70 percent accuracy in detecting signs of depression. The research wing at IBM is using transcripts and audio from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania, and depression. Research led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, showed that machine learning is up to 93 percent accurate in identifying a suicidal person.
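The accuracy figures quoted above come from evaluating a trained classifier against labeled cases. As a deliberately simplified, hypothetical illustration (a keyword screen, nothing like the published models), accuracy is just the fraction of labels a predictor gets right:

```python
# Toy screen: flag text containing hypothetical risk words. Real systems
# train models over tone, word choice, metadata, and imagery instead.
RISK_WORDS = ("hopeless", "worthless", "can't go on")

def flag(text):
    """Return True if the text contains any of the toy risk words."""
    t = text.lower()
    return any(w in t for w in RISK_WORDS)

def accuracy(samples):
    """samples: list of (text, true_label) pairs; returns fraction correct."""
    correct = sum(flag(text) == label for text, label in samples)
    return correct / len(samples)

labeled = [
    ("I feel hopeless lately", True),
    ("Great day at the park", False),
    ("Everything is fine", False),
    ("I feel worthless", True),
    ("Tired but okay", True),  # missed by the keyword screen
]
print(accuracy(labeled))  # 0.8: four of five toy cases classified correctly
```

The published 70 and 93 percent figures are computed the same way in spirit, but against clinically labeled datasets and with far richer features than keywords.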

The post is here.

Saturday, April 9, 2016

Machines That Will Think and Feel

By David Gelernter
The Wall Street Journal
Originally published March 18, 2016

Here is an excerpt:

AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake.

Suppose you had a fleet of AI software apps with IQs of 150 (and eventually 500 or 5,000) to help you manage life. You download them like other apps, and they spread out into your phones and computers—and walls, clothes, office, car, luggage—traveling within the dense computer network of the near future that is laid in by the yard, like thin cloth, everywhere.

AI apps will read your email and write responses, awaiting your nod to send them. They will escort your tax return to the IRS, monitor what is done and report back. They will murmur (from your collar, maybe) that the sidewalk is icier than it looks, a friend is approaching across the street, your gait is slightly odd—have you hurt your back?

The article is here.

Thursday, March 31, 2016

Things are looking app

The Economist
Originally posted March 12, 2016

Here is an excerpt:

Constant, wireless-linked monitoring may spare patients much suffering, by spotting incipient signs of their condition deteriorating. It may also spare health providers and insurers many expensive hospital admissions. When Britain’s National Health Service tested the cost-effectiveness of remote support for patients with chronic obstructive pulmonary disease, it found that an electronic tablet paired with sensors measuring vital signs could result in better care and enormous savings, by enabling early intervention. Some m-health products may prove so effective that doctors begin to provide them on prescription.

So far, big drugmakers have been slow to join the m-health revolution, though there are some exceptions. HemMobile by Pfizer, and Beat Bleeds by Baxter, help patients to manage haemophilia. Bayer, the maker of Clarityn, an antihistamine drug, has a popular pollen-forecasting app. GSK, a drug firm with various asthma treatments, offers sufferers the MyAsthma app, to help them manage their condition.

The article is here.

Wednesday, October 9, 2013

F.D.A. to Regulate Some Health Apps

The New York Times
Published: September 23, 2013

The Food and Drug Administration said Monday that it would regulate only a small portion of the rapidly expanding universe of mobile health applications, software programs that run on smartphones and tablets and perform the same functions as medical devices.

Agency officials said their goal is to oversee apps that function like medical devices, performing ultrasounds, for example, and that could potentially pose risks to patients. Tens of thousands of health apps have sprung up in recent years, including apps that count steps or calories for fitness and weight loss, but agency officials said they would not regulate those types of apps.

The entire story is here.

Saturday, September 28, 2013

Girl’s Suicide Points to Rise in Apps Used by Cyberbullies

The New York Times
Published: September 13, 2013

Here is an excerpt:

In jumping, Rebecca became one of the youngest members of a growing list of children and teenagers apparently driven to suicide, at least in part, after being maligned, threatened and taunted online, mostly through a new collection of texting and photo-sharing cellphone applications. Her suicide raises new questions about the proliferation and popularity of these applications and Web sites among children and the ability of parents to keep up with their children’s online relationships.

For more than a year, Rebecca, pretty and smart, was cyberbullied by a coterie of 15 middle-school children who urged her to kill herself, her mother said.

The entire story is here.

Thursday, August 22, 2013

Health, fitness apps pose HIPAA risks for doctors

Physicians should check apps’ privacy protections before suggesting them to patients. A new report says most apps — especially free ones — don’t offer much privacy.

Posted Aug. 5, 2013

Physicians might think twice about advising patients to use some mobile health and fitness apps. A July report indicates that many of those apps compromise patients’ privacy. Just recommending apps may put doctors at risk for violations of the Health Insurance Portability and Accountability Act.

“Even suggesting an app to patients — that’s a gray area,” said Marion Neal, owner of HIPAASimple.com, a HIPAA consulting firm for physicians in private practice. “Doctors should avoid recommending apps unless they are well-established to be secure.”

The entire story is here.