Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Thursday, April 11, 2024
FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?
Sunday, October 29, 2023
We Can't Compete With AI Girlfriends
Friday, May 5, 2023
Is the world ready for ChatGPT therapists?
Sunday, March 12, 2023
Growth of AI in mental health raises fears of its ability to run wild
The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.
Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.
- Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
- The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.
What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.
- The technology is being deployed to analyze patient conversations and sift through text messages, making recommendations based on what patients tell their doctors.
- It's also predicting opioid addiction risk, detecting mental health disorders like depression and could soon design drugs to treat opioid use disorder.
Driving the news: Fears now center on whether the technology is crossing a line into making clinical decisions, and on what the Food and Drug Administration is doing to prevent safety risks to patients.
- Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
- Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.
Tuesday, April 14, 2020
New Data Rules Could Empower Patients but Undermine Their Privacy
The New York Times
Originally posted 9 March 20
Here is an excerpt:
The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.
Giving people access to their medical records via mobile apps is a major milestone for patient rights, even as it may heighten risks to patient privacy.
Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.
Although Americans have had the legal right to obtain a copy of their personal health information for two decades, many people face obstacles in getting that data from providers.
Some physicians still require patients to pick up computer disks — or even photocopies — of their records in person. Some medical centers use online portals that offer access to basic health data, like immunizations, but often do not include information like doctors’ consultation notes that might help patients better understand their conditions and track their progress.
The new rules are intended to shift that power imbalance toward the patient.
The info is here.
Thursday, January 23, 2020
Colleges want freshmen to use mental health apps. But are they risking students’ privacy?
The New York Times
Originally posted 2 Jan 20
Here are two excerpts:
TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.
The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three out of 5 U.S. college students experience overwhelming anxiety, and 2 in 5 students reported debilitating depression, according to a 2018 survey from the American College Health Association.
Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.
(cut)
“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”
Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.
“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.
The info is here.
Tuesday, January 7, 2020
Can Artificial Intelligence Increase Our Morality?
psychologytoday.com
Originally posted 9 Dec 19
Here is an excerpt:
For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I raised the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, democratic—and so China’s social credit system feels Orwellian, but many in China don’t mind it.
The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?
The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.
The info is here.
Thursday, September 13, 2018
Meet the Chatbots Providing Mental Health Care
Wall Street Journal
Originally published Aug. 9, 2018
Here is an excerpt:
Wysa Ltd., a London- and Bangalore-based startup, is testing a free chatbot to teach adolescents emotional resilience, said co-founder Ramakant Vempati. In the app, a chubby penguin named Wysa helps users evaluate the sources of their stress and provides tips on how to stay positive, like thinking of a loved one or spending time outside. The company said its 400,000 users, most of whom are under 35, have had more than 20 million conversations with the bot.
Wysa is a wellness app, not a medical intervention, Vempati said, but it relies on cognitive behavioral therapy, mindfulness techniques and meditations that are “known to work in a self-help context.” If a user expresses thoughts of self-harm, Wysa reminds them that it’s just a bot and provides contact information for crisis hotlines. Alternatively, for $30 a month, users can access unlimited chat sessions with a human “coach.” Other therapy apps, such as Talkspace, offer similar low-cost services with licensed professionals.
Chatbots have potential, said Beth Jaworski, a mobile apps specialist at the National Center for PTSD in Menlo Park, Calif. But definitive research on whether they can help patients with more serious conditions, like major depression, still hasn’t been done, in part because the technology is so new, she said. Clinicians also worry about privacy. Mental health information is sensitive data; turning it over to companies could have unforeseen consequences.
The article is here.
Thursday, November 30, 2017
Artificial Intelligence & Mental Health
Chatbot News Daily
Originally posted
Here is an excerpt:
There are many barriers to getting quality mental healthcare, from finding a provider who practices in a user's geographic area to screening multiple potential therapists to find someone you feel comfortable speaking with. The stigma associated with seeking mental health treatment often leaves people suffering in silence. These barriers stop many people from finding help, and AI is being looked at as a potential tool to bridge the gap between service providers and service users.
Imagine how many people would benefit if artificial intelligence could bring quality, affordable mental health support to anyone with an internet connection. A psychiatrist or psychologist examines a person’s tone, word choice, phrase length and other cues that are crucial to understanding what’s going on in someone’s mind. Researchers are now applying machine learning to diagnose mental disorders. Harvard University and University of Vermont researchers are working on combining machine learning tools with Instagram to improve depression screening; using color analysis, metadata and algorithmic face detection, they reached 70 percent accuracy in detecting signs of depression. IBM’s research wing is using transcripts and audio from psychiatric interviews, coupled with machine learning, to find patterns in speech that help clinicians predict and monitor psychosis, schizophrenia, mania and depression. And research led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, showed that machine learning can be up to 93 percent accurate in identifying a suicidal person.
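The screening pipeline the excerpt describes — extract features from a person's posts, apply a learned decision rule, then measure accuracy against known outcomes — can be sketched in miniature. Everything below is invented for illustration (the feature names, the brightness threshold, and the toy data are assumptions, not the researchers' actual method or results); real studies fit classifiers over far richer features and clinically validated labels.

```python
# Purely illustrative sketch of a screening pipeline: featurize posts,
# apply a decision rule, score accuracy. All names and values are made up.

def extract_features(post):
    # Hypothetical per-post features: mean image brightness (0-1) and
    # number of faces detected. Real systems use many more signals.
    return (post["brightness"], post["faces"])

def screen(post, brightness_cutoff=0.45):
    # Toy rule echoing the finding that posts from depressed users
    # tended to be darker: flag posts below a brightness cutoff.
    brightness, _faces = extract_features(post)
    return brightness < brightness_cutoff

# Toy labeled data: "label" marks whether screening should have flagged it.
posts = [
    {"brightness": 0.30, "faces": 1, "label": True},
    {"brightness": 0.70, "faces": 3, "label": False},
    {"brightness": 0.50, "faces": 0, "label": True},   # missed by the rule
    {"brightness": 0.60, "faces": 2, "label": False},
]

correct = sum(screen(p) == p["label"] for p in posts)
accuracy = correct / len(posts)
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 75%"
```

The point of the sketch is that reported figures like "70 percent accuracy" are exactly this kind of held-out comparison between a model's flags and known labels, so they say as much about the evaluation data as about the model.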
The post is here.
Saturday, April 9, 2016
Machines That Will Think and Feel
The Wall Street Journal
Originally published March 18, 2016
Here is an excerpt:
AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake.
Suppose you had a fleet of AI software apps with IQs of 150 (and eventually 500 or 5,000) to help you manage life. You download them like other apps, and they spread out into your phones and computers—and walls, clothes, office, car, luggage—traveling within the dense computer network of the near future that is laid in by the yard, like thin cloth, everywhere.
AI apps will read your email and write responses, awaiting your nod to send them. They will escort your tax return to the IRS, monitor what is done and report back. They will murmur (from your collar, maybe) that the sidewalk is icier than it looks, a friend is approaching across the street, your gait is slightly odd—have you hurt your back?
The article is here.
Thursday, March 31, 2016
Things are looking app
Originally posted March 12, 2016
Here is an excerpt:
Constant, wireless-linked monitoring may spare patients much suffering, by spotting incipient signs of their condition deteriorating. It may also spare health providers and insurers many expensive hospital admissions. When Britain’s National Health Service tested the cost-effectiveness of remote support for patients with chronic obstructive pulmonary disease, it found that an electronic tablet paired with sensors measuring vital signs could result in better care and enormous savings, by enabling early intervention. Some m-health products may prove so effective that doctors begin to provide them on prescription.
So far, big drugmakers have been slow to join the m-health revolution, though there are some exceptions. HemMobile by Pfizer, and Beat Bleeds by Baxter, help patients to manage haemophilia. Bayer, the maker of Clarityn, an antihistamine drug, has a popular pollen-forecasting app. GSK, a drug firm with various asthma treatments, offers sufferers the MyAsthma app, to help them manage their condition.
The article is here.
Wednesday, October 9, 2013
F.D.A. to Regulate Some Health Apps
The New York Times
Published: September 23, 2013
The Food and Drug Administration said Monday that it would regulate only a small portion of the rapidly expanding universe of mobile health applications, software programs that run on smartphones and tablets and perform the same functions as medical devices.
Agency officials said their goal is to oversee apps that function like medical devices, performing ultrasounds, for example, and that could potentially pose risks to patients. Tens of thousands of health apps have sprung up in recent years, including apps that count steps or calories for fitness and weight loss, but agency officials said they would not regulate those types of apps.
The entire story is here.
Saturday, September 28, 2013
Girl’s Suicide Points to Rise in Apps Used by Cyberbullies
The New York Times
Published: September 13, 2013
Here is an excerpt:
In jumping, Rebecca became one of the youngest members of a growing list of children and teenagers apparently driven to suicide, at least in part, after being maligned, threatened and taunted online, mostly through a new collection of texting and photo-sharing cellphone applications. Her suicide raises new questions about the proliferation and popularity of these applications and Web sites among children and the ability of parents to keep up with their children’s online relationships.
For more than a year, Rebecca, pretty and smart, was cyberbullied by a coterie of 15 middle-school children who urged her to kill herself, her mother said.
The entire story is here.
Thursday, August 22, 2013
Health, fitness apps pose HIPAA risks for doctors
By SUE TER MAAT
amednews.com
Posted Aug. 5, 2013
Physicians might think twice about advising patients to use some mobile health and fitness apps. A July report indicates that many of those apps compromise patients’ privacy. Just recommending apps may put doctors at risk for violations of the Health Insurance Portability and Accountability Act.
“Even suggesting an app to patients — that’s a gray area,” said Marion Neal, owner of HIPAASimple.com, a HIPAA consulting firm for physicians in private practice. “Doctors should avoid recommending apps unless they are well-established to be secure.”
The entire story is here.