Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Wednesday, January 3, 2024
Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight
Tuesday, December 12, 2023
Health Insurers Have Been Breaking State Laws for Years
Saturday, November 18, 2023
Resolving the battle of short- vs. long-term AI risks
Tuesday, October 17, 2023
Tackling healthcare AI's bias, regulatory and inventorship challenges
- Biases introduced by AI. Provider organizations must be mindful of how machine learning handles racial diversity, gender and genetics in practice so that it supports the best outcomes for patients.
- Inventorship claims on intellectual property. Identifying ownership of IP becomes harder as AI begins to develop solutions faster and more capably than humans can.
A. Generative AI is a type of machine learning that can create new content based on training with existing data. But what happens when that training set comes from data with inherent bias? Biases can appear in many forms within AI, starting with the training data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used to discover a new drug, the generative AI model may produce a drug that works only in a subset of the population, or one with only partial functionality. Desirable traits of novel drugs include stronger binding to the target and lower toxicity. If the training set excludes patients of a certain gender or race (and the genetic differences inherent therein), the proposed drug compounds will not be as robust as those derived from a diverse training set.

This leads into questions of ethics and policy: the most marginalized patients, who need the most help, could be the very group excluded from the solution because they were not represented in the data the generative AI model used to discover the new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many racial backgrounds? Genders? Age ranges? By making sure there is reasonable representation of gender, race and genetics in the initial training set, generative AI models can accelerate drug discovery in a way that benefits most of the population.
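As a concrete illustration of that curation step, here is a minimal sketch in Python (using pandas) that audits whether each demographic group in a patient-sample table meets a minimum representation share before the data is used for training. The column names (sex, race, age_band) and the 5% threshold are assumptions for illustration, not part of any specific drug-discovery pipeline.

```python
# Minimal sketch: audit demographic representation in a training cohort
# before using it to train a generative model. Column names and the 5%
# threshold are illustrative assumptions, not a standard.
import pandas as pd

MIN_SHARE = 0.05  # assumed minimum share for any single group


def representation_report(df: pd.DataFrame, columns: list[str]) -> dict:
    """Return each category's share and flag under-represented groups."""
    report = {}
    for col in columns:
        shares = df[col].value_counts(normalize=True)
        report[col] = {
            "shares": shares.round(3).to_dict(),
            "under_represented": shares[shares < MIN_SHARE].index.tolist(),
        }
    return report


if __name__ == "__main__":
    # Hypothetical patient-sample table; a real cohort would be loaded from a file.
    cohort = pd.DataFrame({
        "sex": ["F", "F", "M", "F", "M", "F", "F", "F"],
        "race": ["A", "A", "A", "B", "A", "A", "A", "A"],
        "age_band": ["18-39", "18-39", "40-64", "18-39", "40-64", "18-39", "18-39", "65+"],
    })
    for col, info in representation_report(cohort, ["sex", "race", "age_band"]).items():
        print(col, info)
```

A check like this would flag, for instance, a cohort in which one racial group makes up less than 5% of the samples, prompting additional data collection before the model is trained.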
- To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
- To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
- To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
Saturday, September 2, 2023
Do AI girlfriend apps promote unhealthy expectations for human relationships?
Monday, July 24, 2023
How AI can distort human beliefs
Thursday, July 20, 2023
Big tech is bad. Big A.I. will be worse.
Monday, June 5, 2023
Why Conscious AI Is a Bad, Bad Idea
Tuesday, May 30, 2023
Are We Ready for AI to Raise the Dead?
Sunday, March 12, 2023
Growth of AI in mental health raises fears of its ability to run wild
The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.
Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.
- Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
- The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.
What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.
- The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what we tell doctors.
- It's also being used to predict opioid addiction risk and detect mental health disorders like depression, and it could soon design drugs to treat opioid use disorder.
Driving the news: The fear is now concentrated around whether the technology is beginning to cross a line and make clinical decisions, and what the Food and Drug Administration is doing to prevent safety risks to patients.
- Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
- Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.