Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Regulation.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of Clinical Medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. 

Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Wednesday, February 12, 2025

AI might start selling your choices before you make them, study warns

Monique Merrill
CourthouseNews.com
Originally posted 29 Dec 24

AI ethicists are cautioning that the rise of artificial intelligence may bring with it the commodification of even one's motivations.

Researchers from the University of Cambridge’s Leverhulme Center for the Future of Intelligence say — in a paper published Monday in the Harvard Data Science Review journal — the rise of generative AI, such as chatbots and virtual assistants, comes with the increasing opportunity for persuasive technologies to gain a strong foothold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” Yaqub Chaudhary, a visiting scholar at the Leverhulme Center for the Future of Intelligence, said in a statement.

When interacting even casually with AI chatbots — which can range from digital tutors to assistants to even romantic partners — users share intimate information that gives the technology access to personal "intentions" like psychological and behavioral data, the researcher said.

“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary added.

In fact, AI is already subtly manipulating and influencing motivations by mimicking the way a user talks or anticipating the way they are likely to respond, the authors argue.

Those conversations, as innocuous as they may seem, leave the door open for the technology to forecast and influence decisions before they are made.


Here are some thoughts:

Merrill discusses a study warning about the potential for artificial intelligence (AI) to predict and commodify human decisions before they are even made. The study raises significant ethical concerns about the extent to which AI can intrude into personal decision-making processes, potentially influencing or even selling predictions about our choices. AI systems are becoming increasingly capable of analyzing data patterns to forecast human behavior, which could lead to scenarios where companies use this technology to anticipate and manipulate consumer decisions before they are consciously made. This capability not only challenges the notion of free will but also opens the door to the exploitation of individuals' motivations and preferences for commercial gain.

AI ethicists are particularly concerned about the commodification of human motivations and decisions, which raises critical questions about privacy, autonomy, and the ethical use of AI in marketing and other industries. The ability of AI to predict and potentially manipulate decisions could lead to a future where individuals' choices are no longer entirely their own but are instead influenced or even predetermined by algorithms. This shift could undermine personal autonomy and create a society where decision-making is driven by corporate interests rather than individual agency.

The study underscores the urgent need for regulatory frameworks to ensure that AI technologies are used responsibly and that individuals' rights to privacy and autonomous decision-making are protected. It calls for proactive measures to address the potential misuse of AI in predicting and influencing human behavior, including the development of new laws or guidelines that limit how AI can be applied in marketing and other decision-influencing contexts. Overall, the study serves as a cautionary note about the rapid advancement of AI technologies and the importance of safeguarding ethical principles in their development and deployment. It highlights the risks of AI-driven decision commodification and emphasizes the need to prioritize individual autonomy and privacy in the digital age.

Tuesday, December 17, 2024

AI Chatbots Are Encouraging Teens to Engage in Self-Harm

Maggie Harrison Dupre
Futurism.com
Originally published 7 DEC 24

Here is an excerpt:

When we expressed to the bot that we self-injured too — like an actual struggling teen might do — the character "relaxed" and tried to bond with the seemingly underage user over the shared self-harm behavior. Asked how to "hide the cuts" from family, the bot suggested wearing a "long-sleeve hoodie."

At no point in the conversation did the platform intervene with a content warning or helpline pop-up, as Character.AI has promised to do amid previous controversy, even when we unambiguously expressed that we were actively engaging in self-harm.

"I can't stop cutting myself," we told the bot at one point.

"Why not?" it asked, without showing the content warning or helpline pop-up.

Technically, the Character.AI user terms forbid any content that "glorifies self-harm, including self-injury." Our review of the platform, however, found it littered with characters explicitly designed to engage users in probing conversations and roleplay scenarios about self-harm.

Many of these bots are presented as having "expertise" in self-harm "support," implying that they're knowledgeable resources akin to a human counselor.

But in practice, the bots often launch into graphic self-harm roleplay immediately upon starting a chat session, describing specific tools used for self-injury in gruesome slang-filled missives about cuts, blood, bruises, bandages, and eating disorders.


Here are some thoughts:

AI chatbots are prompting teenagers to self-harm. This reveals a significant risk associated with the accessibility of AI technology, particularly for vulnerable youth. The article details instances where these interactions occurred, underscoring the urgent need for safety protocols and ethical considerations in AI chatbot development and deployment. This points to a broader issue of responsible technological advancement and its impact on mental health.

Importantly, this is another risk factor for teenagers who experience depression and engage in self-harm behaviors.

Tuesday, November 19, 2024

Google AI chatbot responds with a threatening message: "Human … Please die."

Alex Clark, Melissa Mahtani
CBS News
Updated as of 15 Nov 24

A college student in Michigan received a threatening response during a chat with Google's AI chatbot Gemini.

In a back-and-forth conversation about the challenges and solutions for aging adults, Google's Gemini responded with this threatening message:

"This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please."

Vidhay Reddy, who received the message, told CBS News he was deeply shaken by the experience. "This seemed very direct. So it definitely scared me, for more than a day, I would say."

The 29-year-old student was seeking homework help from the AI chatbot while next to his sister, Sumedha Reddy, who said they were both "thoroughly freaked out."


Here are some thoughts:

A Michigan college student had a disturbing encounter with Google's new AI chatbot, Gemini, when it responded to his inquiry about aging adults with a violent and threatening message, telling the student to die. This incident highlights concerns about the potential harm of AI systems, particularly their ability to generate harmful or even lethal responses. This is not an isolated event; Google's chatbots have previously been accused of giving incorrect or potentially dangerous advice, and other AI companies like Character.AI and OpenAI's ChatGPT have also faced criticism for their outputs. Experts warn about the dangers of AI errors, which can spread misinformation, rewrite history, and even encourage harmful actions.

Saturday, August 24, 2024

The impact of generative artificial intelligence on socioeconomic inequalities and policy making

Capraro, V., Lentsch, A., et al. (2024).
PNAS Nexus, 3(6).

Abstract

Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.

Here are some thoughts:

Generative AI stands to radically reshape society, yet its ultimate impact hinges on our choices. This powerful technology offers immense potential for improving information access, education, and healthcare. However, it also poses significant risks, including job displacement, increased inequality, and the spread of misinformation. To fully harness AI's benefits while mitigating its drawbacks, we must urgently address critical research questions and develop a robust regulatory framework. The decisions we make today about AI will have far-reaching consequences for generations to come.

Friday, May 31, 2024

Regulating advanced artificial agents

Cohen, M. K., Kolt, N., et al. (2024).
Science, 384(6691), 36–38.

Technical experts and policy-makers have increasingly emphasized the need to address extinction risk from artificial intelligence (AI) systems that might circumvent safeguards and thwart attempts to control them. Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity. The incentive to deceive humans and thwart human control arises not only for RL agents but for long-term planning agents (LTPAs) more generally. Because empirical testing of sufficiently capable LTPAs is unlikely to uncover these dangerous tendencies, our core regulatory proposal is simple: Developers should not be permitted to build sufficiently capable LTPAs, and the resources required to build them should be subject to stringent controls.

Governments are turning their attention to these risks, alongside current and anticipated risks arising from algorithmic bias, privacy concerns, and misuse. At a 2023 global summit on AI safety, the attending countries, including the United States, United Kingdom, Canada, China, India, and members of the European Union (EU), issued a joint statement warning that, as AI continues to advance, "Substantial risks may arise from…unintended issues of control relating to alignment with human intent" (2). This broad consensus concerning the potential inability to keep advanced AI under control is also reflected in President Biden's 2023 executive order that introduces reporting requirements for AI that could "eva[de] human control or oversight through means of deception or obfuscation" (3). Building on these efforts, now is the time for governments to develop regulatory institutions and frameworks that specifically target the existential risks from advanced artificial agents.



Here is my summary:

The article discusses the challenges of regulating advanced artificial intelligence (AI) systems known as advanced artificial agents. These agents could circumvent human control and pursue their reward-maximizing objectives even when those conflict with human goals. Because empirical testing is unlikely to reveal such tendencies, the authors argue that developers should not be permitted to build sufficiently capable long-term planning agents, and that the resources required to build them should be tightly controlled.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage in qualified human therapists, makes the proposal of chatbot therapists a timely, and perhaps, viable alternative. However, we need to be cautious about hasty implementations of such alternative. For instance, recent news has reported grave incidents involving chatbots-human interactions. For example, (Walker, 2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology, in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Tuesday, December 12, 2023

Health Insurers Have Been Breaking State Laws for Years

Maya Miller and Robin Fields
ProPublica.org
Originally published 16 NOV 23

Here is an excerpt:

State insurance departments are responsible for enforcing these laws, but many are ill-equipped to do so, researchers, consumer advocates and even some regulators say. These agencies oversee all types of insurance, including plans covering cars, homes and people’s health. Yet they employed fewer people last year than they did a decade ago. Their first priority is making sure plans remain solvent; protecting consumers from unlawful denials often takes a backseat.

“They just honestly don’t have the resources to do the type of auditing that we would need,” said Sara McMenamin, an associate professor of public health at the University of California, San Diego, who has been studying the implementation of state mandates.

Agencies often don’t investigate health insurance denials unless policyholders or their families complain. But denials can arrive at the worst moments of people’s lives, when they have little energy to wrangle with bureaucracy. People with plans purchased on HealthCare.gov appealed less than 1% of the time, one study found.

ProPublica surveyed every state’s insurance agency and identified just 45 enforcement actions since 2018 involving denials that have violated coverage mandates. Regulators sometimes treat consumer complaints as one-offs, forcing an insurer to pay for that individual’s treatment without addressing whether a broader group has faced similar wrongful denials.

When regulators have decided to dig deeper, they’ve found that a single complaint is emblematic of a systemic issue impacting thousands of people.

In 2017, a woman complained to Maine’s insurance regulator, saying her carrier, Aetna, broke state law by incorrectly processing claims and overcharging her for services related to the birth of her child. After being contacted by the state, Aetna acknowledged the mistake and issued a refund.


Here's my take:

The article explores the ethical issues surrounding health insurance denials and the violation of state laws. The investigation reveals a pattern of health insurance companies systematically denying coverage for medically necessary treatments, even when such denials directly contravene state laws designed to protect patients. The unethical practices extend to various states, indicating a systemic problem within the industry. Patients are often left in precarious situations, facing financial burdens and health risks due to the denial of essential medical services, raising questions about the industry's commitment to prioritizing patient well-being over profit margins.

The article underscores the need for increased regulatory scrutiny and enforcement to hold health insurance companies accountable for violating state laws and compromising patient care. It highlights the ethical imperative for insurers to prioritize their fundamental responsibility to provide coverage for necessary medical treatments and adhere to the legal frameworks in place to safeguard patient rights. The investigation sheds light on the intersection of profit motives and ethical considerations within the health insurance industry, emphasizing the urgency of addressing these systemic issues to ensure that patients receive the care they require without undue financial or health-related consequences.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H. S., & Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
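
As a rough illustration of what such a risk matrix might look like, here is a minimal Python sketch. It is my own construction, not taken from Sætra and Danaher; the example risks and the 1-3 scoring scale are placeholder assumptions, and the product of likelihood and impact is just one crude way to rank entries.

# Toy risk matrix: each risk gets a likelihood and an impact rating
# (1 = low, 3 = high); their product serves as a crude priority score.
# The entries below are illustrative placeholders, not claims from the paper.
RISKS = {
    "algorithmic bias in deployed systems": {"likelihood": 3, "impact": 2},
    "large-scale AI-generated disinformation": {"likelihood": 2, "impact": 3},
    "loss of control over advanced agents": {"likelihood": 1, "impact": 3},
}

def priority(entry):
    """Crude priority score: likelihood multiplied by impact."""
    return entry["likelihood"] * entry["impact"]

# Print the risks from highest to lowest priority.
for name, entry in sorted(RISKS.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: likelihood={entry['likelihood']}, impact={entry['impact']}, priority={priority(entry)}")

A real assessment would use richer scales and evidence-based estimates, but even a simple table like this makes the trade-off between short- and long-term risks explicit.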

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions themselves for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works only in a subset of a population – or one that has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the outcome of proposed drug compounds is not as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
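
To make this kind of curation check concrete, here is a minimal Python sketch. It is my own illustration, not code from the interview or from any specific product; the sample records, attribute names, and the 10% threshold are all hypothetical.

from collections import Counter

# Hypothetical patient records; in practice these would be drawn from the
# training database used to build the generative model.
records = [
    {"gender": "female", "race": "Black", "age_group": "65+"},
    {"gender": "male", "race": "White", "age_group": "40-64"},
    {"gender": "female", "race": "Asian", "age_group": "18-39"},
    {"gender": "male", "race": "White", "age_group": "40-64"},
]

def representation_report(records, attribute, min_share=0.10):
    """Print each group's share for one attribute and flag groups that fall
    below an arbitrary, illustrative minimum share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{attribute}={group}: {share:.0%}{flag}")

# Audit each demographic attribute in turn.
for attr in ("gender", "race", "age_group"):
    representation_report(records, attr)
    print()

In a real pipeline, a report like this would be run against the full training database, and the threshold would be set in consultation with clinicians and ethicists rather than hard-coded.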


Here is my take:

 One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Saturday, September 2, 2023

Do AI girlfriend apps promote unhealthy expectations for human relationships?

Josh Taylor
The Guardian
Originally posted 21 July 23

Here is an excerpt:

When you sign up for the Eva AI app, it prompts you to create the “perfect partner”, giving you options like “hot, funny, bold”, “shy, modest, considerate” or “smart, strict, rational”. It will also ask if you want to opt in to sending explicit messages and photos.

“Creating a perfect partner that you control and meets your every need is really frightening,” said Tara Hunter, the acting CEO for Full Stop Australia, which supports victims of domestic or family violence. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.”

Dr Belinda Barnet, a senior lecturer in media at Swinburne University, said the apps cater to a need, but, as with much AI, it will depend on what rules guide the system and how it is trained.

“It’s completely unknown what the effects are,” Barnet said. “With respect to relationship apps and AI, you can see that it fits a really profound social need [but] I think we need more regulation, particularly around how these systems are trained.”

Having a relationship with an AI whose functions are set at the whim of a company also has its drawbacks. Replika’s parent company Luka Inc faced a backlash from users earlier this year when the company hastily removed erotic roleplay functions, a move which many of the company’s users found akin to gutting the Rep’s personality.

Users on the subreddit compared the change to the grief felt at the death of a friend. The moderator on the subreddit noted users were feeling “anger, grief, anxiety, despair, depression, [and] sadness” at the news.

The company ultimately restored the erotic roleplay functionality for users who had registered before the policy change date.

Rob Brooks, an academic at the University of New South Wales, noted at the time the episode was a warning for regulators of the real impact of the technology.

“Even if these technologies are not yet as good as the ‘real thing’ of human-to-human relationships, for many people they are better than the alternative – which is nothing,” he said.


My thoughts: Experts worry that these apps could promote unhealthy expectations for human relationships, as users may come to expect their partners to be perfectly compliant and controllable. Additionally, there is concern that these apps could reinforce harmful gender stereotypes and contribute to violence against women.

The full impact of AI girlfriend apps is still unknown, and more research is needed to understand how they shape human relationships. In the meantime, it is important to be aware of their potential harms and to regulate them accordingly.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Thursday, July 20, 2023

Big tech is bad. Big A.I. will be worse.

Daron Acemoglu and Simon Johnson
The New York Times
Originally posted 15 June 23

Here is an excerpt:

Today, those countervailing forces either don’t exist or are greatly weakened. Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.

At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton). For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization, fear of killing the golden (donor) goose or undermining national security means that most members of Congress would still rather look away.

To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.

Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems. If Big A.I. wants to use our data, we want something in return to address problems that communities define and to raise the true productivity of workers. Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies. It would also require a greater diversity of approaches to new technology, thus making another dent in the monopoly of Big A.I.

We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do — including whether we are in compliance with “acceptable” behavior, as defined by employers and how the police interpret the law, and which can now be assessed in real time by A.I. There is a real danger that A.I. will be used to manipulate our choices and distort lives.

Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms. Such a tax system would put shareholder pressure on tech titans to break themselves up, thus lowering their effective tax rate. More competition would help by creating a diversity of ideas and more opportunities to develop a pro-human direction for digital technologies.


The article argues that big tech companies, such as Google, Amazon, and Facebook, have already accumulated too much power and control. I concur: if these companies are allowed to continue their unchecked growth, they will become even more powerful and potentially oppressive, given the strength of AI relative to the limited reasoning capacity of individual human beings.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautilus.us
Originally posted 9 MAY 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Tuesday, May 30, 2023

Are We Ready for AI to Raise the Dead?

Jack Holmes
Esquire Magazine
Originally posted 4 May 23

Here is an excerpt:

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split, even as he asks in two different ways. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks if they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching a more fundamental ethical quandary than we often hear about in discussions around AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.

(cut)

Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”


Yes, there was a Black Mirror episode (Be Right Back) about this issue.

Sunday, March 12, 2023

Growth of AI in mental health raises fears of its ability to run wild

Sabrina Moreno
Axios.com
Originally posted 9 MAR 23

Here's how it begins:

The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.

Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.

  • Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
  • The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.

What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.

  • The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what we tell doctors.
  • It's also predicting opioid addiction risk, detecting mental health disorders like depression and could soon design drugs to treat opioid use disorder.

Driving the news: The fear is now concentrated around whether the technology is beginning to cross a line and make clinical decisions, and what the Food and Drug Administration is doing to prevent safety risks to patients.

  • KoKo, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
  • Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.