Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, December 29, 2024

Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety

Botha, N. N., et al. (2024).
Archives of Public Health, 82(1).

Abstract

Background
The global health system remains determined to leverage every workable opportunity, including artificial intelligence (AI), to provide care that is consistent with patients’ needs. Unfortunately, while AI models generally achieve high accuracy within the trials in which they are trained, their ability to predict and recommend the best course of care for prospective patients is left to chance.

Purpose
This review maps evidence published between January 1, 2010, and December 31, 2023, on the perceived threats that the use of AI tools in healthcare poses to patients’ rights and safety.

Methods
We followed the guidelines of Tricco et al. to conduct a comprehensive search of current literature in Nature, PubMed, Scopus, ScienceDirect, Dimensions AI, Web of Science, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, the World Health Organisation, and Google Scholar. In all, 80 peer-reviewed articles qualified and were included in this study.

Results
We report a real chance of unpredictable errors and an inadequate policy and regulatory regime governing the use of AI technologies in healthcare. Moreover, medical paternalism, increased healthcare costs, disparities in insurance coverage, data security and privacy concerns, and biased and discriminatory services are imminent in the use of AI tools in healthcare.

Conclusions
Our findings have critical implications for achieving Sustainable Development Goals (SDGs) 3.8, 11.7, and 16. We recommend that national governments lead the roll-out of AI tools in their healthcare systems and that other key actors in the healthcare industry contribute to developing policies on the use of AI in healthcare systems.

Here are some thoughts:

This article presents a comprehensive scoping review of the perceived threats that artificial intelligence (AI) in healthcare poses to patient rights and safety. The review analyzes literature published from January 1, 2010, to December 31, 2023, identifying 80 peer-reviewed articles that highlight various concerns associated with AI tools in medical settings.

The review underscores that while AI has the potential to enhance healthcare delivery, it also introduces significant risks. These include unpredictable errors in AI systems, inadequate regulatory frameworks governing AI applications, and the potential for medical paternalism that may diminish patient autonomy. Additionally, the findings indicate that AI could lead to increased healthcare costs and disparities in insurance coverage, alongside serious concerns regarding data security and privacy breaches. The risk of bias and discrimination in AI services is also highlighted, raising alarms about the fairness of care delivered through these technologies.

The authors argue that these challenges have critical implications for achieving Sustainable Development Goals (SDGs) related to universal health coverage and equitable access to healthcare services. They recommend that national governments take the lead in integrating AI tools into healthcare systems while encouraging other stakeholders to contribute to policy development regarding AI usage.

Furthermore, the review emphasizes the need for rigorous scrutiny of AI tools before their deployment, advocating enhanced machine learning protocols to ensure patient safety. It calls for a more active role for patients in their own care and suggests that healthcare managers conduct thorough evaluations of AI technologies before implementation. This scoping review aims to inform future research directions and policy formulations that prioritize patient rights and safety in the evolving landscape of AI in healthcare.