Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, April 17, 2025

How do clinical psychologists make ethical decisions? A systematic review of empirical research

Grace, B., Wainwright, T., et al. (2020). 
Clinical Ethics, 15(4), 213–224.

Abstract

Given the nature of the discipline, it might be assumed that clinical psychology is an ethical profession, within which effective ethical decision-making is integral. How then, does this ethical decision-making occur? This paper describes a systematic review of empirical research addressing this question. The paucity of evidence related to this question meant that the scope was broadened to include other professions who deliver talking therapies. This review could support reflective practice about what may be taken into account when making ethical decisions and highlight areas for future research. Using academic search databases, original research articles were identified from peer-reviewed journals. Articles using qualitative (n = 3), quantitative (n = 8) and mixed methods (n = 2) were included. Two theoretical models of aspects of ethical decision-making were identified. Areas of agreement and debate are described in relation to factors linked to the professional, which impacted ethical decision-making. Factors relating to ethical dilemmas, which impacted ethical decision-making, are discussed. Articles were appraised by two independent raters, using quality assessment criteria, which suggested areas of methodological strengths and weaknesses. Comparison and synthesis of results revealed that the research did not generally pertain to current clinical practice of talking therapies or the particular socio-political context of the UK healthcare system. There was limited research into ethical decision-making amongst specific professions, including clinical psychology. Generalisability was limited due to methodological issues, indicating avenues for future research.

Here are some thoughts:

This article is a systematic review of empirical research on how clinical psychologists and related professionals make ethical decisions. The review addresses the question of how professionals who deliver psychotherapy make ethical decisions related to their work. The authors searched academic databases for original research articles from peer-reviewed journals and included qualitative, quantitative, and mixed-methods studies. The review identified two theoretical models of ethical decision-making and discussed factors related to the professional and ethical dilemmas that impact decision-making. The authors found that the research did not generally pertain to current clinical practice or the socio-political context of the UK healthcare system and that there was limited research into ethical decision-making among specific professions, including clinical psychology. The authors suggest that there is a need for further up-to-date, profession-specific, mixed-methods research in this area.

Thursday, April 3, 2025

Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals

Choudhury, A., & Chaudhry, Z. (2024).
Journal of Medical Internet Research, 26, e56764.

Abstract

As the health care industry increasingly embraces large language models (LLMs), understanding the consequence of this integration becomes crucial for maximizing benefits while mitigating potential pitfalls. This paper explores the evolving relationship among clinician trust in LLMs, the transition of data sources from predominantly human-generated to artificial intelligence (AI)–generated content, and the subsequent impact on the performance of LLMs and clinician competence. One of the primary concerns identified in this paper is the LLMs’ self-referential learning loops, where AI-generated content feeds into the learning algorithms, threatening the diversity of the data pool, potentially entrenching biases, and reducing the efficacy of LLMs. While theoretical at this stage, this feedback loop poses a significant challenge as the integration of LLMs in health care deepens, emphasizing the need for proactive dialogue and strategic measures to ensure the safe and effective use of LLM technology. Another key takeaway from our investigation is the role of user expertise and the necessity for a discerning approach to trusting and validating LLM outputs. The paper highlights how expert users, particularly clinicians, can leverage LLMs to enhance productivity by off-loading routine tasks while maintaining a critical oversight to identify and correct potential inaccuracies in AI-generated content. This balance of trust and skepticism is vital for ensuring that LLMs augment rather than undermine the quality of patient care. We also discuss the risks associated with the deskilling of health care professionals. Frequent reliance on LLMs for critical tasks could result in a decline in health care providers’ diagnostic and thinking skills, particularly affecting the training and development of future professionals. The legal and ethical considerations surrounding the deployment of LLMs in health care are also examined. We discuss the medicolegal challenges, including liability in cases of erroneous diagnoses or treatment advice generated by LLMs. The paper references recent legislative efforts, such as The Algorithmic Accountability Act of 2023, as crucial steps toward establishing a framework for the ethical and responsible use of AI-based technologies in health care. In conclusion, this paper advocates for a strategic approach to integrating LLMs into health care. By emphasizing the importance of maintaining clinician expertise, fostering critical engagement with LLM outputs, and navigating the legal and ethical landscape, we can ensure that LLMs serve as valuable tools in enhancing patient care and supporting health care professionals. This approach addresses the immediate challenges posed by integrating LLMs and sets a foundation for their maintainable and responsible use in the future.

The abstract provides a sufficient summary.
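
The self-referential learning loop described in the abstract can be made concrete with a toy simulation. The sketch below (Python; the data, the stand-in "model," and all parameters are hypothetical, not drawn from the paper) shows how repeatedly retraining on a pool that mixes a fixed slice of human data with a model's own biased output tends to shrink the diversity of that pool over successive generations.

# Toy simulation of a self-referential learning loop. Everything here is
# illustrative: the "model" is just frequency sampling with a bias toward
# common tokens, a crude stand-in for mode collapse in generative systems.
import random
from collections import Counter
from math import log2

def entropy(tokens):
    # Shannon entropy of the token pool, used as a rough diversity measure.
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def train_and_generate(pool, n_samples):
    # "Train" on the pool's frequencies, then over-sample the frequent tokens.
    counts = Counter(pool)
    tokens, weights = zip(*[(t, c ** 2) for t, c in counts.items()])
    return random.choices(tokens, weights=weights, k=n_samples)

random.seed(0)
human_data = [random.randint(0, 499) for _ in range(5000)]  # diverse human corpus
pool = list(human_data)

for generation in range(1, 6):
    synthetic = train_and_generate(pool, n_samples=4000)
    # Each generation's training pool mixes a small human slice with AI output.
    pool = human_data[:1000] + synthetic
    print(f"generation {generation}: entropy={entropy(pool):.2f}, "
          f"distinct tokens={len(set(pool))}")

Entropy and the count of distinct tokens fall generation after generation, which is the qualitative pattern the authors flag: as AI-generated content feeds back into training data, the pool narrows and existing biases can become entrenched.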

Thursday, March 27, 2025

How Moral Case Deliberation Supports Good Clinical Decision Making

Inguaggiato, G., et al. (2019).
AMA Journal of Ethics, 21(10), E913-E919.

Abstract

In clinical decision making, facts are presented and discussed, preferably in the context of both evidence-based medicine and patients’ values. Because clinicians’ values also have a role in determining the best courses of action, we argue that reflecting on both patients’ and professionals’ values fosters good clinical decision making, particularly in situations of moral uncertainty. Moral case deliberation, a form of clinical ethics support, can help elucidate stakeholders’ values and how they influence interpretation of facts. This article demonstrates how this approach can help clarify values and contribute to good clinical decision making through a case example.

Here are some thoughts:

This article discusses how moral case deliberation (MCD) supports good clinical decision-making. It argues that while evidence-based medicine and patient values are crucial, clinicians' values also play a significant role, especially in morally uncertain situations. MCD, a form of clinical ethics support, helps clarify the values of all stakeholders and how these values influence the interpretation of facts. The article explains how MCD differs from shared decision-making, emphasizing its focus on ethical dilemmas and understanding moral uncertainty among caregivers rather than reaching a shared decision with the patient. Through dialogue and a structured approach, MCD facilitates a deeper understanding of the situation, leading to better-informed and morally sensitive clinical decisions. The article uses a case study from a neonatal intensive care unit to illustrate how MCD can help resolve disagreements and uncertainties by exploring the different values held by nurses and physicians.

Friday, February 21, 2025

Evaluating trends in private equity ownership and impacts on health outcomes, costs, and quality: systematic review

Borsa, A., Bejarano, G., Ellen, M., & Bruch, J. D. 

Abstract

Objective
To review the evidence on trends and impacts of private equity (PE) ownership of healthcare operators.

Data synthesis 
Studies were classified as finding either beneficial, harmful, mixed, or neutral impacts of PE ownership on main outcome measures. Results across studies were narratively synthesized and reported. Risk of bias was evaluated using ROBINS-I (Risk Of Bias In Non-randomised Studies of Interventions).

Results
The electronic search identified 1778 studies, with 55 meeting the inclusion criteria. Studies spanned eight countries, with most (n=47) analyzing PE ownership of healthcare operators in the US. Nursing homes were the most commonly studied healthcare setting (n=17), followed by hospitals and dermatology settings (n=9 each); ophthalmology (n=7); multiple specialties or general physician groups (n=5); urology (n=4); gastroenterology and orthopedics (n=3 each); surgical centers, fertility, and obstetrics and gynecology (n=2 each); and anesthesia, hospice care, oral or maxillofacial surgery, otolaryngology, and plastics (n=1 each). Across the outcome measures, PE ownership was most consistently associated with increases in costs to patients or payers. Additionally, PE ownership was associated with mixed to harmful impacts on quality. These outcomes held in sensitivity analyses in which only studies with moderate risk of bias were included. Health outcomes showed both beneficial and harmful results, as did costs to operators, but the volume of studies for these outcomes was too low for conclusive interpretation. In some instances, PE ownership was associated with reduced nurse staffing levels or a shift towards lower nursing skill mix. No consistently beneficial impacts of PE ownership were identified.

Conclusions
Trends in PE ownership rapidly increased across almost all healthcare settings studied. Such ownership is often associated with harmful impacts on costs to patients or payers and mixed to harmful impacts on quality. Owing to risk of bias and frequent geographic focus on the US, conclusions might not be generalizable internationally.

Here are some thoughts:

This systematic review examines the increasing trends and impacts of private equity (PE) ownership in healthcare across eight countries, primarily focusing on the US. Analyzing 55 empirical studies, the review assessed PE's influence on health outcomes, costs to patients/payers and operators, and quality of care in settings like nursing homes, hospitals, and dermatology practices. The findings reveal a rapid increase in PE ownership across various healthcare settings, with PE ownership most consistently associated with increased costs to patients or payers and mixed to harmful impacts on quality. While health outcomes and operator costs showed mixed results due to a limited number of studies, some instances linked PE ownership to reduced nurse staffing levels or a shift toward lower nursing skill mix. The review identified no consistently beneficial impacts of PE ownership, leading the authors to conclude that such ownership is often associated with harmful impacts on costs and mixed to harmful impacts on quality. However, they caution that these conclusions might not be generalizable internationally due to the risk of bias in the included studies and the geographic focus on the US, highlighting the need for increased attention and possibly increased regulation.

Friday, February 14, 2025

High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare

Corfmat, M., Martineau, J. T., & Régis, C. (2025).
BMC Med Ethics 26, 4
https://doi.org/10.1186/s12910-024-01158-1

Abstract

Background
Considering the disruptive potential of AI technology, its current and future impact in healthcare, as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help healthcare professionals better navigate the AI wave.

Methods
We analyzed the literature that specifically discusses ethics and law related to the development and implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.

Results
We identified six categories of issues related to AI development and implementation in healthcare: (1) privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work, professions and the job market. While each one raises different questions depending on perspective, we propose three main legal and ethical priorities: education and training of healthcare professionals, offering support and guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of the AI tools themselves.

Conclusions
By highlighting the main ethical and legal issues involved in the development and implementation of AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and trusted AI will be jeopardized.


Here are some thoughts:

This article explores the ethical and legal challenges surrounding artificial intelligence (AI) in healthcare. The authors identify six critical categories of issues: privacy, individual autonomy, bias, responsibility and liability, evaluation and oversight, as well as work and professional impacts.

The research highlights that AI is fundamentally different from previous medical technologies due to its disruptive potential and ability to perform autonomous learning and decision-making. While AI promises significant improvements in areas like biomedical research, precision medicine, and healthcare efficiency, there remains a significant gap between AI system development and practical implementation in healthcare settings.

The authors emphasize that healthcare professionals often lack comprehensive knowledge about AI technologies and their implications. They argue that understanding the nuanced differences between legal and ethical frameworks is crucial for responsible AI integration. Legal rules represent minimal mandatory requirements, while ethical considerations encourage deeper reflection on appropriate behaviors and choices.

The paper suggests three primary priorities for addressing AI's ethical and legal challenges: (1) educating and training healthcare professionals, (2) providing robust support and guidance during AI system use, and (3) integrating ethical and legal considerations directly into AI tool development. Ultimately, the researchers stress the importance of close collaboration between healthcare professionals, ethicists, and legal experts to develop reliable and trustworthy AI technologies.

Thursday, February 13, 2025

New Proposed Health Cybersecurity Rule: What Physicians Should Know

Alicia Ault
Medscape.com
Originally posted 10 Jan 25

A new federal rule could force hospitals and doctors’ groups to boost health cybersecurity measures to better protect patients’ health information and prevent ransomware attacks. Some of the proposed requirements could be expensive for healthcare providers.

The proposed rule, issued by the US Department of Health and Human Services (HHS) and published on January 6 in the Federal Register, marks the first time in a decade that the federal government has updated regulations governing the security of protected health information (PHI) that’s kept or shared online. Comments on the rule are due on March 6.

Because the risks for cyberattacks have increased exponentially, “there is a greater need to invest than ever before in both people and technologies to secure patient information,” Adam Greene, an attorney at Davis Wright Tremaine in Washington, DC, who advises healthcare clients on cybersecurity, told Medscape Medical News.

Bad actors continue to evolve and are often far ahead of their targets, added Mark Fox, privacy and research compliance officer for the American College of Cardiology.

In the proposed rule, HHS noted that breaches have risen by more than 50% since 2020. Damages from health data breaches are more expensive than in any other sector, averaging $10 million per incident, said HHS.


Here are some thoughts:

The article outlines a newly proposed cybersecurity rule aimed at strengthening the protection of healthcare data and systems. This rule is particularly relevant to physicians and healthcare organizations, as it addresses the growing threat of cyberattacks in the healthcare sector. The proposed regulation emphasizes the need for enhanced cybersecurity measures, such as implementing stronger protocols, conducting regular risk assessments, and ensuring compliance with updated standards. For physicians, this means adapting to new requirements that may require additional resources, training, and investment in cybersecurity infrastructure. The rule also highlights the critical importance of safeguarding patient information, as breaches can lead to severe consequences, including identity theft, financial loss, and compromised patient care. Beyond data protection, the rule aims to prevent disruptions to healthcare operations, such as delayed treatments or system shutdowns, which can arise from cyber incidents.

However, while the rule is a necessary step to address vulnerabilities, it may pose challenges for smaller practices or resource-limited healthcare organizations. Compliance could require significant financial and operational adjustments, potentially creating a burden for some providers. Despite these challenges, the proposed rule reflects a broader trend toward stricter cybersecurity regulations across industries, particularly in sectors like healthcare that handle highly sensitive information. It underscores the need for proactive measures to address evolving cyber threats and ensure the long-term security and reliability of healthcare systems. Collaboration between healthcare organizations, cybersecurity experts, and regulatory bodies will be essential to successfully implement these measures and share best practices. Ultimately, while the transition may be demanding, the long-term benefits—such as reduced risk of data breaches, enhanced patient trust, and uninterrupted healthcare services—are likely to outweigh the initial costs.

Friday, February 7, 2025

Physicians’ ethical concerns about artificial intelligence in medicine: a qualitative study: “The final decision should rest with a human”

Kahraman, F., et al. (2024).
Frontiers in Public Health, 12.

Abstract

Background/aim: Artificial Intelligence (AI) is the capability of computational systems to perform tasks that require human-like cognitive functions, such as reasoning, learning, and decision-making. Unlike human intelligence, AI does not involve sentience or consciousness but focuses on data processing, pattern recognition, and prediction through algorithms and learned experiences. In healthcare including neuroscience, AI is valuable for improving prevention, diagnosis, prognosis, and surveillance.

Methods: This qualitative study aimed to investigate the acceptability of AI in Medicine (AIIM) and to elucidate any technical and scientific, as well as social and ethical issues involved. Twenty-five doctors from various specialties were carefully interviewed regarding their views, experience, knowledge, and attitude toward AI in healthcare.

Results: Content analysis confirmed the key ethical principles involved: confidentiality, beneficence, and non-maleficence. Honesty was the least invoked principle. A thematic analysis established four salient topic areas, i.e., advantages, risks, restrictions, and precautions. Alongside the advantages, there were many limitations and risks. The study revealed a perceived need for precautions to be embedded in healthcare policies to counter the risks discussed. These precautions need to be multi-dimensional.

Conclusion: The authors conclude that AI should be rationally guided, function transparently, and produce impartial results. It should assist human healthcare professionals collaboratively. This kind of AI will permit fairer, more innovative healthcare which benefits patients and society whilst preserving human dignity. It can foster accuracy and precision in medical practice and reduce the workload by assisting physicians during clinical tasks. AIIM that functions transparently and respects the public interest can be an inspiring scientific innovation for humanity.

Here are some thoughts:

The integration of Artificial Intelligence (AI) in healthcare presents a complex landscape of potential benefits and significant ethical concerns. On one hand, AI offers advantages such as error reduction, increased diagnostic speed, and the potential to alleviate the workload of healthcare professionals, allowing them more time for complex cases and patient interaction. These advancements could lead to improved patient outcomes and more efficient healthcare delivery.

However, ethical issues loom large. Privacy is a paramount concern, as the sensitive nature of patient data necessitates robust security measures to prevent misuse. The question of responsibility in AI-driven decision-making is also fraught with ambiguity, raising legal and ethical dilemmas about accountability in case of errors.

There is a legitimate fear of unemployment among healthcare professionals, though it is more about AI augmenting rather than replacing human capabilities. The human touch in medicine, encompassing empathy and trust-building, is irreplaceable and must be preserved.

Education and regulation are crucial for the ethical integration of AI. Healthcare professionals and patients need to understand AI's role and limitations, with clear rules to ensure ethical use. Bias in AI algorithms, potentially exacerbating health disparities, must be addressed through diverse development teams and continuous monitoring.

Transparency is essential for trust, with patients informed about AI's role in their care and doctors capable of explaining AI decisions. Legal implications, such as data ownership and patient consent, require policy attention.

Economically, AI could enhance healthcare efficiency, but its impact on costs and accessibility needs careful consideration. International collaboration is vital for uniform standards and fairness globally.

Tuesday, February 4, 2025

Advancing AI Data Ethics in Nursing: Future Directions for Nursing Practice, Research, and Education

Dunlap, P. A. B., & Michalowski, M. (2024).
JMIR Nursing, 7, e62678.

Abstract

The ethics of artificial intelligence (AI) are increasingly recognized due to concerns such as algorithmic bias, opacity, trust issues, data security, and fairness. Specifically, machine learning algorithms, central to AI technologies, are essential in striving for ethically sound systems that mimic human intelligence. These technologies rely heavily on data, which often remain obscured within complex systems and must be prioritized for ethical collection, processing, and usage. The significance of data ethics in achieving responsible AI was first highlighted in the broader context of health care and subsequently in nursing. This viewpoint explores the principles of data ethics, drawing on relevant frameworks and strategies identified through a formal literature review. These principles apply to real-world and synthetic data in AI and machine-learning contexts. Additionally, the data-centric AI paradigm is briefly examined, emphasizing its focus on data quality and the ethical development of AI solutions that integrate human-centered domain expertise. The ethical considerations specific to nursing are addressed, including 4 recommendations for future directions in nursing practice, research, and education and 2 hypothetical nurse-focused ethical case studies. The primary objectives are to position nurses to actively participate in AI and data ethics, thereby contributing to creating high-quality and relevant data for machine learning applications.

Here are some thoughts:

The article explores integrating AI in nursing, focusing on ethical considerations vital to patient trust and care quality. It identifies risks like bias, data privacy issues, and the erosion of human-centered care. The paper argues for interdisciplinary frameworks and education to help nurses navigate these challenges. Ethics ensure AI aligns with professional values, safeguarding equity, autonomy, and informed decision-making. With thoughtful integration, AI can empower nursing while upholding ethical standards.

Thursday, January 30, 2025

Advancements in AI-driven Healthcare: A Comprehensive Review of Diagnostics, Treatment, and Patient Care Integration

Kasula, B. Y. (2024, January 18).
International Journal of Machine Learning for Sustainable Development, 6(1).

Abstract

This research paper presents a comprehensive review of the recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. Ethical considerations and challenges associated with AI adoption in healthcare are also discussed. The paper concludes with insights into the potential future developments and the transformative impact of AI on the healthcare landscape.


Here are some thoughts:

This research paper provides a comprehensive review of recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. It discusses the transformative impact of AI on healthcare, highlighting key achievements, challenges, and ethical considerations associated with its widespread adoption.

The paper examines AI's role in improving diagnostic accuracy, particularly in medical imaging, and its contribution to developing personalized treatment plans. It also addresses the ethical dimensions of AI in healthcare, including patient privacy, data security, and equitable distribution of AI-driven healthcare benefits. The research emphasizes the need for a holistic approach to AI integration in healthcare, calling for collaboration between healthcare professionals, technologists, and policymakers to navigate the evolving landscape successfully.

It is important for psychologists to understand the content of this article for several reasons. Firstly, AI is increasingly being applied in mental health diagnosis and treatment, as mentioned in the paper's references. Psychologists need to be aware of these advancements to stay current in their field and potentially incorporate AI-driven tools into their practice. Secondly, the ethical considerations discussed in the paper, such as patient privacy and data security, are equally relevant to psychological practice. Understanding these issues can help psychologists navigate the ethical challenges that may arise with the integration of AI in mental health care.

Moreover, the paper's emphasis on personalized medicine and treatment plans is particularly relevant to psychology, where individualized approaches are often crucial. By understanding AI's potential in this area, psychologists can explore ways to enhance their treatment strategies and improve patient outcomes. Lastly, as healthcare becomes increasingly interdisciplinary, psychologists need to be aware of technological advancements in other medical fields to collaborate effectively with other healthcare professionals and provide comprehensive care to their patients.

Saturday, December 14, 2024

Suicides in the US military increased in 2023, continuing a long-term trend

Lolita C. Baldor
Associated Press
Originally posted 14 Nov 24

Suicides in the U.S. military increased in 2023, continuing a long-term trend that the Pentagon has struggled to abate, according to a Defense Department report released on Thursday. The increase is a bit of a setback after the deaths dipped slightly the previous year.

The number of suicides and the rate per 100,000 active-duty service members went up, but the rise was not statistically significant. The number also went up among members of the Reserves, while it decreased a bit for the National Guard.

Defense Secretary Lloyd Austin has declared the issue a priority, and top leaders in the Defense Department and across the services have worked to develop programs both to increase mental health assistance for troops and bolster education on gun safety, locks and storage. Many of the programs, however, have not been fully implemented yet, and the moves fall short of more drastic gun safety measures recommended by an independent commission.


Here are some thoughts:

The report from the Associated Press focuses on the rise in suicide rates among U.S. military personnel in 2023. Despite efforts by the Pentagon to reduce these numbers, the suicide rate increased, although the rise was not statistically significant. This follows a trend of increasing suicides among active-duty members since 2011.

The article highlights the ongoing efforts to address the problem, including increasing access to mental health care and promoting gun safety measures, but also points to an independent commission’s recommendation for more drastic gun safety regulations that have not yet been implemented. It closes by reviewing the overall trend in suicide rates among service members and their families and by noting that mental health support is available through the 988 Lifeline.

Saturday, November 2, 2024

Medical AI Caught Telling Dangerous Lie About Patient's Medical Record

Victor Tangerman
Futurism.com
Originally posted 28 Sept 24

Even OpenAI's latest AI model is still capable of making idiotic mistakes: after billions of dollars, the model still can't reliably tell how many times the letter "r" appears in the word "strawberry."

And while "hallucinations" — a conveniently anthropomorphizing word used by AI companies to denote bullshit dreamed up by their AI chatbots — aren't a huge deal when, say, a student gets caught with wrong answers in their assignment, the stakes are a lot higher when it comes to medical advice.

A communications platform called MyChart sees hundreds of thousands of messages being exchanged between doctors and patients a day, and the company recently added a new AI-powered feature that automatically drafts replies to patients' questions on behalf of doctors and assistants.

As the New York Times reports, roughly 15,000 doctors are already making use of the feature, despite the possibility of the AI introducing potentially dangerous errors.

Case in point, UNC Health family medicine doctor Vinay Reddy told the NYT that an AI-generated draft message reassured one of his patients that she had gotten a hepatitis B vaccine — despite never having access to her vaccination records.

Worse yet, the new MyChart tool isn't required to divulge that a given response was written by an AI. That could make it nearly impossible for patients to realize that they were given medical advice by an algorithm.


Here are some thoughts:

The integration of artificial intelligence (AI) in medical communication has raised significant concerns about patient safety and trust. Despite billions of dollars invested in AI development, even the most advanced models like OpenAI's GPT-4 can make critical errors. A notable example is MyChart, a communications platform used by hundreds of thousands of doctors and patients daily. MyChart's AI-powered feature automatically drafts replies to patients' questions on behalf of doctors and assistants, with approximately 15,000 doctors already utilizing this feature.

However, this technology poses significant risks. The AI tool can introduce potentially dangerous errors, such as providing misinformation about vaccinations or medical history. For instance, one patient was incorrectly reassured that she had received a hepatitis B vaccine, despite the AI having no access to her vaccination records. Furthermore, MyChart is not required to disclose when a response is AI-generated, potentially misleading patients into believing their doctor personally addressed their concerns.

Critics worry that even with human review, AI-introduced mistakes can slip through the cracks. Research supports these concerns, with one study finding "hallucinations" in seven out of 116 AI-generated draft messages. Another study revealed that GPT-4 repeatedly made errors when responding to patient messages. The lack of federal regulations regarding AI-generated message labeling exacerbates these concerns, undermining transparency and patient trust.
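
One technical mitigation implied by these concerns is a hard gate that forces clinician sign-off and an explicit AI-assistance disclosure before any AI-drafted reply is sent. The sketch below is a hypothetical workflow written for illustration; it is not MyChart's actual implementation, and the message text is invented.

# Hypothetical disclosure-and-review gate for AI-drafted patient messages.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftReply:
    text: str
    ai_generated: bool
    clinician_approved: bool = False

def approve(draft: DraftReply, edited_text: Optional[str] = None) -> DraftReply:
    # A clinician must explicitly review, and may edit, every AI-drafted reply.
    if edited_text is not None:
        draft.text = edited_text
    draft.clinician_approved = True
    return draft

def send(draft: DraftReply) -> str:
    # Refuse to send unreviewed drafts; label AI-assisted ones for the patient.
    if not draft.clinician_approved:
        raise ValueError("AI-drafted replies require clinician sign-off before sending")
    disclosure = ("\n\n[This reply was drafted with AI assistance and reviewed "
                  "by your care team.]")
    return draft.text + (disclosure if draft.ai_generated else "")

draft = DraftReply(
    text="We do not have a hepatitis B vaccine on file for you; please call "
         "the office so we can verify your records.",
    ai_generated=True,
)
print(send(approve(draft)))

Even a gate this simple makes the two failure modes described above harder to hit: silent AI authorship and unreviewed drafts reaching patients.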

Wednesday, October 30, 2024

Physician Posttraumatic Stress Disorder During COVID-19

Kamra, M., Dhaliwal, S., et al. (2024).
JAMA Network Open, 7(7), e2423316.

Abstract

Importance  The COVID-19 pandemic placed many physicians in situations of increased stress and challenging resource allocation decisions. Insight into the prevalence of posttraumatic stress disorder in physicians and its risk factors during the COVID-19 pandemic will guide interventions to prevent its development.

Objective  To determine the prevalence of posttraumatic stress disorder (PTSD) among physicians during the COVID-19 pandemic and examine variations based on factors, such as sex, age, medical specialty, and career stage.

Data Sources  A Preferred Reporting Items for Systematic Reviews and Meta-analyses–compliant systematic review was conducted, searching MEDLINE, Embase, and PsycInfo, from December 2019 to November 2022. Search terms included MeSH (medical subject heading) terms and keywords associated with physicians as the population and PTSD.

Conclusions and Relevance  In this meta-analysis examining PTSD during COVID-19, 18.3% of physicians reported symptoms consistent with PTSD, with a higher risk in female physicians, older physicians, and trainees, and with variation by specialty. Targeted interventions to support physician well-being during traumatic events like pandemics are required.

Key Points

Question  What is the prevalence of posttraumatic stress disorder (PTSD) among physicians during the COVID-19 pandemic, and how does this vary based on factors such as sex?

Findings  In this systematic review and meta-analysis of 57 studies with 28 965 participants, a higher PTSD prevalence among physicians was found compared with the reported literature on the prevalence before the COVID-19 pandemic and the general population. Women and medical trainees were significantly more likely to develop PTSD, and emergency and family medicine specialties tended to report higher prevalence.

Meaning  These findings suggest that physicians were more likely to experience PTSD during the COVID-19 pandemic, which highlights the importance of further research and policy reform to uphold physician wellness practices.

Sunday, October 27, 2024

Care robot literacy: integrating AI ethics and technological literacy in contemporary healthcare

Turja, T., et al.
AI Ethics (2024). 

Abstract

Healthcare work is guided by care ethics, and any technological changes, including the use of robots and artificial intelligence (AI), must comply with existing norms, values and work practices. By bridging technological literacy and AI ethics, this study provides a nuanced definition and an integrative conceptualization of care robot literacy (CRL) for contemporary care work. Robotized care tasks require new orientation and qualifications on the part of employees. CRL is considered as one of these new demands, which requires practitioners to have the resources, skills and understanding necessary to work with robots. This study builds on sociotechnical approach of literacy by highlighting a dynamic relationship of care robotization in which successful human–technology interaction relies on exchanges between the technological and the social. Our findings from directed content analysis and theoretical synthesis of in-demand technological literacy and AI ethics in care work emphasize competencies and situational awareness regarding both using the robot and communicating about the care robot. The initial conceptualization of CRL provides a conceptual framework for future studies, implementation and product development of care robots, drastically differing from studying, implementing and developing robots in general. In searching for technologically sound and ethically compliant solutions, the study advocates for the future significance of context-specific CRL as valuable addition to the terminology of ethical AI in healthcare.

Here are some thoughts:

Healthcare work is fundamentally guided by care ethics, which must be upheld as robots and artificial intelligence (AI) are integrated into care settings. Any technological advancements in healthcare must align with existing norms, values, and work practices to ensure that ethical care delivery is maintained. This highlights the importance of a thoughtful approach to the incorporation of technology in healthcare environments.

A novel concept emerging from this discourse is Care Robot Literacy (CRL), which bridges technological literacy and AI ethics. CRL encompasses the resources, skills, and understanding necessary for healthcare practitioners to work effectively with robots in their care practices. As robotized care tasks require new orientations and qualifications from employees, CRL becomes essential for equipping practitioners with the competencies needed to navigate this evolving landscape.

This study adopts a sociotechnical approach to CRL, emphasizing the dynamic relationship between care robotization and human-technology interaction. Successful integration of robots in healthcare relies on effective exchanges between technological capabilities and social factors. This interplay is crucial for fostering an environment where both patients and practitioners can benefit from technological advancements.

Key components of CRL include practical skills for operating robots and the ability to communicate about their use within care settings. These competencies are vital for ensuring that healthcare workers can not only utilize robotic systems effectively but also articulate their roles and benefits to patients and colleagues alike.

The implications of CRL extend beyond mere technical skills; it serves as a valuable occupational asset that encompasses digital proficiency, ethical awareness, and situational understanding. These elements are critical for supporting patient safety and well-being, particularly in an increasingly automated healthcare environment where the quality of care must remain a top priority.

Looking ahead, the initial conceptualization of CRL provides a framework for future studies, implementation strategies, and product development specific to care robots. As healthcare seeks technologically sound and ethically compliant solutions, CRL is positioned to become an integral part of the terminology and practice surrounding ethical AI in healthcare. 

Friday, October 25, 2024

Remember That DNA You Gave 23andMe?

Kristen V. Brown
The Atlantic
Originally published 27 Sept 24

23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company’s CEO. Amid this downward spiral, Wojcicki has said she’ll consider selling 23andMe—which means the DNA of 23andMe’s 15 million customers would be up for sale, too.

23andMe’s trove of genetic data might be its most valuable asset. For about two decades now, since human-genome analysis became quick and common, the A’s, C’s, G’s, and T’s of DNA have allowed long-lost relatives to connect, revealed family secrets, and helped police catch serial killers. Some people’s genomes contain clues to what’s making them sick, or even, occasionally, how their disease should be treated. For most of us, though, consumer tests don’t have much to offer beyond a snapshot of our ancestors’ roots and confirmation of the traits we already know about. (Yes, 23andMe, my eyes are blue.) 23andMe is floundering in part because it hasn’t managed to prove the value of collecting all that sensitive, personal information. And potential buyers may have very different ideas about how to use the company’s DNA data to raise the company’s bottom line. This should concern anyone who has used the service.


Here are some thoughts:

Privacy and Data Security

The potential sale of 23andMe, including its vast database of genetic information from 15 million customers, is deeply troubling from a privacy perspective. Genetic data is highly sensitive and personal, containing information not just about individuals but also their relatives. The fact that this data could change hands without clear protections or consent from customers is alarming.

Consent and Transparency

23andMe's privacy policies allow for changes in data usage terms, which means customers who provided DNA samples under one set of expectations may find their data used in ways they never anticipated or agreed to. This lack of long-term control over one's genetic information raises serious questions about informed consent.

Commercialization of Personal Health Data

The company's struggle to monetize its genetic database highlights the ethical challenges of commercializing personal health information. While genetic data can be valuable for medical research and drug development, using it primarily as a financial asset rather than for the benefit of individuals or public health is ethically questionable.

Regulatory Gaps

Unlike traditional healthcare providers, 23andMe is not bound by HIPAA regulations, leaving a significant gap in legal protections for this sensitive data. This regulatory vacuum underscores the need for updated laws that address the unique challenges posed by large-scale genetic databases.

Implications and Conclusion

The potential sale of 23andMe sets a concerning precedent for how genetic data might be treated in corporate transactions. It raises questions about the long-term security and use of personal genetic information, especially as our understanding of genetics and its applications in healthcare continue to evolve.

In conclusion, the 23andMe situation serves as a stark reminder of the complex ethical landscape surrounding genetic testing and data. It highlights the urgent need for stronger regulations, more transparent practices, and a broader societal discussion about the appropriate use and protection of genetic information.

Tuesday, October 22, 2024

Pennsylvania health system agrees to $65 million settlement after hackers leaked nude photos of cancer patients

Sean Lyngass
CNN.com
Originally posted 23 Sept 24

A Pennsylvania health care system this month agreed to pay $65 million to victims of a February 2023 ransomware attack after hackers posted nude photos of cancer patients online, according to the victims’ lawyers.

It’s the largest settlement of its kind in terms of per-patient compensation for victims of a cyberattack, according to Saltz Mongeluzzi Bendesky, a law firm representing the plaintiffs.

The settlement, which is subject to approval by a judge, is a warning to other big US health care providers that the most sensitive patient records they hold are of enormous value to both hackers and the patients themselves, health care cyber experts told CNN. Eighty percent of the $65-million settlement is set aside for victims whose nude photos were published online.

The settlement “shifts the legal, insurance and adversarial ecosystem,” said Carter Groome, chief executive of cybersecurity firm First Health Advisory. “If you’re protecting health data as a crown jewel — as you should be — images or photos are going to need another level of compartmentalized protection.”

It’s a potentially continuous cycle where hackers increasingly seek out the most sensitive patient data to steal, and health care providers move to settle claims out of courts to avoid “ongoing reputational harm,” Groome told CNN.

According to the lawsuit, a cybercriminal gang stole nude photos of cancer patients last year from Lehigh Valley Health Network, which comprises 15 hospitals and health centers in eastern Pennsylvania. The hackers demanded a ransom payment and when Lehigh refused to pay, they leaked the photos online.

The lawsuit, filed on behalf of a Pennsylvania woman and others whose nude photos were posted online, said that Lehigh Valley Health Network needed to be held accountable “for the embarrassment and humiliation” it had caused plaintiffs.

“Patient, physician, and staff privacy is among our top priorities, and we continue to enhance our defenses to prevent incidents in the future,” Lehigh Valley Health Network said in a statement to CNN on Monday.


Here are some thoughts:

The ransomware attack on Lehigh Valley Health Network raises significant ethical and healthcare concerns. The exposure of nude photos of cancer patients is a profound breach of trust and privacy, causing significant emotional distress and psychological harm. Healthcare providers have a duty of care to protect patient data and must be held accountable for their failure to do so. The decision to pay a ransom is ethically complex, as it can incentivize further attacks and potentially jeopardize patient safety. The frequency and severity of ransomware attacks highlight the urgent need for stronger cybersecurity measures in the healthcare sector. By addressing these ethical and practical considerations, healthcare organizations can better safeguard patient information and ensure the delivery of high-quality care.

Saturday, August 24, 2024

The impact of generative artificial intelligence on socioeconomic inequalities and policy making

Capraro, V., Lentsch, A., et al. (2024).
PNAS Nexus, 3(6).

Abstract

Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.

Here are some thoughts:

Generative AI stands to radically reshape society, yet its ultimate impact hinges on our choices. This powerful technology offers immense potential for improving information access, education, and healthcare. However, it also poses significant risks, including job displacement, increased inequality, and the spread of misinformation. To fully harness AI's benefits while mitigating its drawbacks, we must urgently address critical research questions and develop a robust regulatory framework. The decisions we make today about AI will have far-reaching consequences for generations to come.

Sunday, June 9, 2024

Artificial Intelligence Feedback on Physician Notes Improves Patient Care

NYU Langone Health
Research, Innovation
Originally posted 17 APR 24

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients’ future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors’ clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors’ notes achieved the “5 Cs”: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients’ future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.


The article is linked above.  Here is the abstract:

Abstract

Electronic health records have become an integral part of modern health care, but their implementation has led to unintended consequences, such as poor note quality. This case study explores how NYU Langone Health leveraged artificial intelligence (AI) to address the challenge to improve the content and quality of medical documentation. By quickly and accurately analyzing large volumes of clinical documentation and providing feedback to organizational leadership and individually to providers, AI can help support a culture of continuous note quality improvement, allowing organizations to enhance a critical component of patient care.
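
As a rough illustration of the kind of pipeline the case study describes, the sketch below pairs crude "5 Cs" checklist heuristics with an LLM-drafted narrative critique. It is a minimal sketch under stated assumptions: the heuristics are invented for illustration rather than NYU Langone's actual scoring model, and call_llm is a hypothetical wrapper for whatever chat-completion API an organization uses.

# Illustrative note-quality feedback: heuristic "5 Cs" scores plus an
# LLM-written critique. The heuristics and call_llm() are hypothetical.
def heuristic_grade(note: str) -> dict:
    text = note.lower()
    words = text.split()
    return {
        "completeness": min(1.0, len(words) / 150),        # enough detail?
        "conciseness": 1.0 if len(words) < 400 else 0.5,    # not bloated?
        "contingency planning": 1.0 if "follow-up" in text else 0.0,
        "correctness": None,  # requires chart review; heuristics cannot judge it
        "clinical assessment": 1.0 if "assessment" in text else 0.0,
    }

def call_llm(prompt: str) -> str:
    # Hypothetical stub; replace with your organization's LLM provider call.
    return "[narrative feedback from the LLM would appear here]"

def narrative_feedback(note: str) -> str:
    scores = heuristic_grade(note)
    prompt = (
        "You are reviewing a clinical note for documentation quality.\n"
        f"Checklist scores for the 5 Cs: {scores}\n"
        "Give three short, specific suggestions to improve the note:\n\n" + note
    )
    return call_llm(prompt)

sample = "Assessment: stable on metformin. Plan: recheck A1c; follow-up in 3 months."
print(heuristic_grade(sample))
print(narrative_feedback(sample))

In the case study, the grading model is trained rather than rule-based and the chatbot's critique is itself monitored for accuracy, but the shape of the pipeline, a score first and then narrative feedback to the physician, is the same.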

Saturday, June 8, 2024

A Doctor at Cigna Said Her Bosses Pressured Her to Review Patients’ Cases Too Quickly

P. Rucker and D. Armstrong
Propublica.org
Originally posted 29 APR 24

Here is an excerpt:

As ProPublica and The Capitol Forum reported last year, Cigna built a computer program that allowed its medical directors to deny certain claims in bulk. The insurer’s doctors spent an average of just 1.2 seconds on each of those cases. Cigna at the time said the review system was created to speed up approval of claims for certain routine screenings; the company later posted a rebuttal to the story. A congressional committee and the Department of Labor launched inquiries into this Cigna program. A spokesperson for Rep. Cathy McMorris Rodgers, the chair of the congressional committee, said Rodgers continues to monitor the situation after Cigna shared some details about its process. The Labor Department is still examining such practices.

One figure on Cigna’s January and February 2022 dashboards was like a productivity score; the news organizations found that this number reflects the pace at which a medical director clears cases.

Cigna said it was incorrect to call that figure on its dashboard a productivity score and said its “view on productivity is defined by a range of factors beyond elements included in a single spreadsheet.” In addition, the company told the news organizations, “The copy of the dashboard that you have is inaccurate and secondary calculations made using its contents may also be inaccurate.” The news organizations asked what was inaccurate, but the company wouldn’t elaborate.

Nevertheless, Cigna said that because the dashboard created “inadvertent confusion” the company was “reassessing its use.”


Here is my summary:

The article reports on Dr. Debby Day, who alleges that Cigna, her employer, pressured her to prioritize speed over thoroughness when reviewing patients' requests for healthcare coverage.

According to Day, managers emphasized meeting quotas and processing claims quickly, even if it meant superficially reviewing cases. Dr. Day said Cigna expected medical directors to review cases in as little as 4 minutes, which she felt was too rushed to properly evaluate them.  The pressure to deny claims quickly was nicknamed "click and close" by some employees.

Day felt this practice compromised patient care and refused to expedite reviews at the expense of quality. The article suggests this may have led to threats of termination from Cigna.

Sunday, May 12, 2024

How patients experience respect in healthcare: findings from a qualitative study among multicultural women living with HIV

Fernandez, S.B., Ahmad, A., Beach, M.C. et al.
BMC Med Ethics 25, 39 (2024).

Abstract

Background
Respect is essential to providing high quality healthcare, particularly for groups that are historically marginalized and stigmatized. While ethical principles taught to health professionals focus on patient autonomy as the object of respect for persons, limited studies explore patients’ views of respect. The purpose of this study was to explore the perspectives of a multiculturally diverse group of low-income women living with HIV (WLH) regarding their experience of respect from their medical physicians.

Methods
We analyzed 57 semi-structured interviews conducted at HIV case management sites in South Florida as part of a larger qualitative study that explored practices facilitating retention and adherence in care. Women were eligible to participate if they identified as African American (n = 28), Hispanic/Latina (n = 22), or Haitian (n = 7). They were asked to describe instances when they were treated with respect by their medical physicians. Interviews were conducted by a fluent research interviewer in either English, Spanish, or Haitian Creole, depending on participant’s language preference. Transcripts were translated, back-translated and reviewed in entirety for any statements or comments about “respect.” After independent coding by 3 investigators, we used a consensual thematic analysis approach to determine themes.

Results
Results from this study grouped into two overarching classifications: respect manifested in physicians’ orientation towards the patient (i.e., interpersonal behaviors in interactions) and respect in medical professionalism (i.e., clinic procedures and practices). Four main themes emerged regarding respect in provider’s orientation towards the patient: being treated as a person, treated as an equal, treated without blame or prejudice, and treated with concern/emotional support. Two main themes emerged regarding respect as evidenced in medical professionalism: physician availability and considerations of privacy.

Conclusions
Findings suggest a more robust conception of what ‘respect for persons’ entails in medical ethics for a diverse group of low-income women living with HIV. Findings have implications for broadening areas of focus of future bioethics education, training, and research to include components of interpersonal relationship development, communication, and clinic procedures. We suggest these areas of training may increase respectful medical care experiences and potentially serve to influence persistent and known social and structural determinants of health through provider interactions and health care delivery.


Here is my summary:

The study explored how multicultural women living with HIV experience respectful treatment in healthcare settings.  Researchers found that these women define respect in healthcare as feeling like a person, not just a disease statistic, and being treated as an equal partner in their care. This includes being listened to, having their questions answered, and being involved in decision-making.  The study also highlighted the importance of providers avoiding judgment and blame, and showing concern for the emotional well-being of patients.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 FEB 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.

The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.