Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, October 23, 2024

Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?

Hatherley, J. (2024).
Journal of Medical Ethics, jme-109905.
https://doi.org/10.1136/jme-2024-109905

Abstract

It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this 'the disclosure thesis.' Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments is unconvincing, and therefore, that the disclosure thesis ought to be rejected. I suggest that mandating disclosure may even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.


Here are some thoughts:

The ethical obligation for clinicians to disclose their use of medical machine learning (ML) systems to patients, known as the 'disclosure thesis,' is widely accepted in healthcare. This article, however, challenges the validity of that thesis by critically examining the four main arguments offered in its support: the risk-based, rights-based, materiality, and autonomy arguments. Each of these arguments has significant shortcomings.

The risk-based argument holds that disclosure mitigates the risks associated with ML systems, but it does not adequately address the complexities of risk management in clinical practice. The rights-based argument posits that patients have a right to know when ML systems are used in their care, yet that right may not translate into meaningful patient understanding or improved outcomes. Similarly, the materiality argument claims that disclosure is necessary for informed consent, but it risks overwhelming patients with information that is not actionable. Lastly, the autonomy argument asserts that disclosure enhances patient autonomy; in practice, however, it could inadvertently diminish autonomy by creating a false sense of security.

The article concludes that mandating disclosure may lead to unintended consequences, such as reducing accountability for harm resulting from improper ML applications. Clinicians and stakeholders might misuse disclosure as a protective measure against responsibility, thus failing to address the underlying issues. Moving forward, the focus should shift from mere disclosure to establishing robust accountability frameworks that genuinely protect patients and foster meaningful understanding of the technologies involved.