Earp, B. D., et al. (2024). A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. The American Journal of Bioethics, 24(7), 13–26.
Abstract
When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient’s (former) autonomy since it draws on the ‘wrong’ kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently ‘fine-tuned’ on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient’s preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient’s own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
Commentary
This article introduces the Personalized Patient Preference Predictor (P4), a successor to the previously proposed Patient Preference Predictor (PPP). The P4 is designed to address the challenges of making substituted judgments for incapacitated patients in healthcare settings. Unlike the PPP, which relies on population-level data to predict patient preferences, the P4 uses machine learning techniques, such as fine-tuning large language models on person-specific data (e.g., prior treatment decisions and other elements of a patient's digital footprint), to infer an individual patient's preferences more accurately.
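To make the fine-tuning idea concrete, here is a minimal sketch of how a patient-specific model might be produced with parameter-efficient fine-tuning (LoRA). The article does not specify an implementation; the base model, the example records, the output path, and all hyperparameters below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch (not from the article): parameter-efficient fine-tuning
# of a small causal language model on person-specific text, e.g., a patient's
# prior written statements. All names and settings here are illustrative.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stand-in for any base large language model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which is what makes per-patient fine-tuning comparatively cheap.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
)

# Invented examples of person-specific records (prior treatment decisions,
# advance-care notes) standing in for a patient's real digital footprint.
records = [
    "2019 advance directive: declined mechanical ventilation.",
    "Told my GP I value time at home with family over aggressive treatment.",
]
dataset = Dataset.from_dict({"text": records}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="p4-adapter",  # hypothetical output path
        num_train_epochs=3,
        per_device_train_batch_size=1,
    ),
    train_dataset=dataset,
    # mlm=False yields standard next-token (causal LM) training labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # produces a patient-specific adapter over the shared base model
```

One reason an adapter-based design is attractive for a P4 is that only a small set of weights is trained and stored per patient, while the expensive base model is shared across all patients.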
The authors argue that the P4 is both technically feasible and ethically desirable, and that it addresses some of the main autonomy-based criticisms of the original PPP. Because it is trained on individual-specific data, the P4 should more directly reflect each patient's own reasons and values, potentially improving the accuracy of substituted judgments while respecting the patient's (former) autonomy. The article reviews recent advances in artificial intelligence that suggest a P4 is technically feasible, and considers various objections to the proposal along with tentative replies.
It is important for psychologists to understand the content of this article for several reasons. First, the P4 represents a significant advance in medical decision-making for incapacitated patients, with implications for patient care, autonomy, and mental health; psychologists working in healthcare settings may encounter situations where such tools could help guide treatment decisions. Second, the ethical considerations surrounding the use of AI and machine learning in healthcare decision-making are crucial for psychologists to grasp, as they may be called upon to contribute to discussions about the implementation and use of such technologies. Finally, understanding the potential of personalized predictive models like the P4 could inform psychological research and practice, particularly in areas related to decision-making, patient preferences, and the intersection of technology and mental health care.