Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, January 24, 2025

Ethical Considerations for Using AI to Predict Suicide Risk

Faith Wershba
The Hastings Center
Originally published 9 Dec 24

Those who have lost a friend or family member to suicide frequently express remorse that they did not see it coming. One often hears, “I wish I would have known” or “I wish I could have done something to help.” Suicide is one of the leading causes of death in the United States, and with suicide rates rising, the need for effective screening and prevention strategies is urgent.

Unfortunately, clinician judgment has not proven very reliable when it comes to predicting patients’ risk of attempting suicide. A 2016 meta-analysis from the American Psychological Association concluded that, on average, clinicians’ ability to predict suicide risk was no better than chance. Predicting suicide risk is a complex and high-stakes task, and while there are a number of known risk factors that correlate with suicide attempts at the population level, the presence or absence of a given risk factor may not reliably predict an individual’s risk of attempting suicide. Moreover, there are likely unknown risk factors that interact to modify risk. For these reasons, patients who qualify as high-risk may not be identified by existing assessments.

Can AI do better? Some researchers are trying to find out by turning to big data and machine learning algorithms. These algorithms are trained on medical records from large cohorts of patients who have either attempted or died by suicide (“cases”) or who have never attempted suicide (“controls”). An algorithm combs through these data to identify patterns and extract features that correlate strongly with suicidality, updating itself iteratively to increase predictive accuracy. Once the algorithm has been trained and its performance confirmed on held-out test data, the hope is that it can be applied to predict suicide risk in individual patients.
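
To make the case/control setup concrete, here is a minimal sketch of how such a model might be trained and evaluated. The file name, feature table, and model choice are illustrative assumptions, not the methods of any study discussed in the article; real systems use far richer EHR-derived features and careful temporal validation.

```python
# Minimal sketch of the case/control training setup described above.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per patient, label 1 = case (suicide attempt
# or death by suicide), 0 = control (no recorded attempt).
df = pd.read_csv("ehr_cohort.csv")
y = df["label"]
X = df.drop(columns=["label", "patient_id"])

# Hold out a test set so predictive accuracy is estimated on patients
# the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Discrimination on held-out data; AUC alone says little about how the
# model behaves at the very low base rates typical of suicide.
scores = model.predict_proba(X_test)[:, 1]
print("Test AUC:", roc_auc_score(y_test, scores))
```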

Here are some thoughts:

The article explores the potential benefits and ethical challenges associated with leveraging artificial intelligence (AI) in suicide risk assessment. AI algorithms, which analyze extensive patient data to identify patterns indicating heightened suicide risk, hold promise for enhancing early intervention efforts. However, the integration of AI into clinical practice raises significant ethical and practical considerations that psychologists must navigate.

One critical concern is the accuracy and reliability of AI predictions. While AI has demonstrated potential in identifying suicide risk, its outputs are not infallible: any model will produce false positives and false negatives, and overreliance on its flags without clinical judgment can undermine the quality of care patients receive. Psychologists must weigh AI insights against their own expertise to ensure accurate and ethical decision-making.
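
One reason false positives loom so large is the low base rate of suicide among screened patients. The numbers below are purely illustrative assumptions, not figures from the article, but they show how even a seemingly accurate model can flag mostly patients who will never attempt suicide.

```python
# Illustrative (not empirical) numbers showing why false positives dominate
# when the outcome is rare, even for a seemingly accurate model.
prevalence = 0.005    # assume 0.5% of screened patients attempt suicide
sensitivity = 0.85    # assumed fraction of true cases the model flags
specificity = 0.90    # assumed fraction of non-cases correctly not flagged

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)

print(f"Positive predictive value: {ppv:.1%}")
# Roughly 4% here: about 24 of every 25 flagged patients would not go on
# to attempt suicide, which is why clinical judgment must interpret the flag.
```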

Informed consent and respect for patient autonomy are also paramount. Transparency about how AI tools are used and explicit consent from patients help maintain trust and adherence to ethical principles.

Bias and fairness represent another challenge, as AI algorithms can reflect biases present in the training data. These biases may lead to unequal treatment of different demographic groups, necessitating ongoing monitoring and adjustments to ensure equitable care. Furthermore, AI should be viewed as a tool to complement, not replace, the clinical judgment of psychologists. Integrating AI insights into a holistic approach to care is critical for addressing the complexities of suicide risk.
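
The kind of ongoing monitoring mentioned above can be made routine. The sketch below assumes a held-out table of patients with true outcomes, model flags, and a demographic column (all names hypothetical) and simply compares sensitivity across groups; it is one possible audit, not a prescribed method.

```python
# Hypothetical audit: compare sensitivity (recall) of the risk model across
# demographic groups to surface disparities inherited from training data.
import pandas as pd
from sklearn.metrics import recall_score

def sensitivity_by_group(df, group_col, label_col="label", pred_col="flagged"):
    """Recall within each demographic group; large gaps warrant review."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": recall_score(sub[label_col], sub[pred_col]),
        })
    return pd.DataFrame(rows)

# Usage on a held-out evaluation table (hypothetical column name):
# print(sensitivity_by_group(eval_df, "race_ethnicity"))
```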

Finally, the use of AI raises questions about legal and ethical accountability. Determining responsibility for decisions influenced by AI predictions requires clear guidelines and policies. Psychologists must remain vigilant in ensuring that AI use aligns with both ethical standards and the best interests of their patients.