Rebecca Robbins
statnews.com
Originally posted 1 July 2020
Here is an excerpt:
The architects of Stanford’s system wanted to avoid distracting or confusing clinicians with a prediction that may not be accurate — which is why they decided against including the algorithm’s assessment of the odds that a patient will die in the next 12 months.
“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” said Ron Li, a Stanford physician and clinical informaticist who is one of the leaders of the rollout there.
After a pilot over the course of a few months last winter, Stanford plans to introduce the tool this summer as part of normal workflow; it will be used not just by physicians like Wang, but also by occupational therapists and social workers who care for and talk with seriously ill patients with a range of medical conditions.
All those design choices and procedures build up to the most important part of the process: the actual conversation with the patient.
Stanford and Penn have trained their clinicians on how to approach these discussions using a guide developed by Ariadne Labs, the organization founded by the author-physician Atul Gawande. Among the guidance to clinicians: Ask for the patient’s permission to have the conversation. Check how well the patient understands their current state of health.
And don’t be afraid of long moments of silence.
There’s one thing that almost never gets brought up in these conversations: the fact that the discussion was prompted, at least in part, by an AI.
Researchers and clinicians say they have good reasons for not mentioning it.
"To say a computer or a math equation has predicted that you could pass away within a year would be very, very devastating and would be really tough for patients to hear," Stanford's Wang said.