Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Quality Care.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs. This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Wednesday, August 9, 2023

The Moral Crisis of America’s Doctors

Wendy Dean & Elisabeth Rosenthal
The New York Times
Originally posted 15 July 23

Here is an excerpt:

Some doctors acknowledged that the pressures of the system had occasionally led them to betray the oaths they took to their patients. Among the physicians I spoke to about this, a 45-year-old critical-care specialist named Keith Corl stood out. Raised in a working-class town in upstate New York, Corl was an idealist who quit a lucrative job in finance in his early 20s because he wanted to do something that would benefit people. During medical school, he felt inspired watching doctors in the E.R. and I.C.U. stretch themselves to the breaking point to treat whoever happened to pass through the doors on a given night. “I want to do that,” he decided instantly. And he did, spending nearly two decades working long shifts as an emergency physician in an array of hospitals, in cities from Providence to Las Vegas to Sacramento, where he now lives. Like many E.R. physicians, Corl viewed his job as a calling. But over time, his idealism gave way to disillusionment, as he struggled to provide patients with the type of care he’d been trained to deliver. “Every day, you deal with somebody who couldn’t get some test or some treatment they needed because they didn’t have insurance,” he said. “Every day, you’re reminded how savage the system is.”

Corl was particularly haunted by something that happened in his late 30s, when he was working in the emergency room of a hospital in Pawtucket, R.I. It was a frigid winter night, so cold you could see your breath. The hospital was busy. When Corl arrived for his shift, all of the facility’s E.R. beds were filled. Corl was especially concerned about an elderly woman with pneumonia who he feared might be slipping into sepsis, an extreme, potentially fatal immune response to infection. As Corl was monitoring her, a call came in from an ambulance, informing the E.R. staff that another patient would soon be arriving, a woman with severe mental health problems. The patient was familiar to Corl — she was a frequent presence in the emergency room. He knew that she had bipolar disorder. He also knew that she could be a handful. On a previous visit to the hospital, she detached the bed rails on her stretcher and fell to the floor, injuring a nurse.

In a hospital that was adequately staffed, managing such a situation while keeping tabs on all the other patients might not have been a problem. But Corl was the sole doctor in the emergency room that night; he understood this to be in part a result of cost-cutting measures (the hospital has since closed). After the ambulance arrived, he and a nurse began talking with the incoming patient to gauge whether she was suicidal. They determined she was not. But she was combative, arguing with the nurse in an increasingly aggressive tone. As the argument grew more heated, Corl began to fear that if he and the nurse focused too much of their attention on her, other patients would suffer needlessly and that the woman at risk of septic shock might die.

Corl decided he could not let that happen. Exchanging glances, he and the nurse unplugged the patient from the monitor, wheeled her stretcher down the hall, and pushed it out of the hospital. The blast of cold air when the door swung open caused Corl to shudder. A nurse called the police to come pick the patient up. (It turned out that she had an outstanding warrant and was arrested.) Later, after he returned to the E.R., Corl could not stop thinking about what he’d done, imagining how the medical-school version of himself would have judged his conduct. “He would have been horrified.”


Summary: The article explores the moral distress that many doctors are experiencing in the United States healthcare system. Doctors are feeling increasingly pressured to make decisions based on financial considerations rather than what is best for their patients. This is leading to a number of problems, including:
  • Decreased quality of care: Doctors are being forced to cut corners on care, which is leading to worse outcomes for patients.
  • Increased burnout: Doctors are feeling increasingly stressed and burned out, which is making it difficult for them to provide quality care.
  • Loss of moral compass: Doctors are feeling like they are losing their moral compass, as they are being forced to make decisions that they know are not in the best interests of their patients.
The article concludes by calling for a number of reforms to the healthcare system, including:
  • Paying doctors based on quality of care, not volume of services: This would incentivize doctors to provide the best possible care, rather than just the most profitable care.
  • Giving doctors more control over their practice: This would allow doctors to make decisions based on what is best for their patients, rather than what is best for their employers.
  • Supporting doctors' mental health: Doctors need to be supported through the challenges of providing care in the current healthcare system.

Sunday, July 2, 2023

Predictable, preventable medical errors kill thousands yearly. Is it getting any better?

Karen Weintraub
USAToday.com
Originally posted 3 May 23

Here are two excerpts:

A 2017 study put the figure at over 250,000 a year, making medical errors the nation's third leading cause of death at the time. There are no more recent figures.

But the pandemic clearly worsened patient safety, with Leapfrog's new assessment showing increases in hospital-acquired infections, including urinary tract and drug-resistant staph infections as well as infections in central lines ‒ tubes inserted into the neck, chest, groin, or arm to rapidly provide fluids, blood or medications. These infections spiked to a 5-year high during the pandemic and remain high.

"Those are really terrible declines in performance," Binder said.

Patient safety: 'I've never ever, ever seen that'

Not all patient safety news is bad. In one study published last year, researchers examined records from 190,000 patients discharged from hospitals nationwide after being treated for a heart attack, heart failure, pneumonia or major surgery. Patients saw far fewer bad events following treatment for those four conditions, as well as for adverse events caused by medications, hospital-acquired infections, and other factors.

It was the first study of patient safety that left Binder optimistic. "This was improvement and I've never ever, ever seen that," she said.

(cut)

On any given day now, 1 of every 31 hospitalized patients has an infection acquired during their care, according to a recent study from the Centers for Disease Control and Prevention. These infections cost health care systems at least $28.4 billion each year and account for an additional $12.4 billion in lost productivity and premature deaths.

"That blew me away," said Shaunte Walton, system director of Clinical Epidemiology & Infection Prevention at UCLA Health. Electronic tools can help, but even with them, "there's work to do to try to operationalize them," she said.

The patient experience also slipped during the pandemic. According to Leapfrog's latest survey, patients reported declines in nurse communication, doctor communication, staff responsiveness, communication about medicine and discharge information.

Boards and leadership teams are "highly distracted" right now with workforce shortages, new payment systems, concerns about equity and decarbonization, said Dr. Donald Berwick, president emeritus and senior fellow at the Institute for Healthcare Improvement and former administrator of the Centers for Medicare & Medicaid Services.

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change (PDF, 992KB) was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Saturday, May 20, 2023

ChatGPT Answers Beat Physicians' on Info, Patient Empathy, Study Finds

Michael DePeau-Wilson
MedPage Today
Originally published 28 April 23

The artificial intelligence (AI) chatbot ChatGPT outperformed physicians when answering patient questions, based on quality of response and empathy, according to a cross-sectional study.

Of 195 exchanges, evaluators preferred ChatGPT responses to physician responses in 78.6% (95% CI 75.0-81.8) of the 585 evaluations, reported John Ayers, PhD, MA, of the Qualcomm Institute at the University of California San Diego in La Jolla, and co-authors.

The AI chatbot responses were given a significantly higher quality rating than physician responses (t=13.3, P<0.001), with the proportion of responses rated as good or very good quality (≥4) higher for ChatGPT (78.5%) than physicians (22.1%), amounting to a 3.6 times higher prevalence of good or very good quality responses for the chatbot, they noted in JAMA Internal Medicine.

Furthermore, ChatGPT's responses were rated as being significantly more empathetic than physician responses (t=18.9, P<0.001), with the proportion of responses rated as empathetic or very empathetic (≥4) higher for ChatGPT (45.1%) than for physicians (4.6%), amounting to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

"ChatGPT provides a better answer," Ayers told MedPage Today. "I think of our study as a phase zero study, and it clearly shows that ChatGPT wins in a landslide compared to physicians, and I wouldn't say we expected that at all."

He said they were trying to figure out how ChatGPT, developed by OpenAI, could potentially help resolve the burden of answering patient messages for physicians, which he noted is a well-documented contributor to burnout.

Ayers said that he approached this study with his focus on another population as well, pointing out that the burnout crisis might be affecting roughly 1.1 million providers across the U.S., but it is also affecting about 329 million patients who are engaging with overburdened healthcare professionals.

(cut)

"Physicians will need to learn how to integrate these tools into clinical practice, defining clear boundaries between full, supervised, and proscribed autonomy," he added. "And yet, I am cautiously optimistic about a future of improved healthcare system efficiency, better patient outcomes, and reduced burnout."

After seeing the results of this study, Ayers thinks that the research community should be working on randomized controlled trials to study the effects of AI messaging, so that the future development of AI models will be able to account for patient outcomes.

Monday, October 31, 2022

Longest Strike Ends: California Mental Health Care Workers Win Big

Cal Winslow
Counterpunch.org
Originally posted 24 OCT 22

Two thousand mental health clinicians have won; Kaiser Permanente has lost. The 10-week strike has ended in near-total victory for the National Union of Healthcare Workers (NUHW). The therapists walked out on August 15, in what became the longest mental health care workers’ strike on record.

Two issues dominated negotiations from the start: workload for Kaiser therapists and wait time for Kaiser patients. The strikers won on both, forcing concessions that until now were all but unheard of. They won breakthrough provisions to retain staff and reduce wait times for patients, as well as a plan to collaborate on transforming Kaiser’s model for providing mental health care. The new four-year contract is retroactive to September 2021 and expires in September 2025. Darrell Steinberg, Mayor of Sacramento, served as a mediator. Members of the NUHW voted 1,561 to 36 to ratify it.

Braving three-digit heat, strikers walked picket lines throughout Northern California and the Central Valley. They picketed, marched and rallied at Kaiser hospitals – in a strike that caught the attention of mental health care advocates everywhere. “Our strike was difficult and draining, but it was worth it,” said Natalie Rogers, a therapist for Kaiser in Santa Rosa. “We stood up to the biggest nonprofit in the nation, and we made gains that will help better serve our patients and will advance the cause of mental health parity throughout the country.”

The mental health clinicians I’ve met are almost universally modest and careful in their choice of words, and here is an example. To say that Kaiser is “the biggest nonprofit” is an understatement to say the least – its revenues are in the billions, and its managers make millions, while this giant among giants, typical of the world of corporate health care, oversees its empire as if it were making cars and trucks.

I’ve seen NUHW rallies well attended by patients themselves, along with family members and supporters who are angry and bitter. Frequently they carry signs to the effect that the issues here are life and death; these are rallies where speakers break down in tears, where placards tell us that suicide can be the outcome of care denied – “Stop the Suicides!” It’s a wonder more therapists don’t move on. The world of pain of the mental health patient can be just as acute as that of the medical patient. Ask a therapist. It’s not that the clinicians don’t want to tell us this; it’s that, in their own way, they are telling us. It’s why they fight so hard.

Wednesday, January 27, 2021

What One Health System Learned About Providing Digital Services in the Pandemic

Marc Harrison
Harvard Business Review
Originally posted 11 Dec 20

Here are two excerpts:

Lesson 2: Digital care is safer during the pandemic.

A patient who’s tested positive for Covid doesn’t have to go see her doctor or go into an urgent care clinic to discuss her symptoms. Doctors and other caregivers who are providing virtual care for hospitalized Covid patients don’t face increased risk of exposure. They also don’t have to put on personal protective equipment, step into the patient’s room, then step outside and take off their PPE. We need those supplies, and telehealth helps us preserve them.

Intermountain Healthcare’s virtual hospital is especially well-suited for Covid patients. It works like this: In a regular hospital, you come into the ER, and we check you out and think you’re probably going to be okay, but you’re sick enough that we want to monitor you. So, we admit you.

With our virtual hospital — which uses a combination of telemedicine, home health, and remote patient monitoring — we send you home with a technology kit that allows us to check how you’re doing. You’ll be cared for by a virtual team, including a hospitalist who monitors your vital signs around the clock and home health nurses who do routine rounding. That’s working really well: Our clinical outcomes are excellent, our satisfaction scores are through the roof, and it’s less expensive. Plus, it frees up the hospital beds and staff we need to treat our sickest Covid patients.

(cut)

Lesson 4: Digital tools support the direction health care is headed.

Telehealth supports value-based care, in which hospitals and other care providers are paid based on the health outcomes of their patients, not on the amount of care they provide. The result is a greater emphasis on preventive care — which reduces unsustainable health care costs.

Intermountain serves a large population of at-risk, pre-paid consumers, and the more they use telehealth, the easier it is for them to stay healthy — which reduces costs for them and for us. The pandemic has forced payment systems, including the government’s, to keep up by expanding reimbursements for telehealth services.

This is worth emphasizing: If we can deliver care in lower-cost settings, we can reduce the cost of care. Some examples:
  • The average cost of a virtual encounter at Intermountain is $367 less than the cost of a visit to an urgent care clinic, physician’s office, or emergency department (ED).
  • Our virtual newborn ICU has helped us reduce the number of transports to our large hospitals by 65 a year since 2015. Not counting the clinical and personal benefits, that’s saved $350,000 per year in transportation costs.
  • Our internal study of 150 patients in one rural Utah town showed each patient saved an average of $2,000 in driving expenses and lost wages over a year’s time because he or she was able to receive telehealth care close to home. We also avoided pumping 106,460 kilograms of CO2 into the environment — and (per the following point) the town’s 24-bed hospital earned $1.6 million that otherwise would have shifted to a larger hospital in a bigger town.

Thursday, January 7, 2021

How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951. 

Abstract

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.

AI Risks in Health Care

Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years.1 The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.

A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.


Saturday, December 15, 2018

What is ‘moral distress’? A narrative synthesis of the literature

Georgina Morley, Jonathan Ives, Caroline Bradbury-Jones, & Fiona Irvine
Nursing Ethics
First Published October 8, 2017 Review Article  

Introduction

The concept of moral distress (MD) was introduced to nursing by Jameton, who defined MD as arising ‘when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action’. MD has subsequently gained increasing attention in nursing research, the majority of which has been conducted in North America, though studies are now emerging in South America, Europe, the Middle East and Asia. Studies have highlighted the deleterious effects of MD, with correlations between higher levels of MD, negative perceptions of ethical climate and increased levels of compassion fatigue among nurses. The consensus is that MD can negatively impact patient care, causing nurses to avoid certain clinical situations and ultimately leave the profession. MD is therefore a significant problem within nursing, requiring investigation, understanding, clarification and responses. The growing body of MD research, however, is arguably failing to bring the required clarification and has instead complicated attempts to study it. The increasing number of cited causes and effects of MD means the term has expanded to the point that, according to Hanna and McCarthy and Deady, it is becoming an ‘umbrella term’ that lacks conceptual clarity, referring unhelpfully to a wide range of phenomena and causes. Without a coherent and consistent conceptual understanding, however, empirical studies of MD’s prevalence, effects, and possible responses are likely to be confused and contradictory.

A useful starting point is a systematic exploration of existing literature to critically examine definitions and understandings currently available, interrogating their similarities, differences, conceptual strengths and weaknesses. This article presents a narrative synthesis that explored proposed necessary and sufficient conditions for MD, and in doing so, this article also identifies areas of conceptual tension and agreement.

Wednesday, November 14, 2018

Keeping Human Stories at the Center of Health Care

M. Bridget Duffy
Harvard Business Review
Originally published October 8, 2018

Here is an excerpt:

A mentor told me early in my career that only 20% of healing involves the high-tech stuff. The remaining 80%, he said, is about the relationships we build with patients, the physical environments we create, and the resources we provide that enable patients to tap into whatever they need for spiritual sustenance. The longer I work in health care, the more I realize just how right he was.

How do we get back to the 80-20 rule? By placing the well-being of patients and care teams at the top of the list for every initiative we undertake and every technology we introduce. Rather than just introducing technology with no thought as to its impact on clinicians — as happened with many rollouts of electronic medical records (EMRs) — we need to establish a way to quantifiably measure whether a new technology actually improves a clinician’s workday and ability to deliver care or simply creates hassles and inefficiency. Let’s develop an up-front “technology ROI” that measures workflow impact, inefficiency, hassle and impact on physician and nurse well-being.

The National Taskforce for Humanity in Healthcare, of which I am a founding member, is piloting a system of metrics for well-being developed by J. Bryan Sexton of Duke University Medical Center. Instead of measuring burnout or how broken health care people are, Dr. Sexton’s metrics focus on emotional thriving and emotional resilience. (The former is gauged by how strongly people agree or disagree with these statements: “I have a chance to use my strengths every day at work,” “I feel like I am thriving at my job,” “I feel like I am making a meaningful difference at my job,” and “I often have something that I am very much looking forward to at my job.”)


Saturday, November 10, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction

Maria Panagioti, Keith Geraghty, Judith Johnson
JAMA Intern Med. 2018;178(10):1317-1330.
doi:10.1001/jamainternmed.2018.3713

Abstract

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Study Selection  Quantitative observational studies.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cohen Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cohen Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.