Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Safety. Show all posts

Tuesday, April 23, 2019

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play but because they are occupation or discipline specific they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Wednesday, April 10, 2019

FDA Chief Scott Gottlieb Calls for Tighter Regulations on Electronic Health Records

Fred Schulte and Erika Fry
Fortune.com
Originally posted March 21, 2019

Food and Drug Administration Commissioner Scott Gottlieb on Wednesday called for tighter scrutiny of electronic health records systems, which have prompted thousands of reports of patient injuries and other safety problems over the past decade.

“What we really need is a much more tailored approach, so that we have appropriate oversight of EHRs when they’re doing things that could create risk for patients,” Gottlieb said in an interview with Kaiser Health News.

Gottlieb was responding to “Botched Operation,” a report published this week by KHN and Fortune. The investigation found that the federal government has spent more than $36 billion over the past 10 years to switch doctors and hospitals from paper to digital records systems. In that time, thousands of reports of deaths, injuries, and near misses linked to EHRs have piled up in databases—including at least one run by the FDA.

The info is here.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.
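The training loop described above — ask humans many "is A better than B?" questions, then fit a model to those judgements — can be sketched as a minimal Bradley-Terry preference model. Everything concrete here (the linear features, the toy data, the function name) is an illustrative assumption, not the authors' implementation:

```python
import math
import random

def train_reward_model(pairs, n_features, lr=0.1, epochs=500, seed=0):
    """Fit a linear reward r(x) = w . x from pairwise human judgements.

    `pairs` is a list of (better, worse) feature vectors. Training maximizes
    the Bradley-Terry likelihood: P(A preferred over B) = sigmoid(r(A) - r(B)).
    """
    rng = random.Random(seed)
    w = [rng.gauss(0, 0.01) for _ in range(n_features)]
    for _ in range(epochs):
        for better, worse in pairs:
            diff = [b - c for b, c in zip(better, worse)]
            score = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-score))   # model's current preference for "better"
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * diff[i]  # gradient of the log-likelihood
    return w

# Toy data: feature 0 marks "factually accurate", feature 1 marks "rude".
# The (hypothetical) human judges prefer accurate, polite statements.
pairs = [
    ([1, 0], [0, 0]),  # accurate beats neutral
    ([1, 0], [1, 1]),  # polite beats rude
    ([0, 0], [0, 1]),  # neutral beats rude
]
w = train_reward_model(pairs, n_features=2)
assert w[0] > 0 and w[1] < 0  # learned: accuracy is good, rudeness is bad
```

The fitted weights give the system the same "fuzzy access to approximate rules" the excerpt describes: it can now score statements it was never directly asked about.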

The info is here.

Wednesday, March 20, 2019

Israel Approves Compassionate Use of MDMA to Treat PTSD

Ido Efrati
www.haaretz.com
Originally posted February 10, 2019

MDMA, popularly known as ecstasy, is a drug more commonly associated with raves and nightclubs than a therapist’s office.

Emerging research has shown promising results in using this “party drug” to treat patients suffering from post-traumatic stress disorder, and Israel’s Health Ministry has just approved the use of MDMA to treat dozens of patients.

MDMA is classified in Israel as a “dangerous drug”, recreational use is illegal, and therapeutic use of MDMA has yet to be formally approved and is still in clinical trials.

However, this treatment is deemed as “compassionate use,” which allows drugs that are still in development to be made available to patients outside of a clinical trial due to the lack of effective alternatives.

The info is here.

Tuesday, March 5, 2019

Call for retraction of 400 scientific papers amid fears organs came from Chinese prisoners

Melissa Davey
The Guardian
Originally published February 5, 2019

A world-first study has called for the mass retraction of more than 400 scientific papers on organ transplantation, amid fears the organs were obtained unethically from Chinese prisoners.

The Australian-led study exposes a mass failure of English language medical journals to comply with international ethical standards in place to ensure organ donors provide consent for transplantation.

The study was published on Wednesday in the medical journal BMJ Open. Its author, the professor of clinical ethics Wendy Rogers, said journals, researchers and clinicians who used the research were complicit in “barbaric” methods of organ procurement.

“There’s no real pressure from research leaders on China to be more transparent,” Rogers, from Macquarie University in Sydney, said. “Everyone seems to say, ‘It’s not our job’. The world’s silence on this barbaric issue must stop.”

A report published in 2016 found a large discrepancy between official transplant figures from the Chinese government and the number of transplants reported by hospitals. While the government says 10,000 transplants occur each year, hospital data shows between 60,000 and 100,000 organs are transplanted each year. The report provides evidence that this gap is being made up by executed prisoners of conscience.

The info is here.

Sunday, February 17, 2019

Physician burnout now essentially a public health crisis

Priyanka Dayal McCluskey
Boston Globe
Originally posted January 17, 2019

Physician burnout has reached alarming levels and now amounts to a public health crisis that threatens to undermine the doctor-patient relationship and the delivery of health care nationwide, according to a report from Massachusetts doctors to be released Thursday.

The report — from the Massachusetts Medical Society, the Massachusetts Health & Hospital Association, and the Harvard T.H. Chan School of Public Health — portrays a profession struggling with the unyielding demands of electronic health record systems and ever-growing regulatory burdens.

It urges hospitals and medical practices to take immediate action by putting senior executives in charge of physician well-being and by giving doctors better access to mental health services. The report also calls for significant changes to make health record systems more user-friendly.

While burnout has long been a worry in the profession, the report reflects a newer phenomenon — the draining documentation and data entry now required of doctors. Today’s electronic record systems are so complex that a simple task, such as ordering a prescription, can take many clicks.

The info is here.

Thursday, February 14, 2019

Sex talks

Rebecca Kukla
aeon.co
Originally posted February 4, 2019

Communication is essential to ethical sex. Typically, our public discussions focus on only one narrow kind of communication: requests for sex followed by consent or refusal. But notice that we use language and communication in a wide variety of ways in negotiating sex. We flirt and rebuff, express curiosity and repulsion, and articulate fantasies. Ideally, we talk about what kind of sex we want to have, involving which activities, and what we like and don’t like. We settle whether or not we are going to have sex at all, and when we want to stop. We check in with one another and talk dirty to one another during sex. 

In this essay I explore the language of sexual negotiation. My specific interest is in what philosophers call the ‘pragmatics’ of speech. That is, I am less interested in what words mean than I am in how speaking can be understood as a kind of action that has a pragmatic effect on the world. Philosophers who specialise in what is known as ‘speech act theory’ focus on what an act of speaking accomplishes, as opposed to what its words mean. J L Austin developed this way of thinking about the different things that speech can do in his classic book, How To Do Things With Words (1962), and many philosophers of language have developed the idea since.

The info is here.

Happy Valentine's Day

Thursday, January 31, 2019

HHS issues voluntary guidelines amid rise of cyberattacks

Samantha Liss
www.healthcaredive.com
Originally published January 2, 2019

Dive Brief:

  • To combat security threats in the health sector, HHS issued a voluminous report that details ways small, local clinics and large hospital systems alike can reduce their cybersecurity risks. The guidelines are voluntary, so providers will not be required to adopt the practices identified in the report. 
  • The four-volume report is the culmination of work by a task force, convened in May 2017, to identify the five most common threats in the industry and 10 ways to prepare against those threats.
  • The five most common threats are email phishing attacks, ransomware attacks, loss or theft of equipment or data, accidental or intentional data loss by an insider and attacks against connected medical devices.

Friday, January 25, 2019

Study Links Drug Maker Gifts for Doctors to More Overdose Deaths

Abby Goodnough
The New York Times
Originally posted January 18, 2019

A new study offers some of the strongest evidence yet of the connection between the marketing of opioids to doctors and the nation’s addiction epidemic.

It found that counties where opioid manufacturers offered a large number of gifts and payments to doctors had more overdose deaths involving the drugs than counties where direct-to-physician marketing was less aggressive.

The study, published Friday in JAMA Network Open, said the industry spent about $40 million promoting opioid medications to nearly 68,000 doctors from 2013 through 2015, including by paying for meals, trips and consulting fees. And it found that for every three additional payments that companies made to doctors per 100,000 people in a county, overdose deaths involving prescription opioids there a year later were 18 percent higher.

Even as the opioid epidemic was killing more and more Americans, such marketing practices remained widespread. From 2013 through 2015, roughly 1 in 12 doctors received opioid-related marketing, according to the study, including 1 in 5 family practice doctors.

The info is here.

Thursday, January 17, 2019

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018

Introduction

Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Wednesday, January 9, 2019

'Should we even consider this?' WHO starts work on gene editing ethics

Agence France-Presse
Originally published 3 Dec 2018

The World Health Organization is creating a panel to study the implications of gene editing after a Chinese scientist controversially claimed to have created the world’s first genetically edited babies.

“It cannot just be done without clear guidelines,” Tedros Adhanom Ghebreyesus, the head of the UN health agency, said in Geneva.

The organisation was gathering experts to discuss rules and guidelines on “ethical and social safety issues”, added Tedros, a former Ethiopian health minister.

Tedros made the comments after Chinese scientist He Jiankui, who led the medical trial, claimed to have successfully altered the DNA of twin girls, whose father is HIV-positive, to prevent them from contracting the virus.

His experiment has prompted widespread condemnation from the scientific community in China and abroad, as well as a backlash from the Chinese government.

The info is here.

Friday, January 4, 2019

Beyond safety questions, gene editing will force us to deal with a moral quandary

Josephine Johnston
STAT News
Originally published November 29, 2018

Here is an excerpt:

The majority of this criticism is motivated by major concerns about safety — we simply do not yet know enough about the impact of CRISPR-Cas9, the powerful new gene-editing tool, to use it to create children. But there’s a second, equally pressing concern mixed into many of these condemnations: that gene-editing human eggs, sperm, or embryos is morally wrong.

That moral claim may prove more difficult to resolve than the safety questions, because altering the genomes of future persons — especially in ways that can be passed on generation after generation — goes against international declarations and conventions, national laws, and the ethics codes of many scientific organizations. It also just feels wrong to many people, akin to playing God.

As a bioethicist and a lawyer, I am in no position to say whether CRISPR will at some point prove safe and effective enough to justify its use in human reproductive cells or embryos. But I am willing to predict that blanket prohibitions on permanent changes to the human genome will not stand. When those prohibitions fall — as today’s announcement from the Second International Summit on Human Genome Editing suggests they will — what ethical guideposts or moral norms should replace them?

The info is here.

Saturday, November 24, 2018

Establishing an AI code of ethics will be harder than people think

Karen Hao
www.technologyreview.com
Originally posted October 21, 2018

Over the past six years, the New York City police department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Not only is facial recognition imperfect; studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

The info is here.

Sunday, November 11, 2018

Nine risk management lessons for practitioners.

Taube, Daniel O., Scroppo, Joe, & Zelechoski, Amanda D.
Practice Innovations, Oct 04, 2018

Abstract

Risk management is an essential skill for professionals and is important throughout the course of their careers. Effective risk management blends a utilitarian focus on the potential costs and benefits of particular courses of action, with a solid foundation in ethical principles. Awareness of particularly risk-laden circumstances and practical strategies can promote safer and more effective practice. This article reviews nine situations and their associated lessons, illustrated by case examples. These situations emerged from our experience as risk management consultants who have listened to and assisted many practitioners in addressing the challenges they face on a day-to-day basis. The lessons include a focus on obtaining consent, setting boundaries, flexibility, attention to clinician affect, differentiating the clinician’s own values and needs from those of the client, awareness of the limits of competence, maintaining adequate legal knowledge, keeping good records, and routine consultation. We highlight issues and approaches to consider in these types of cases that minimize risks of adverse outcomes and enhance good practice.

The info is here.

Here is a portion of the article:

Being aware of basic legal parameters can help clinicians to avoid making errors in this complex arena. Yet clinicians are not usually lawyers and tend to have only limited legal knowledge. This gives rise to a risk of assuming more mastery than one may have.

Indeed, research suggests that a range of professionals, including psychotherapists, overestimate their capabilities and competencies, even in areas in which they have received substantial training (Creed, Wolk, Feinberg, Evans, & Beck, 2016; Lipsett, Harris, & Downing, 2011; Mathieson, Barnfield, & Beaumont, 2009; Walfish, McAlister, O’Donnell, & Lambert, 2012).

Saturday, November 10, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction

Maria Panagioti, Keith Geraghty, Judith Johnson
JAMA Intern Med. 2018;178(10):1317-1330.
doi:10.1001/jamainternmed.2018.3713

Abstract

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Study Selection  Quantitative observational studies.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cochran Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cochran Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.
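The pooling the abstract describes — combining per-study odds ratios under a random-effects model — can be sketched with the standard DerSimonian-Laird method. The three studies below are hypothetical numbers for illustration, not data from the JAMA meta-analysis:

```python
import math

def pool_odds_ratios(ors, ci_lowers, ci_uppers):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    Each study contributes an OR with a 95% CI; the variance of each
    log-OR is recovered from the CI width.
    """
    y = [math.log(o) for o in ors]
    # SE of a log-OR from its 95% CI: (ln(upper) - ln(lower)) / (2 * 1.96)
    v = [((math.log(u) - math.log(l)) / (2 * 1.96)) ** 2
         for l, u in zip(ci_lowers, ci_uppers)]
    w = [1 / vi for vi in v]                                  # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran Q (heterogeneity)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                   # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]                    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Three hypothetical studies linking burnout to patient safety incidents:
or_pooled, lo, hi = pool_odds_ratios([1.8, 2.1, 1.6],
                                     [1.2, 1.4, 1.0],
                                     [2.7, 3.1, 2.6])
assert lo < or_pooled < hi and or_pooled > 1
```

When the between-study heterogeneity (tau-squared) is large — as the abstract's "heterogeneity was high" suggests — the random-effects weights flatten and the pooled CI widens relative to a fixed-effect analysis.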

Thursday, November 8, 2018

Do We Need To Teach Ethics And Empathy To Data Scientists?

Kalev Leetaru
Forbes.com
Originally posted October 8, 2018

Here is an excerpt:

One of the most frightening aspects of the modern web is the speed at which it has struck down decades of legislation and professional norms regarding personal privacy and the ethics of turning ordinary citizens into laboratory rats to be experimented on against their wills. In the space of just two decades the online world has weaponized personalization and data brokering, stripped away the last vestiges of privacy, centralized control over the world’s information and communications channels, changed the public’s understanding of the right over their digital selves and profoundly reshaped how the scholarly world views research ethics, informed consent and the right to opt out of being turned into a digital guinea pig.

It is the latter which in many ways has driven each of the former changes. Academia’s changing views towards IRB and ethical review has produced a new generation of programmers and data scientists who view research ethics as merely an outdated obsolete historical relic that was an obnoxious barrier preventing them from doing as they pleased to an unsuspecting public.

(cut)

Ironically, however, when asked whether she would consent to someone mass harvesting all of her own personal information from all of the sites she has willingly signed up for over the years, the answer was a resounding no. When asked how she reconciled the difference between her view that users of platforms willingly relinquish their right to privacy, while her own data should be strictly protected, she was unable to articulate a reason other than that those who create and study the platforms are members of the “societal elite” who must be granted an absolute right to privacy, while “ordinary” people can be mined and manipulated at will. Such an empathy gap is common in the technical world, in which people’s lives are dehumanized into spreadsheets of numbers that remove any trace of connection or empathy.

The info is here.

Monday, November 5, 2018

Bolton says 'excessive' ethics checks discourage outsiders from joining government

Nicole Gaouette
CNN.com
Originally posted October 31, 2018

A day after CNN reported that the Justice Department is investigating whether Interior Secretary Ryan Zinke has broken the law by using his office to personally enrich himself, national security adviser John Bolton told the Hamilton Society in Washington that ethics rules make it hard for people outside of the government to serve.

Bolton said "things have gotten more bureaucratic, harder to get things done" since he served under President George H.W. Bush in the 1990s and blamed the difficulty, in part, on the "excessive nature of the so-called ethics checks."

"If you were designing a system to discourage people from coming into government, you would do it this way," Bolton said.

"That risks building up a priestly class" of government employees, he added.

"It's really depressing to see," Bolton said of the bureaucratic red tape.

The info is here.

My take: Mr. Bolton is wrong.  We need rigorous ethical guidelines, transparency, enforceability, and thorough background checks.  Otherwise, the swamp will grow much greater than it already is.

Saturday, November 3, 2018

Just deserts

A Conversation Between Dan Dennett and Gregg Caruso
aeon.co
Originally published October 4, 2018

Here is an excerpt:

There are additional concerns as well. As I argue in my Public Health and Safety (2017), the social determinants of criminal behaviour are broadly similar to the social determinants of health. In that work, and elsewhere, I advocate adopting a broad public-health approach for identifying and taking action on these shared social determinants. I focus on how social inequities and systemic injustices affect health outcomes and criminal behaviour, how poverty affects brain development, how offenders often have pre-existing medical conditions (especially mental-health issues), how homelessness and education affects health and safety outcomes, how environmental health is important to both public health and safety, how involvement in the criminal justice system itself can lead to or worsen health and cognitive problems, and how a public-health approach can be successfully applied within the criminal justice system. I argue that, just as it is important to identify and take action on the social determinants of health if we want to improve health outcomes, it is equally important to identify and address the social determinants of criminal behaviour. My fear is that the system of desert you want to preserve leads us to myopically focus on individual responsibility and ultimately prevents us from addressing the systemic causes of criminal behaviour.

Consider, for example, the crazed reaction to [the then US president Barack] Obama’s claim that, ‘if you’ve got a [successful] business, you didn’t build that’ alone. The Republicans were so incensed by this claim that they dedicated the second day of the 2012 Republican National Convention to the theme ‘We Built it!’ Obama’s point, though, was simple, innocuous, and factually correct. To quote him directly: ‘If you’ve been successful, you didn’t get there on your own.’ So, what’s so threatening about this? The answer, I believe, lies in the notion of just deserts. The system of desert keeps alive the belief that if you end up in poverty or prison, this is ‘just’ because you deserve it. Likewise, if you end up succeeding in life, you and you alone are responsible for that success. This way of thinking keeps us locked in the system of blame and shame, and prevents us from addressing the systemic causes of poverty, wealth-inequality, racism, sexism, educational inequity and the like. My suggestion is that we move beyond this, and acknowledge that the lottery of life is not always fair, that luck does not average out in the long run, and that who we are and what we do is ultimately the result of factors beyond our control.

The info is here.

I clipped out the more social-psychological aspect of the conversation.  There is a much broader, philosophical component regarding free will earlier in the conversation.

Wednesday, October 3, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction: A Systematic Review and Meta-analysis.

Maria Panagioti, PhD; Keith Geraghty, PhD; Judith Johnson, PhD; et al
JAMA Intern Med. Published online September 4, 2018.
doi:10.1001/jamainternmed.2018.3713

Abstract

Importance  Physician burnout has taken the form of an epidemic that may affect core domains of health care delivery, including patient safety, quality of care, and patient satisfaction. However, this evidence has not been systematically quantified.

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cochran Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cochran Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.

The research is here.