Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Errors.

Wednesday, February 8, 2017

Medical culture encourages doctors to avoid admitting mistakes

By Lawrence Schlachter
STAT News
Originally published on January 13, 2017

Here are two excerpts:

In reality, the factor that most influences doctors to hide or disclose medical errors should be clear to anyone who has spent much time in the profession: The culture of medicine frowns on admitting mistakes, usually under the pretense of fear of malpractice lawsuits.

But what’s really at risk are doctors’ egos and the preservation of a system that lets physicians avoid accountability by ignoring problems or shifting blame to “the system” or any culprit other than themselves.

(cut)

What is a patient to do in this environment? The first thing is to be aware of your own predisposition to take everything your doctor says at face value. Listen closely and you may hear cause for more intense questioning.

You will likely never hear the terms negligence, error, mistake, or injury in a hospital. Instead, these harsh but truthful words and phrases are replaced with softer ones like accident, adverse event, or unfortunate outcome. If you hear any of these euphemisms, ask more questions or seek another opinion from a different doctor, preferably at a different facility.

Most doctors would never tell a flagrant lie. But in my experience as a neurosurgeon and as an attorney, too many of them resort to half-truths and glaring omissions when it comes to errors. Beware of passive language like “the patient experienced bleeding” rather than “I made a bad cut”; attributing an error to random chance or a nameless, faceless system; or trivialization of the consequences of the error by claiming something was “a blessing in disguise.”

The article is here.

Wednesday, December 23, 2015

Is It Safe For Medical Residents To Work 30-Hour Shifts?

By Rob Stein
NPR
Originally published December 7, 2015

Since 2003, strict rules have limited how long medical residents can work without a break. The rules are supposed to minimize the risk that these doctors-in-training will make mistakes that threaten patients' safety because of fatigue.

But are these rules really the best for new doctors and their patients? There's been intense debate over that and, some say, little data to resolve the question.

So a group of researchers decided to follow thousands of medical residents at dozens of hospitals around the country.

The study compares the current rules, which limit first-year residents to working no more than 16 hours without a break, with a more flexible schedule that could allow the young doctors to work up to 30 hours.

Researchers will examine whether more mistakes happen on one schedule or the other and whether the residents learn more one way or the other. The year-long study started in July.

The entire article is here.

Monday, November 16, 2015

Believing What You Don’t Believe

By Jane L. Risen and David Nussbaum
The New York Times - Gray Matter
Originally published October 30, 2015

Here is an excerpt:

But as one of us, Professor Risen, discusses in a paper just published in Psychological Review, many instances of superstition and magical thinking indicate that the slow system doesn’t always behave this way. When people pause to reflect on the fact that their superstitious intuitions are irrational, the slow system, which is supposed to fix things, very often doesn’t do so. People can simultaneously recognize that, rationally, their superstitious belief is impossible, but persist in their belief, and their behavior, regardless. Detecting an error does not necessarily lead people to correct it.

This cognitive quirk is particularly easy to identify in the context of superstition, but it isn’t restricted to it. If, for example, the manager of a baseball team calls for an ill-advised sacrifice bunt, it is easy to assume that he doesn’t know that the odds indicate his strategy is likely to cost his team runs. But the manager may have all the right information; he may just choose not to use it, based on his intuition in that specific situation.

The entire article is here.

Believing What We Do Not Believe: Acquiescence to Superstitious Beliefs and Other Powerful Intuitions

By Risen, Jane L.
Psychological Review, October 19, 2015

Abstract

Traditionally, research on superstition and magical thinking has focused on people’s cognitive shortcomings, but superstitions are not limited to individuals with mental deficits. Even smart, educated, emotionally stable adults have superstitions that are not rational. Dual process models—such as the corrective model advocated by Kahneman and Frederick (2002, 2005), which suggests that System 1 generates intuitive answers that may or may not be corrected by System 2—are useful for illustrating why superstitious thinking is widespread, why particular beliefs arise, and why they are maintained even though they are not true. However, to understand why superstitious beliefs are maintained even when people know they are not true requires that the model be refined. It must allow for the possibility that people can recognize—in the moment—that their belief does not make sense, but act on it nevertheless. People can detect an error, but choose not to correct it, a process I refer to as acquiescence. The first part of the article will use a dual process model to understand the psychology underlying magical thinking, highlighting features of System 1 that generate magical intuitions and features of the person or situation that prompt System 2 to correct them. The second part of the article will suggest that we can improve the model by decoupling the detection of errors from their correction and recognizing acquiescence as a possible System 2 response. I suggest that refining the theory will prove useful for understanding phenomena outside of the context of magical thinking.

The article is here.
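
One way to see the structure of the proposed refinement is to treat detection and correction as separate steps. Below is a minimal Python sketch of that idea; it is not code from the paper (which is purely theoretical), and all names, labels, and inputs are invented for illustration:

# A toy sketch, not from the paper itself, of the refinement Risen proposes:
# error detection and error correction are modeled as separate System 2 steps,
# so a person can notice an intuition is irrational and still act on it
# (acquiescence). All names and inputs here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Judgment:
    intuition: str         # fast System 1 output
    error_detected: bool   # did System 2 notice the intuition is irrational?
    final_belief: str      # what the person actually acts on

def respond(intuition: str, rational_answer: str,
            detects_error: bool, acquiesces: bool) -> Judgment:
    # Classic corrective model: detection implies correction.
    # Refined model: a detected error is corrected only if the person
    # does not acquiesce to the powerful intuition.
    if detects_error and not acquiesces:
        return Judgment(intuition, True, rational_answer)  # System 2 overrides
    return Judgment(intuition, detects_error, intuition)   # kept, knowingly or not

# Superstition case: the belief is recognized as irrational yet retained.
j = respond(intuition="tempting fate makes bad outcomes likelier",
            rational_answer="outcomes do not depend on 'tempting fate'",
            detects_error=True, acquiesces=True)
print(j.error_detected, "->", j.final_belief)   # True -> the superstition stands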

Wednesday, August 5, 2015

‘What would I eliminate if I had a magic wand? Overconfidence’

The psychologist and bestselling author of Thinking, Fast and Slow reveals his new research and talks about prejudice, fleeing the Nazis, and how to hold an effective meeting

By David Shariatmadari
The Guardian
Originally posted on July 18, 2015

Here is an excerpt:

What’s fascinating is that Kahneman’s work explicitly swims against the current of human thought. Not even he believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite. It is the bias he says he would most like to eliminate if he had a magic wand. But it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”.

The entire article is here.

Monday, March 9, 2015

Debate heats up over safety of electronic health records

By Jayne O'Donnell and Laura Ungar
USA Today
Originally posted February 3, 2015

Department of Health and Human Services officials said Tuesday that the safety benefits of electronic health records far outweigh any potential problems, but critics say regulators are pushing health care providers to use them while downplaying the risks to patients.

"This transition to electronic health records has led to far better safety than (it has) created new problems," said Andy Gettinger, an physician who heads health information technology (HIT) safety at HHS, at a government-sponsored conference here.

The entire article is here.

Tuesday, January 13, 2015

The retraction war

By Jill Neimark
Aeon Magazine
Originally published December 23, 2014

Here is an excerpt:

Retraction was meant to be a corrective for any mistakes or occasional misconduct in science but it has, at times, taken on a superhero persona instead. Like Superman, retraction can be too powerful, wiping out whole careers with a single blow. Yet it is also like Clark Kent, so mild it can be ignored while fraudsters continue publishing and receiving grants. The process is so fraught that just 5 per cent of scientific misconduct ever results in retraction, leaving an abundance of error in play to obfuscate the facts.

Scientists are increasingly aware of the amount of bad science out there – the word ‘reproducibility’ has become a kind of rallying cry for those who would reform science today. How can we ensure that studies are sound and can be reproduced by other scientists in separate labs?

The entire article is here.

Wednesday, April 16, 2014

Statistical Flaw Punctuates Brain Research in Elite Journals

By Gary Stix
Scientific American
Originally published March 27, 2014

Here is an excerpt:

That is the message of a new analysis in Nature Neuroscience that shows that more than half of 314 articles on neuroscience in elite journals during an 18-month period failed to take adequate measures to ensure that statistically significant study results were not, in fact, erroneous. Consequently, at least some of the results from papers in journals like Nature, Science, Nature Neuroscience and Cell were likely to be false positives, even after going through the arduous peer-review gauntlet.

The entire article is here.
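
The excerpt does not name the specific flaw, but a well-known error of exactly this kind is concluding that two effects differ because one is statistically significant and the other is not, rather than testing the difference directly. A small Python simulation, with all numbers invented for illustration, shows how often that shortcut misfires even when two groups have identical true effects:

# A hedged, generic illustration of one well-known flaw of this kind:
# declaring that two effects differ because one is significant and the
# other is not, instead of testing the difference directly. All numbers
# (sample size, effect size, simulation count) are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, sims = 20, 0.5, 10_000
fallacy_hits = 0   # runs where the 'one significant, one not' pattern appears
diff_sig = 0       # runs where the correct direct test finds a difference

for _ in range(sims):
    a = rng.normal(true_effect, 1.0, n)   # both groups share the SAME true effect
    b = rng.normal(true_effect, 1.0, n)
    p_a = stats.ttest_1samp(a, 0).pvalue
    p_b = stats.ttest_1samp(b, 0).pvalue
    if (p_a < 0.05) != (p_b < 0.05):      # one significant, one not
        fallacy_hits += 1
    if stats.ttest_ind(a, b).pvalue < 0.05:
        diff_sig += 1

print(f"'one significant, one not' pattern: {fallacy_hits / sims:.1%}")
print(f"direct test of the difference:     {diff_sig / sims:.1%}")

With these made-up settings the misleading pattern shows up in roughly half the runs, while the direct test stays near the nominal 5 per cent: a significant result sitting next to a non-significant one is not, by itself, evidence of a difference.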

Tuesday, December 10, 2013

Could a brain scan diagnose you as a psychopath?

A US neuroscientist claims he has found evidence of psychopathy in his own brain activity

By Chris Chambers
The Guardian
Originally published November 25, 2013

Here is an excerpt:

This isn’t the first time we’ve heard from Fallon. In addition to the fact that his claims haven't been published in peer-reviewed journals, here are three reasons why we should take what he says with a handful of salt.

One of the most obvious mistakes in Fallon’s reasoning is called the fallacy of reverse inference. His argument goes like this: areas of the brain called the ventromedial prefrontal cortex and orbitofrontal cortex are important for empathy and moral reasoning. At the same time, empathy and moral reasoning are lost or impaired in many psychopaths. So, people who show reduced activity in these regions must be psychopaths.

The flaw with this argument – as Fallon himself must know – is that there is no one-to-one mapping between activity in a given brain region and complex abilities such as empathy. There is no empathy region and there is no psychopath switch. If you think of the brain as a toolkit, these parts of the brain aren’t like hammers or screwdrivers that perform only one task. They’re more like Swiss army knives that have evolved to support a range of different abilities. And just as a Swiss army knife isn’t only a bottle opener, the ventromedial prefrontal cortex isn’t only associated with empathy and moral judgements. It is also involved in decision-making, sensitivity to reward, memory, and predicting the future.

The entire article is here.
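
Chambers's reverse-inference point can be made concrete with Bayes' rule: knowing that psychopaths tend to show reduced activity in a region tells you very little about whether someone with reduced activity is a psychopath. A minimal Python sketch, with every probability invented for illustration:

# A toy Bayes'-rule calculation; every probability below is invented for
# illustration, none comes from the article. Even if reduced activity in
# these regions were very common among psychopaths, the low base rate of
# psychopathy and the regions' many other roles (decision-making, reward,
# memory) keep the reverse inference weak.

base_rate = 0.01              # P(psychopath), assumed
p_reduced_given_psych = 0.90  # P(reduced activity | psychopath), assumed
p_reduced_given_not = 0.20    # P(reduced activity | not a psychopath), assumed

p_reduced = (p_reduced_given_psych * base_rate
             + p_reduced_given_not * (1 - base_rate))
posterior = p_reduced_given_psych * base_rate / p_reduced
print(f"P(psychopath | reduced activity) = {posterior:.3f}")   # about 0.043

Under these assumptions, a person showing reduced activity has only about a 4 per cent chance of being a psychopath, which is why reduced activity alone cannot diagnose anyone.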

Conspiracy theories: Why we believe the unbelievable

By Michael Shermer
The Los Angeles Times
Originally posted on November 26, 2013

Here is an excerpt:

Why do so many people refuse to accept this simple and obvious conclusion? The answer: psychology.

There are three psychological effects at work here, starting with "cognitive dissonance," or the discomfort felt when holding two ideas that are not in harmony. We attempt to reduce the dissonance by altering one of the ideas to be in accord with the other. In this case, the two discordant ideas are 1) JFK as one of the most powerful people on Earth who was 2) killed by Lee Harvey Oswald, a lone loser, a nobody. Camelot brought down by a curmudgeon.

That doesn't feel right. To balance the scale, conspiracy elements are stacked onto the Oswald side: the CIA, the FBI, the KGB, the Mafia, Fidel Castro, Lyndon Johnson and, in Oliver Stone's telling in his film "JFK," the military-industrial complex.

Cognitive dissonance was at work shortly after Princess Diana's death, which was the result of drunk driving, speeding and no seat belt. But princesses are not supposed to die the way thousands of regular people die each year, so the British royal family, the British intelligence services and others had to be fingered as co-conspirators.

The entire story is here.

Thursday, November 21, 2013

Talking with Patients about Other Clinicians' Errors

By Thomas H. Gallagher, Michelle M. Mello, and others
The New England Journal of Medicine
Originally published November 6, 2013

Here is an excerpt:

The rationales for disclosing harmful errors to patients are compelling and well described. Nonetheless, multiple barriers, including embarrassment, lack of confidence in one's disclosure skills, and mixed messages from institutions and malpractice insurers, make talking with patients about errors challenging. Several distinctive aspects of disclosing harmful errors involving colleagues intensify the difficulties.

One challenge is determining what happened when a clinician was not directly involved in the event in question. He or she may have little firsthand knowledge about the event, and relevant information in the medical record may be lacking. Beyond this, potential errors exist on a broad spectrum ranging from clinical decisions that are “not what I would have done” but are within the standard of care to blatant errors that might even suggest a problem of professional competence or proficiency.

The entire article is here.

Thanks to Gary Schoener for this information.

Saturday, November 9, 2013

Trouble at the lab

Scientists like to think of science as self-correcting. To an alarming degree, it is not

The Economist
Originally posted October 19, 2013

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

The entire article is here.

Thursday, October 31, 2013

The ethics of admitting you messed up

By Janet D. Stemwedel | October 14, 2013
The Scientific American Blog
@docfreeride

Here is an excerpt:

Ethically speaking, mistakes are a problem because they cause harm, or because they result from a lapse in an obligation we ought to be honoring, or both. Thus, an ethical response to messing up ought to involve addressing that harm and/or getting back on track with the obligation we fell down on. What does this look like?

1. Acknowledge the harm. This needs to be the very first thing you do. To admit you messed up, you have to recognize the mess, with no qualifications. There it is.

2. Acknowledge the experiential report of the people you have harmed. If you're serious about sharing a world (which is what ethics is all about), you need to take seriously what the people with whom you're sharing that world tell you about how they feel. They have privileged access to their own lived experiences; you need to rely on their testimony of those lived experiences.

The entire article is here.

Sunday, May 26, 2013

Owning Our Mistakes

By Nate Kreuter
Inside Higher Ed - Career Advice
Originally published May 15, 2013

Some of the columns that I write here at Inside Higher Ed arise from a really basic formula. It goes something like this: I make a mistake at work. I realize my error, or am compelled by another party to realize it, and I take corrective action. Then I write a column addressing the mistake in general terms, in hopes of perhaps removing a little of the trial and error from this whole higher education gig for a reader or two. Somewhat less frequently I simply observe the mistake of another and then write a column. I probably couldn’t keep up with this column without the steady stream of mistakes I make myself. Maybe my mistakes are job security of a strange sort.

I probably could even use this venue to make a public promise regarding my mistakes to my colleagues in my department, college, university, and across my discipline. Here goes: I promise you all that I’ll screw up again one day. I don’t know exactly how and I don’t know exactly when, but I promise to bungle something. Maybe just in a small way. Maybe in a big way. Who knows?

But here’s what I also promise: I promise to own up to whatever mistakes I make as soon as I recognize them, to do everything in my power to correct them, and to do my damnedest not to repeat them. This is, I think and I hope, what it means to be a good colleague. I certainly would not ask a colleague for more, but I also expect no less.

If to err is human, then 'fessing up is humane. Humane for ourselves and humane for our fellows.

The entire post is here.

Saturday, September 10, 2011

Team Decisions Better for the Weary

by Robert Preidt
MedicineNet.com

Teamwork can help tired people avoid making poor decisions, a new study indicates.

Pilots, doctors and others in demanding professions can make dangerous errors when they're weary. But fatigued people who work as a team have better problem-solving skills than those who work alone, British researchers report.

They asked 171 army officer cadets, aged 18 to 24, at a weekend training exercise to solve a series of math problems. Some were tested before they began the training session and were rested, while others did the math problems at the end of the weekend when they were exhausted.

Individual cadets who were fatigued did far worse on the tests than those who were rested. However, teams of exhausted cadets did just as well as teams of rested cadets.

The study appears online in the Journal of Experimental Psychology: Applied.

"Teams appear to be more highly motivated to perform well, and team members can compare solutions to reach the best decision when they are fatigued. This appears to allow teams to avoid the inflexible thinking experienced by fatigued individuals," study author Daniel Frings, a senior lecturer in social psychology at London South Bank University, said in a journal news release.

In situations where fatigue is a concern, decisions should be made by teams rather than individuals if possible, the study concluded.

     +     +     +     +     +     +     +     +

This research supports the idea that group consultation can be very helpful for tired and overworked psychologists, especially when working with high-risk or clinically challenging patients.