Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, July 31, 2016

Neural mechanisms underlying the impact of daylong cognitive work on economic decisions

Bastien Blain, Guillaume Hollard, and Mathias Pessiglione
PNAS, 2016, 113(25), 6967–6972

Abstract

The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.

Significance

In evolved species, resisting the temptation of immediate rewards is a critical ability for the achievement of long-term goals. This self-control ability was found to rely on the lateral prefrontal cortex (LPFC), which also is involved in executive control processes such as working memory or task switching. Here we show that self-control capacity can be altered in healthy humans at the time scale of a workday, by performing difficult executive control tasks. This fatigue effect, manifested in choice impulsivity, was linked to reduced excitability of the LPFC following its intensive utilization over the day. Our findings might have implications for designing management strategies that would prevent daylong cognitive work from biasing economic decisions.

The research is here.
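As background, the "impulsivity in intertemporal choices" probed in this study is commonly summarized by a discounting parameter: the more steeply someone devalues delayed rewards, the more often they take the smaller immediate option. The sketch below uses the standard hyperbolic discounting model purely as a generic illustration; it is not necessarily the model Blain and colleagues fit to their data, and the amounts and delays are invented.

```python
# Minimal sketch (assumption: generic hyperbolic discounting, not necessarily
# the exact model used by Blain et al.) of how choice impulsivity is quantified.

def discounted_value(amount, delay_days, k):
    """Subjective value of a delayed reward under hyperbolic discounting.

    A larger discount rate k means steeper devaluation of delayed rewards,
    i.e. more impulsive choices.
    """
    return amount / (1.0 + k * delay_days)


def prefers_immediate(immediate_amount, delayed_amount, delay_days, k):
    """True if the smaller-but-immediate reward beats the larger-but-later one."""
    return immediate_amount > discounted_value(delayed_amount, delay_days, k)


# Example: 20 units now versus 50 units in 30 days, at increasing discount rates.
for k in (0.01, 0.1, 0.5):
    choice = "immediate" if prefers_immediate(20, 50, 30, k) else "delayed"
    print(f"k = {k:.2f}: prefers the {choice} reward")
```

In this framing, the reported rise in choice impulsivity after a day of hard executive work corresponds to steeper discounting (a larger k) in the hard-task group.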


Saturday, July 30, 2016

Sexual abuse by doctors sometimes goes unpunished

Associated Press
Originally published July 6, 2016

Sexual abuse by doctors against patients is surprisingly widespread, yet the fragmented medical oversight system shrouds offenders' actions in secrecy and allows many to continue to treat patients, an investigation by The Atlanta Journal-Constitution has found.

The AJC obtained and analyzed more than 100,000 disciplinary orders against doctors since 1999. Among those, the newspaper identified more than 3,100 doctors sanctioned after being accused of sexual misconduct. More than 2,400 of the doctors had violations involving patients. Of those, half still have active medical licenses today, the newspaper found.

These cases represent only a fraction of the instances in which doctors have been accused of sexually abusing patients. Many remain obscured, the newspaper said, because state regulators and hospitals sometimes handle sexual misconduct cases in secret. Also, some public records are so vaguely worded that patients would not be aware that a sexual offense occurred.

The article is here.

Friday, July 29, 2016

When Doctors Have Conflicts of Interest

By Mikkael A. Sekeres
The New York Times - Well Blog
Originally posted June 29, 2016

Here is an excerpt:

What if, instead, the drug for which she provided advice is already commercially available? How much is her likelihood of prescribing this medication – what we call a conflict of commitment – influenced by her having been given an honorarium by the manufacturer for her advice about this or another drug made by the same company?

We know already that doctors are influenced in their prescribing patterns even by tchotchkes like pens or free lunches. One recent study of almost 280,000 physicians who received over 63,000 payments, most of which were in the form of free meals worth under $20, showed that these doctors were more likely to prescribe the blood pressure, cholesterol or antidepressant medication promoted as part of that meal than other medications in the same class of drugs. Are these incentives really enough to encroach on our sworn obligation to do what’s best for our patients, irrespective of outside influences? Perhaps, and that’s the reason many hospitals ban them.

In both scenarios the doctor should, at the very least, have to disclose the conflict to patients, either on a website, where patients could easily view it, or by informing them directly, as my mother-in-law’s doctor did to her.

The article is here.

Doctors disagree about the ethics of treating friends and family

By Elisabeth Tracey
The Pulse
Originally published July 1, 2016

Here is an excerpt:

Gold says the guidelines are in place for good reason. One concern is that a physician may have inappropriate emotional investment in the care of a friend or family member.

"It may cloud your ability to make a good judgment, so you might treat them differently than you would treat a patient in your office," Gold says. "For example you might order extra tests for the family member that you wouldn't order for someone else."

Physicians may also avoid broaching uncomfortable topics with someone they know personally.

"Sometimes we're talking about sensitive issues," says Gold. "If someone has a sexually transmitted disease, it's very awkward with a family member to go into a lot of detail with them... even though with a patient you would have those discussions."

The article is here.

Thursday, July 28, 2016

Driverless Cars: Can There Be a Moral Algorithm?

By Daniel Callahan
The Hastings Center
Originally posted July 5, 2016

Here is an excerpt:

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver’s personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude by saying that a “moral algorithm” to take account of all these variations is needed, and that they “will need to tackle more intricate decisions than those considered in our survey.” As if there were not enough already.

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers–unlike politicians, philosophers, and opinionated uncles—don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don’t think we can do it. Maybe people in Greene’s professional tribe turn out exact algorithms with every dilemma they encounter. If so, we envy them for having all the traits of software engineers. No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers), each with different interests and values.

The article is here.
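To make concrete why Callahan doubts that an “exact” moral algorithm can be specified, here is a deliberately crude sketch of the utilitarian rule the excerpt alludes to. Everything in it is a hypothetical placeholder, especially the weight given to occupant versus pedestrian harm; nothing here comes from the article or from the researchers it discusses.

```python
# Toy illustration only (not from the article): even a crude utilitarian rule
# for a driverless car forces exact choices for contested moral weights.

from dataclasses import dataclass


@dataclass
class Outcome:
    """Predicted consequences of one candidate maneuver (hypothetical fields)."""
    label: str
    pedestrian_harm: float  # expected number of pedestrians seriously injured
    occupant_harm: float    # expected number of occupants seriously injured


# This single weight is exactly the kind of number Callahan doubts anyone can
# fix "exactly": how much occupant harm counts relative to pedestrian harm.
OCCUPANT_WEIGHT = 1.0  # 1.0 = strictly impartial; > 1.0 favors the car's occupants


def moral_cost(outcome):
    """Weighted expected harm of one maneuver."""
    return outcome.pedestrian_harm + OCCUPANT_WEIGHT * outcome.occupant_harm


def choose(outcomes):
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(outcomes, key=moral_cost)


# Example: swerve into a barrier (riskier for the occupant) or brake straight ahead.
options = [
    Outcome("swerve", pedestrian_harm=0.1, occupant_harm=0.6),
    Outcome("brake", pedestrian_harm=0.5, occupant_harm=0.1),
]
print(choose(options).label)  # "brake" at the impartial weight; lowering
                              # OCCUPANT_WEIGHT below 0.8 flips the choice to "swerve"
```

Even this toy version makes the point: the recommendation flips on a single contested weight, and neither survey data nor moral theory fixes that weight “exactly.”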

We live in a culture of mental health haves and have nots

Naomi Freundlich
KevinMD.com
Originally published July 4, 2016

Here is an excerpt:

Let’s start with enforcement. Multiple agencies oversee compliance with the parity laws, including state insurance boards, Medicaid, HHS or the Department of Labor, depending on how and where an individual is insured. Figuring out who to contact when there’s been a violation of parity laws can be difficult, especially when people are experiencing mental health problems.

Furthermore, although obvious discrepancies between behavioral and medical coverage are not all that common, according to Kaiser Health News, many insurers have figured out how to limit mental health costs through more subtle strategies that are harder to track. These include frequent and rigorous utilization review and so-called “fail first” therapies that require providers to try the least expensive therapies first even if they might not be the most effective. The KHN authors note, “Among the more murky areas is ‘medical necessity’ review — in which insurers decide whether a patient requires a certain treatment and at what frequency.”

A survey conducted by the National Alliance on Mental Illness found that patients were twice as likely to be denied mental health care (29 percent) based on “medical necessity” review as other medical care (14 percent).

The article is here.

Wednesday, July 27, 2016

Research fraud: the temptation to lie – and the challenges of regulation

Ian Freckelton
The Conversation
Originally published July 5, 2016

Most scientists and medical researchers behave ethically. However, in recent years, the number of high-profile scandals in which researchers have been exposed as having falsified their data raises the issue of how we should deal with research fraud.

There is little scholarship on this subject that crosses disciplines and engages with the broader phenomenon of unethical behaviour within the domain of research.

This is partly because disciplines tend to operate in their silos and because universities, in which researchers are often employed, tend to minimise adverse publicity.

When scandals erupt, embarrassment in a particular field is experienced for a short while – and researchers may leave their university. But few articles are published in scholarly journals about how the research fraud was perpetrated; how it went unnoticed for a significant period of time; and how prevalent the issue is.

The article is here.

Doctors have become less empathetic, but is it their fault?

By David Scales
Aeon Magazine
Originally posted July 4, 2016

Here is an excerpt:

The key resides in the nature of clinical empathy, which requires that the practitioner be truly present. That medical professional must be curious enough to cognitively and emotionally relate to a patient’s situation, perspective and feelings, and then communicate this understanding back to the patient.

At times, empathy’s impact seems more magical than biological. When empathy scores are higher, patients recover faster from the common cold, diabetics have better blood-sugar control, people adhere more closely to treatment regimens, and patients feel more enabled to tackle their illnesses. Empathetic physicians report higher personal wellbeing and are sued less often.

If the case for empathy is clear, the way to boost it remains murky indeed. New research shows that meditation and ‘mindful communication’ can increase a physician’s empathy, spawning a niche industry of training courses. Yet this preoccupation has missed the glaring deficits in the work environment, which squelch the human empathy that doctors possess.

The article is here.

Tuesday, July 26, 2016

The Paradox of Disclosure

By Sunita Sah
The New York Times
Originally published July 8, 2016

Here is an excerpt:

To some extent, they [disclosures] do work. Disclosing a conflict of interest — for example, a financial adviser’s commission or a physician’s referral fee for enrolling patients into clinical trials — often reduces trust in the advice.

But my research has found that people are still more likely to follow this advice because the disclosure creates increased pressure to follow the adviser’s recommendation. It turns out that people don’t want to signal distrust to their adviser or insinuate that the adviser is biased, and they also feel pressure to help satisfy their adviser’s self-interest. Instead of functioning as a warning, disclosure can become a burden on advisees, increasing pressure to take advice they now trust less.

The article is here.