Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, August 2, 2016

Competencies for Psychologists in the Domains of Religion and Spirituality

C. Vieten, S. Scammell, A. Pierce, R. Pilato, I. Ammondson, K. I. Pargament, & D. Lukoff
Spirituality in Clinical Practice, Vol. 3(2), Jun 2016, 92-114.

Abstract

Religion and spirituality are important aspects of human diversity that should receive adequate attention in cultural competence training for psychologists. Furthermore, spiritual and religious beliefs and practices are relevant to psychological and emotional well-being, and clinicians who are trained to sensitively address these domains in their clinical practice should be more effective. Our research team previously published a set of 16 religious and spiritual competencies based on a combination of focus group and survey research with the intent that they could be used to guide training. In the present study, we conducted a survey to determine whether these competencies would be acceptable to a broader population of practicing clinicians. Results indicate a large degree of support for the proposed competencies. Between 73.0% and 94.1% of respondents agreed that psychologists should receive training and demonstrate competence in each of the 16 areas. The majority (52.2%–80.7%) indicated that they had received little or no training, and between 29.7% and 58.6% had received no training at all, in these competencies. We conclude with recommendations for integrating these religious and spiritual competencies more fully into clinical training and practice.

The article is here.

Moral Motivation

Rosati, Connie S.
The Stanford Encyclopedia of Philosophy (Fall 2016 Edition)

In our everyday lives, we confront a host of moral issues. Once we have deliberated and formed judgments about what is right or wrong, good or bad, these judgments tend to have a marked hold on us. Although in the end, we do not always behave as we think we ought, our moral judgments typically motivate us, at least to some degree, to act in accordance with them. When philosophers talk about moral motivation, this is the basic phenomenon that they seek to understand. Moral motivation is an instance of a more general phenomenon—what we might call normative motivation—for our other normative judgments also typically have some motivating force. When we make the normative judgment that something is good for us, or that we have a reason to act in a particular way, or that a specific course of action is the rational course, we also tend to be moved. Many philosophers have regarded the motivating force of normative judgments as the key feature that marks them as normative, thereby distinguishing them from the many other judgments we make. In contrast to our normative judgments, our mathematical and empirical judgments, for example, seem to have no intrinsic connection to motivation and action. The belief that an antibiotic will cure a specific infection may move an individual to take the antibiotic, if she also believes that she has the infection, and if she either desires to be cured or judges that she ought to treat the infection for her own good. All on its own, however, an empirical belief like this one appears to carry with it no particular motivational impact; a person can judge that an antibiotic will most effectively cure a specific infection without being moved one way or another.

The entry is here.

Monday, August 1, 2016

A Review of Research on Moral Injury in Combat Veterans

Sheila Frankfurt and Patricia Frazier
Military Psychology
http://dx.doi.org/10.1037/mil0000132

Abstract


The moral injury construct has been proposed to describe the suffering some veterans experience when they engage in acts during combat that violate their beliefs about their own goodness or the goodness of the world. These experiences are labeled transgressive acts to identify them as potentially traumatic experiences distinct from the fear-based traumas associated with posttraumatic stress disorder. The goal of this article was to review empirical and clinical data relevant to transgressive acts and moral injury, to identify gaps in the literature, and to encourage future research and interventions. We reviewed literature on 3 broad arms of the moral injury model proposed by Litz and colleagues (2009): (a) the definition, prevalence, and potential correlates of transgressive acts (e.g., military training and leadership, combat exposure, and personality), (b) the relations between transgressive acts and the moral injury syndrome (e.g., self-handicapping, self-injury, demoralization), and (c) some of the proposed mechanisms of moral injury genesis (e.g., shame, guilt, social withdrawal, and self-condemnation). We conclude with recommendations for future research for veterans suffering from moral injury.


Combat can require individuals to violate their consciences repeatedly. For several decades, clinicians have noted the psychological impact on veterans of engaging in killing, committing atrocities, and violating the rules of engagement (Haley, 1974). Despite this clinical attention, most psychological research on veterans' war wounds has focused on posttraumatic stress disorder (PTSD; American Psychiatric Association, 2013), a fear-based disorder that results from exposure to life-threatening events, rather than on the consequences of active participation in warfare.

The moral injury syndrome was proposed to describe the constellation of shame- and guilt-based disturbances that some combat veterans experience after engaging in wartime acts of commission (e.g., killing) or omission (e.g., failing to prevent atrocities; Litz et al., 2009). The moral injury syndrome was proposed to consist of the PTSD symptoms of intrusive memories, emotional numbing, and avoidance, along with collateral effects such as self-injury, demoralization, and self-handicapping (Litz et al., 2009).

The article is here.

Panel slams plan for human research rules

by David Malakoff
Science  08 Jul 2016:
Vol. 353, Issue 6295, pp. 106-107
DOI: 10.1126/science.353.6295.106

In a surprise development certain to fuel a long-running controversy, a prominent science advisory panel is calling on the U.S. government to abandon a nearly finished update to rules on protecting human research participants. It should wait for a new high-level commission, created by Congress and the president, to recommend improvements and then start over, the panel says.

Policy insiders say the recommendation, made 29 June by a committee of the National Academies of Sciences, Engineering, and Medicine that is examining ways to reduce the regulatory burden on academic scientists, is the political equivalent of a comic book hero trying to step in front of a speeding train in a bid to prevent a wreck.

It's not clear, however, whether the panel will succeed in stopping the regulatory express--or just get run over. Both the Obama administration, which has been pushing to complete the new rules this year, and lawmakers in Congress would need to back the halt--and so far they've been silent.

Still, many researchers and university groups are thrilled with the panel's recommendation, noting that they have repeatedly objected to some of the proposed rule changes as unworkable, but with little apparent impact.

The article is here.

Sunday, July 31, 2016

Neural mechanisms underlying the impact of daylong cognitive work on economic decisions

Bastien Blain, Guillaume Hollard, and Mathias Pessiglione
PNAS 2016 113 (25) 6967-6972

Abstract

The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.

Significance

In evolved species, resisting the temptation of immediate rewards is a critical ability for the achievement of long-term goals. This self-control ability was found to rely on the lateral prefrontal cortex (LPFC), which also is involved in executive control processes such as working memory or task switching. Here we show that self-control capacity can be altered in healthy humans at the time scale of a workday, by performing difficult executive control tasks. This fatigue effect, manifested in choice impulsivity, was linked to reduced excitability of the LPFC following its intensive utilization over the day. Our findings might have implications for designing management strategies that would prevent daylong cognitive work from biasing economic decisions.

The research is here.


Saturday, July 30, 2016

Sexual abuse by doctors sometimes goes unpunished

Associated Press
Originally published July 6, 2016

Sexual abuse by doctors against patients is surprisingly widespread, yet the fragmented medical oversight system shrouds offenders' actions in secrecy and allows many to continue to treat patients, an investigation by The Atlanta Journal-Constitution has found.

The AJC obtained and analyzed more than 100,000 disciplinary orders against doctors since 1999. Among those, the newspaper identified more than 3,100 doctors sanctioned after being accused of sexual misconduct. More than 2,400 of the doctors had violations involving patients. Of those, half still have active medical licenses today, the newspaper found.

These cases represent only a fraction of the incidents in which doctors have been accused of sexually abusing patients. Many remain obscured, the newspaper said, because state regulators and hospitals sometimes handle sexual misconduct cases in secret. Also, some public records are so vaguely worded that patients would not be aware that a sexual offense occurred.

The article is here.

Friday, July 29, 2016

When Doctors Have Conflicts of Interest

By Mikkael A. Sekeres
The New York Times - Well Blog
Originally posted June 29, 2016

Here is an excerpt:

What if, instead, the drug for which she provided advice were already commercially available? How much is her likelihood of prescribing this medication – what we call a conflict of commitment – influenced by her having been given an honorarium by the manufacturer for her advice about this or another drug made by the same company?

We know already that doctors are influenced in their prescribing patterns even by tchotchkes like pens or free lunches. One recent study of almost 280,000 physicians who received over 63,000 payments, most of which were in the form of free meals worth under $20, showed that these doctors were more likely to prescribe the blood pressure, cholesterol or antidepressant medication promoted as part of that meal than other medications in the same class of drugs. Are these incentives really enough to encroach on our sworn obligation to do what’s best for our patients, irrespective of outside influences? Perhaps, and that’s the reason many hospitals ban them.

In both scenarios the doctor should, at the very least, have to disclose the conflict to patients, either on a website, where patients could easily view it, or by informing them directly, as my mother-in-law’s doctor did to her.

The article is here.

Doctors disagree about the ethics of treating friends and family

By Elisabeth Tracey
The Pulse
Originally published July 1, 2016

Here is an excerpt:

Gold says the guidelines are in place for good reason. One concern is that a physician may have inappropriate emotional investment in the care of a friend or family member.

"It may cloud your ability to make a good judgment, so you might treat them differently than you would treat a patient in your office," Gold says. "For example you might order extra tests for the family member that you wouldn't order for someone else."

Physicians may also avoid broaching uncomfortable topics with someone they know personally.

"Sometimes we're talking about sensitive issues," says Gold. "If someone has a sexually transmitted disease, it's very awkward with a family member to go into a lot of detail with them... even though with a patient you would have those discussions."

The article is here.

Thursday, July 28, 2016

Driverless Cars: Can There Be a Moral Algorithm?

By Daniel Callahan
The Hastings Center
Originally posted July 5, 2016

Here is an excerpt:

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver's personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude by saying that a "moral algorithm" to take account of all these variations is needed, and that they "will need to tackle more intricate decisions than those considered in our survey." As if there were not enough already.

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers–unlike politicians, philosophers, and opinionated uncles—don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don't think we can do it.  Maybe people in Greene's professional tribe turn out exact algorithms with every dilemma they encounter.  If so, we envy them for having all the traits of software engineers.  No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers) each with different interests and values.
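The "moral algorithm" the researchers call for is easy to gesture at and hard to specify, and a deliberately toy sketch makes that concrete. Everything below is hypothetical: the maneuvers, probabilities, severities, and weights are invented for illustration and are not drawn from the article. A purely utilitarian rule just minimizes weighted expected harm, and the entire controversy Callahan describes lives in the choice of weights.

```python
# Toy illustration of a utilitarian "moral algorithm" for a crash dilemma.
# All maneuvers, probabilities, severities, and weights are hypothetical;
# choosing the weights is precisely the unresolved moral question.

def expected_harm(option, occupant_weight=1.0, pedestrian_weight=1.0):
    """Sum of P(injury) * severity * role weight over everyone affected."""
    total = 0.0
    for party in option["parties"]:
        w = occupant_weight if party["role"] == "occupant" else pedestrian_weight
        total += party["p_injury"] * party["severity"] * w
    return total

def choose_maneuver(options, **weights):
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(options, key=lambda o: expected_harm(o, **weights))

swerve = {"name": "swerve",  # veer off the road, risking the occupant
          "parties": [{"role": "occupant", "p_injury": 0.5, "severity": 0.5}]}
brake = {"name": "brake",    # brake in lane, risking the pedestrian
         "parties": [{"role": "pedestrian", "p_injury": 0.3, "severity": 1.0}]}

# An impartial utilitarian (equal weights) swerves: 0.25 < 0.30.
# A buyer who doubles the weight on their own safety brakes instead: 0.50 > 0.30.
print(choose_maneuver([swerve, brake])["name"])                       # swerve
print(choose_maneuver([swerve, brake], occupant_weight=2.0)["name"])  # brake
```

The arithmetic dictates the verdict, but nothing in the formalism says what the weights should be; that gap is where the disagreement between drivers, regulators, and manufacturers actually lives.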

The article is here.