Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, February 22, 2018

James Comey isn’t qualified for his new gig teaching ethics, experts explain

Olivia Goldhill
Quartz
Originally published January 27, 2018

Here is an excerpt:

“My entire professional life has been dedicated to ethics education. I’m disheartened by the fact that educational institutions hire people to teach ethics who really don’t have a background in ethics,” says Aine Donovan, director of the Ethics Institute at Dartmouth College.

Certainly, Comey’s own behavior as FBI director would form the basis of a strong case study, says Donovan. But Comey’s experience navigating a moral quandary is not sufficient qualification. “I’d rather have moral exemplars teaching an ethical leadership class than somebody who has even a whiff of controversy associated with them,” Donovan says. Moreover, Donovan notes, it seems Comey did not make the right moral choice at every stage. For example, Comey leaked documents about his conversations with Trump. “I’m highly skeptical that that would ever pass ethical muster,” she adds.

A “puzzling” choice

“There is much to be learned about [ethics from] studying Mr. Comey’s own conduct, but most of it is not positive,” Howard Prince II, who holds the Loyd Hackler Endowed Chair in Ethical Leadership at the University of Texas at Austin, writes in an email. Overall, Comey is “a puzzling choice” to teach ethical leadership, he adds.

The article is here.

NIH adopts new rules on human research, worrying behavioral scientists

William Wan
The Washington Post
Originally posted January 24, 2018

Last year, the National Institutes of Health announced plans to tighten its rules for all research involving humans — including new requirements for scientists studying human behavior — and touched off a panic.

Some of the country’s biggest scientific associations, including the American Psychological Association and the Federation of Associations in Behavioral and Brain Sciences, penned impassioned letters over the summer warning that the new policies could slow scientific progress, increase red tape, and present obstacles for researchers working in smaller labs with fewer financial and administrative resources to deal with the added requirements. More than 3,500 scientists signed an open letter to NIH director Francis Collins.

The new rules are scheduled to take effect Thursday. They will have a big impact on how research is conducted, especially in fields like psychology and neuroscience. NIH distributes more than $32 billion each year, making it the largest public funder of biomedical and health research in the world, and the rules apply to any NIH-supported work that studies human subjects and is evaluating the effects of interventions on health or behavior.

The article is here.

Wednesday, February 21, 2018

The Federal Right to Try Act of 2017—A Wrong Turn for Access to Investigational Drugs and the Path Forward

Alison Bateman-House and Christopher T. Robertson
JAMA Intern Med. Published online January 22, 2018.

In 2017, President Trump said that “one thing that’s always disturbed” him is that the US Food and Drug Administration (FDA) denies access to experimental drugs even “for a patient who’s terminal…[who] is not going to live more than four weeks [anyway].” Fueled by emotionally charged anecdotes recirculated by libertarian political activists, 38 states have passed Right to Try laws. In 2017, the US Senate approved a bill that would create a national law (Box). As of December 2017, the US House of Representatives was considering the bill.

The article is here.

Don’t look to the president for moral leadership

Julia Azari
vox.com
Originally posted February 19, 2018

President Trump’s reaction to last week’s school shooting in Parkland, Florida, has drawn heavy criticism.

His initial round of tweets, which reminded the country that the Florida shooter had been known to display “bad and erratic behavior” and that such behavior should be “reported to the authorities,” was not well received. Critics called the response “victim-blaming.” Survivors of the shooting were neither comforted nor inspired.

Of course, we live in a time of partisan polarization, and it’s easy to suggest that there are many Americans who are unlikely to respond positively to any message from President Trump. That’s probably true. But none other than liberal snowflake Ari Fleischer — press secretary to George W. Bush — offered a broader indictment: “Some of the biggest errors Pres. Trump has made are what he did NOT say. He did not immediately condemn the KKK after Charlottesville. He did not immediately condemn domestic violence or offer sympathy for Rob Porter’s ex-wives. He should speak today about the school shooting.” Trump did address the incident in a speech on Thursday.

(cut)

Anti-Trump Republican Rick Wilson tweeted on Sunday that Trump isn’t a president but a “moral stress test.” His speech on Thursday and his visit to Florida over the weekend appeared to impress very few people. At the time of this writing, the president’s response appears to have culminated in a series of tweets chastising the FBI for not pursuing reports about the Florida shooter and linking the FBI’s failure to its Russia investigation.

The article is here.

Tuesday, February 20, 2018

This Cat Sensed Death. What if Computers Could, Too?

Siddhartha Mukherjee
The New York Times
Originally published January 3, 2018

Here are two excerpts:

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.

(cut)

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.
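
A brief aside for technically minded readers: the “defined time window” Avati describes can be framed as an ordinary binary-classification problem, in which the label records whether death fell more than 3 but fewer than 12 months after the prediction date. The sketch below (Python with NumPy and scikit-learn) is purely illustrative, not the Stanford team’s method; the article describes a deep-learning system, and the feature names and data here are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for patient records: [age, admissions_last_year,
    # comorbidity_score]. Hypothetical features; the data carry no real signal.
    X = rng.normal(size=(1000, 3))

    # Label is 1 only when death falls inside the palliative-care "sweet spot":
    # more than 3 but fewer than 12 months after the prediction date.
    months_to_death = rng.uniform(0, 36, size=1000)
    y = ((months_to_death > 3) & (months_to_death < 12)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_train, y_train)

    # The model outputs a probability that a care team could threshold to
    # decide which charts merit human review.
    probs = clf.predict_proba(X_test)[:, 1]
    print("mean predicted probability:", round(probs.mean(), 3))

The only substantive part of the sketch is the label construction: the two cutoffs encode the “too much, too soon” and “too little, too late” trade-off described above.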

The article is here.

Death and the Self

Shaun Nichols, Nina Strohminger, Arun Rai, Jay Garfield
Cognitive Science (2018) 1–19

Abstract

It is an old philosophical idea that if the future self is literally different from the current self, one should be less concerned with the death of the future self (Parfit, 1984). This paper examines the relation between attitudes about death and the self among Hindus, Westerners, and three Buddhist populations (Lay Tibetan, Lay Bhutanese, and monastic Tibetans). Compared with other groups, monastic Tibetans gave particularly strong denials of the continuity of self, across several measures. We predicted that the denial of self would be associated with a lower fear of death and greater generosity toward others. To our surprise, we found the opposite. Monastic Tibetan Buddhists showed significantly greater fear of death than any other group. The monastics were also less generous than any other group about the prospect of giving up a slightly longer life in order to extend the life of another.

The article is here.

Monday, February 19, 2018

Culture and Moral Distress: What’s the Connection and Why Does It Matter?

Nancy Berlinger and Annalise Berlinger
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 608-616.

Abstract

Culture is learned behavior shared among members of a group and from generation to generation within that group. In health care work, references to “culture” may also function as code for ethical uncertainty or moral distress concerning patients, families, or populations. This paper analyzes how culture can be a factor in patient-care situations that produce moral distress. It discusses three common, problematic situations in which assumptions about culture may mask more complex problems concerning family dynamics, structural barriers to health care access, or implicit bias. We offer sets of practical recommendations to encourage learning, critical thinking, and professional reflection among students, clinicians, and clinical educators.

Here is an excerpt:

Clinicians’ shortcuts for identifying “problem” patients or “difficult” families might also reveal implicit biases concerning groups. Health care professionals should understand the difference between cultural understanding that helps them respond to patients’ needs and concerns and implicit bias expressed in “cultural” terms that can perpetuate stereotypes or obscure understanding. A way to identify biased thinking that may reflect institutional culture is to consider these questions about advocacy:

  1. Which patients or families does our system expect to advocate for themselves?
  2. Which patients or families would we perceive or characterize as “angry” or “demanding” if they attempted to advocate for themselves?
  3. Which patients or families do we choose to advocate for, and on what grounds?
  4. What is our basis for each of these judgments?

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

Here is an excerpt:

Morally correct behavior is rarely a straightforward matter of doing the right thing: medical students’ understandings of it differ from one individual to another. This is partly because moral judgments frequently concern decisions about behaviors that might entail some form of harm to another, and different individuals hold different perspectives about moral trade-offs (i.e., how to decide between two courses of action when the consequences of both have morally undesirable effects). It is partly because the majority of human behavior arises within a person-situation interaction. Indeed, moral “flexibility” suggests that though we are motivated to do the right thing, any moral principle can bring forth a variety of context-dependent moral judgments and decisions. Moral rules and principles are abstract ideas—rather than facts—and these ideas need to be operationalized and applied to specific situations. Each situation will have different affordances highlighting one facet or another of any given moral value. Thus, when faced with morally dubious situations—such as being asked to participate in lapses of patient consent by senior clinicians during workplace learning events—medical students’ subsequent actions (compliance or resistance) differ.

The article is here.

Sunday, February 18, 2018

Responsibility and Consciousness

Matt King and Peter Carruthers

1. Introduction

Intuitively, consciousness matters for responsibility. A lack of awareness generally provides the basis for an excuse, or at least for blameworthiness to be mitigated. If you are aware that what you are doing will unjustifiably harm someone, it seems you are more blameworthy for doing so than if you harm them without awareness. There is thus a strong presumption that consciousness is important for responsibility. The position we stake out below, however, is that consciousness, while relevant to moral responsibility, isn’t necessary.

The background for our discussion is an emerging consensus in the cognitive sciences that a significant portion, perhaps even a substantial majority, of our mental lives takes place unconsciously. For example, routine and habitual actions are generally guided by the so-called “dorsal stream” of the visual system, whose outputs are inaccessible to consciousness (Milner & Goodale 1995; Goodale 2014). And there has been extensive investigation of the processes that accompany conscious as opposed to unconscious forms of experience (Dehaene 2014). While there is room for disagreement at the margins, there is little doubt that our actions are much more influenced by unconscious factors than might intuitively seem to be the case. At a minimum, therefore, theories of responsibility that ignore the role of unconscious factors supported by the empirical data proceed at their own peril (King & Carruthers 2012). The crucial area of inquiry for those interested in the relationship between consciousness and responsibility concerns the relative strength of that relationship and the extent to which it should be impacted by findings in the empirical sciences.

The paper is here.