Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, May 24, 2016

Junk Science on Trial

Jordan Smith
The Intercept
Originally posted May 6, 2016

Here is an excerpt:

Expert Infallibility?

The Supreme Court's opinion makes little sense if you consider it critically. Under the court's reasoning, a conviction could be overturned if, for example, an eyewitness to a crime later realized he was wrong about what he saw. But if an expert who testified that DNA evidence belonged to one person later realized that the DNA belonged to someone else, nothing could be done to remedy that error, even if it was responsible for a conviction.

In the wake of that opinion, and with Richards's case firmly in mind, lawyers from across the state asked for a change in law -- one that would make it clear that a conviction can be overturned when experts recant their prior testimony as a result of scientific or technological advances.

Known as a junk science statute, the Bill Richards Bill changed the state penal code to address problematic forensic practices in individual criminal cases. Faulty forensics have been implicated in nearly half of all DNA exonerations, according to the Innocence Project, and in roughly 23 percent of all wrongful convictions, according to the National Registry of Exonerations. California's bill, which passed with bipartisan support, is only the second such statute in the country (following one in Texas), and its passage propelled the Richards case back to the Supreme Court for further consideration.

The article is here.

Pentagon perpetuates stigma of mental health counseling, study says

Gregg Zoroya
USA Today
Originally published May 5, 2016

Even as troop suicides remain at record levels, the Pentagon has failed to persuade servicemembers to seek counseling without fears that they'll damage their careers, a stinging government review concludes.

Despite six major Pentagon or independent studies from 2007 through 2014 that urged action to end the persistent stigma linked to mental health counseling, little has changed, analysts said in the April report by the Government Accountability Office.

The article is here.

Monday, May 23, 2016

Our research was key to the 10,000-hour rule, but here’s what got oversimplified

Anders Ericsson and Robert Pool
Salon.com
Originally posted April 16, 2016

Here is an excerpt:

Research has shown this to be true in field after field. It generally takes about ten years of intense study to become a chess grandmaster. Authors and poets have usually been writing for more than a decade before they produce their best work, and it is generally a decade or more between a scientist’s first publication and his or her most important publication — and this is in addition to the years of study before that first published research. A study of musical composers by the psychologist John R. Hayes found that it takes an average of twenty years from the time a person starts studying music until he or she composes a truly excellent piece of music, and it is generally never less than ten years. Gladwell’s ten-thousand-hour rule captures this fundamental truth — that in many areas of human endeavor it takes many, many years of practice to become one of the best in the world — in a forceful, memorable way, and that’s a good thing.

On the other hand, emphasizing what it takes to become one of the best in the world in such competitive fields as music, chess, or academic research leads us to overlook what we believe to be the more important lesson from the study of the violin students. When someone says that it takes ten thousand — or however many — hours to become really good at something, it puts the focus on the daunting nature of the task. While some may take this as a challenge — as if to say, “All I have to do is spend ten thousand hours working on this, and I’ll be one of the best in the world!”—many will see it as a stop sign: “Why should I even try if it’s going to take me ten thousand hours to get really good?” As Dogbert observed in one “Dilbert” comic strip, “I would think a willingness to practice the same thing for ten thousand hours is a mental disorder.”

The article is here.

Why this lab-grown human embryo has reignited an old ethical debate

By Patrick Monahan
Science
May 4, 2016

It’s easy to obey a rule when you don’t have the means to break it. For decades, many countries have permitted human embryos to be studied in the laboratory only up to 14 days after their creation by in vitro fertilization. But—as far as anyone knows—no researcher has ever come close to the limit. The point of implantation, when the embryo attaches to the uterus about 7 days after fertilization, has been an almost insurmountable barrier for researchers culturing human embryos.

Now, two teams report growing human embryos about a week past that point. Beyond opening a new window on human biology, such work could help explain early miscarriages caused by implantation gone awry. As a result, some scientists and bioethicists contend that it’s time to revisit the so-called 14-day rule. But that won’t be welcomed by those who consider the rule to have a firm moral grounding—or by those who oppose any research on human embryos.

The article is here.

Sunday, May 22, 2016

Is Deontology a Moral Confabulation?

Emilian Mihailov
Neuroethics
April 2016, Volume 9, Issue 1, pp 1-13

Abstract

Joshua Greene has put forward the bold empirical hypothesis that deontology is a confabulation of moral emotions. Deontological philosophy does not stem from "true" moral reasoning, but from emotional reactions, backed up by post hoc rationalizations which play no role in generating the initial moral beliefs. In this paper, I will argue against the confabulation hypothesis. First, I will highlight several points in Greene’s discussion of confabulation, and identify two possible models. Then, I will argue that the evidence does not illustrate the relevant model of deontological confabulation. In fact, I will make the case that deontology is unlikely to be a confabulation because alarm-like emotions, which allegedly drive deontological theorizing, are resistant to confabulation. I will end by clarifying what kinds of claims the confabulation data can support. The upshot of the final section is that the confabulation data cannot be used to undermine deontological theory in itself and, ironically, that if one commits to the claim that a deontological justification is a confabulation in a particular case, the data suggest that deontology in general has prima facie validity.

The article is here.

Saturday, May 21, 2016

Ghosting on Freud: why breaking up with a therapist is so tricky

Alana Massey
The Guardian
Originally posted May 2, 2016

Here is an excerpt:

Carole Lieberman, a psychiatrist in California, said that patients need to take on some responsibility in letting therapists know when things aren’t working out. “Patients need to come for at least one more session when they are thinking of breaking up with their therapist. Oftentimes, the therapist can resolve a misunderstanding that occurred, or help them to understand why it’s important for them to delve into their past. Even if the patient still decides to leave, they will do so with more insight into themselves and with an open door to return.”

But this expectation demands a great deal, too. Is it really the job of the patient to offer tips and tricks on how the therapist can improve their approach, particularly if the patient is already in a vulnerable or wounded state? Therapists who expect everyone to be experts at the therapeutic process are going to miss or dismiss the patients who need therapy the most.

The article is here.

Friday, May 20, 2016

Making it moral: Merely labeling an attitude as moral increases its strength

Andrew Luttrell, Richard E. Petty, Pablo Briñol, & Benjamin C. Wagner
Journal of Experimental Social Psychology
Available online 27 April 2016

Abstract

Prior research has shown that self-reported moral bases of people's attitudes predict a range of important consequences, including attitude-relevant behavior and resistance in the face of social influence. Although previous studies typically rely on self-report measures of such bases, the present research tests the possibility that people can be induced to view their own attitudes as grounded in moral bases. This perception alone leads to outcomes associated with strong attitudes. In three experiments, participants were led to view their attitudes as grounded in moral or non-moral bases. Merely perceiving a moral (vs. non-moral) basis to one's attitudes led them to show greater correspondence with relevant behavioral intentions (Experiment 1) and become less susceptible to change following a persuasive message (Experiments 2 and 3). Moreover, these effects were independent of any other established indicators of attitude strength.

Highlights

  • Mere perceptions of moral (vs. non-moral) attitude bases were manipulated.
  • Perceiving a moral basis increased attitude–intention consistency.
  • Perceiving a moral basis also led to greater resistance to persuasion.
  • These effects were not mediated by other established attitude strength indicators.

The article is here.

Sleep Deprivation and Advice Taking

Jan Alexander Häusser, Johannes Leder, Charlene Ketturat, Martin Dresler & Nadira Sophie Faber
Scientific Reports 6, Article number: 24386 (2016)
doi:10.1038/srep24386

Abstract

Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep-deprived participants showed increased advice taking. An interaction of condition and competency of advisor, and further post-hoc analyses, revealed that this effect was more pronounced for the medium-competency advisor compared to the high-competency advisor. Furthermore, sleep-deprived participants benefited more from an advisor of high competency, in terms of stronger improvement in judgmental accuracy, than well-rested participants.
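
The abstract does not spell out how "advice taking" is quantified. A common measure in judge–advisor studies of this kind is the weight of advice (WOA): how far a judge moves from an initial estimate toward an advisor's suggestion. The sketch below is a minimal illustration of that measure applied to the study's 2 × 2 design; the function, the trial data, and the numbers are hypothetical and invented for illustration, not the authors' code or results.

```python
import numpy as np

# Weight of advice (WOA): (final - initial) / (advice - initial).
# 0 means the advice was ignored; 1 means it was adopted fully.
def weight_of_advice(initial, advice, final):
    """Return the weight of advice for a single estimation trial."""
    if advice == initial:          # advice equals own estimate: WOA undefined
        return np.nan
    return (final - initial) / (advice - initial)

# Toy trials for the 2 (sleep deprivation) x 2 (advisor competency) design.
# Values are made up purely to show the computation.
trials = [
    # (condition, advisor, initial estimate, advice, final estimate)
    ("rested",   "medium", 100, 140, 110),
    ("rested",   "high",   100, 140, 120),
    ("deprived", "medium", 100, 140, 130),
    ("deprived", "high",   100, 140, 128),
]

for condition, advisor, initial, advice, final in trials:
    woa = weight_of_advice(initial, advice, final)
    print(f"{condition:8s} / {advisor:6s} advisor: WOA = {woa:.2f}")
```

Under this (assumed) measure, the reported pattern would appear as higher average WOA in the sleep-deprived cells, with the difference between conditions larger for the medium-competency advisor than for the high-competency one.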

The article is here.

Thursday, May 19, 2016

Anticipating artificial intelligence

Editorial Board
Nature
Originally posted April 26, 2016

Here is an excerpt:

So, what are the risks? Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours. This extreme scenario, which cannot be discounted, is what captures most popular attention. But it is misleading to treat it as the only concern worth worrying about.

There are more immediate risks, even with narrow aspects of AI that can already perform some tasks better than humans can. Few foresaw that the Internet and other technologies would open the way for mass, and often indiscriminate, surveillance by intelligence and law-enforcement agencies, threatening principles of privacy and the right to dissent. AI could make such surveillance more widespread and more powerful.

Then there are cybersecurity threats to smart cities, infrastructure and industries that become overdependent on AI — and the all too clear threat that drones and other autonomous offensive weapons systems will allow machines to make lethal decisions alone.

The article is here.