Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 11, 2016

Facebook has a new process for discussing ethics. But is it ethical?

Anna Lauren Hoffman
The Guardian
Originally posted Friday 17 June 2016

Here is an excerpt:

Tellingly, Facebook’s descriptions of procedure and process offer little insight into the values and ideals that drive its decision-making. Instead, the authors offer vague, hollow and at times conflicting statements such as noting how its reviewers “consider how the research will improve our society, our community, and Facebook”.

This seemingly innocuous statement raises more ethical questions than it answers. What does Facebook think an “improved” society looks like? Who or what constitutes “our community?” What values inform their ideas of a better society?

Facebook sidesteps this completely by saying that ethical oversight necessarily involves subjectivity and a degree of discretion on the part of reviewers – yet simply noting that subjectivity is unavoidable does not negate the fact that explicit discussion of ethical values is important.

The article is here.

Ethical Considerations Prompt New Telemedicine Rules

American Medical Association
Press Release
Originally released June 13, 2016

With the increasing use of telemedicine and telehealth technologies, delegates at the 2016 AMA Annual Meeting adopted new policy that outlines ethical ground rules for physicians using these technologies to treat patients.

The guidelines

The policy, based on a report from the AMA Council on Ethical and Judicial Affairs, notes that while physicians’ fundamental ethical responsibilities don’t change when providing telemedicine, new technology has given rise to the need for further guidance.

“Telehealth and telemedicine are another stage in the ongoing evolution of new models for the delivery of care and patient-physician interactions,” AMA Board Member Jack Resneck, MD, said in a news release. “The new AMA ethical guidance notes that while new technologies and new models of care will continue to emerge, physicians’ fundamental ethical responsibilities do not change.”

The press release is here.

Sunday, July 10, 2016

Deontology Or Trustworthiness?

A Conversation Between Molly Crockett and Daniel Kahneman
Edge.org
June 16, 2016

Here is an excerpt:

DANIEL KAHNEMAN:  Molly, you started your career as a neuroscientist, and you still are. Yet, much of the work that you do now is about moral judgment. What journey got you there?            

MOLLY CROCKETT:  I've always been interested in how we make decisions. In particular, why is it that the same person will sometimes make a decision that follows one set of principles or rules, and other times make a wildly different decision? These intra-individual variations in decision making have always fascinated me, specifically in the moral domain, but also in other kinds of decision making, more broadly.

I got interested in brain chemistry because this seemed to be a neural implementation or solution for how a person could be so different in their disposition across time, because we know brain chemistry is sensitive to aspects of the environment. I picked that methodology as a tool with which to study why our decisions can shift so much, even within the same person; morality is one clear demonstration of how this happens.            

KAHNEMAN:  Are you already doing that research, connecting moral judgment to chemistry?

CROCKETT:  Yes. One of the first entry points into the moral psychology literature during my PhD was a study where we gave people different kinds of psychoactive drugs. We gave people an antidepressant drug that affected their serotonin, or an ADHD drug that affected their noradrenaline, and then we looked at how these drugs affected the way people made moral judgments. In that literature, you can compare two different schools of moral thought for how people ought to make moral decisions.

The entire transcript, video, and audio are here.

Saturday, July 9, 2016

Facebook Offers Tools for Those Who Fear a Friend May Be Suicidal

By Mike Isaac
The New York Times
June 14, 2016

Here is an excerpt:

With more than 1.65 billion members worldwide posting regularly about their behavior, Facebook is planning to take a more direct role in stopping suicide. On Tuesday, in the biggest step by a major technology company to incorporate suicide prevention tools into its platform, the social network introduced mechanisms and processes to make it easier for people to help friends who post messages about suicide or self-harm. With the new features, people can flag friends’ posts that they deem suicidal; the posts will be reviewed by a team at the social network that will then provide language to communicate with the person who is at risk, as well as information on suicide prevention.

The timing coincides with a surge in suicide rates in the United States to a 30-year high. The increase has been particularly steep among women and middle-aged Americans, reflecting widespread desperation. Last year, President Obama declared a World Suicide Prevention Day in September, calling on people to recognize mental health issues early and to reach out to support one another.

Friday, July 8, 2016

Could a device tell your brain to make healthy choices?

by Yasmin Anwar
Futurity
Originally posted June 13, 2016

New research suggests it’s possible to detect when our brain is making a decision and nudge it to make the healthier choice.

In recording moment-to-moment deliberations by macaque monkeys over which option is likely to yield the most fruit juice, scientists have captured the dynamics of decision-making down to millisecond changes in neurons in the brain’s orbitofrontal cortex.

The article is here.

Thursday, July 7, 2016

The Mistrust of Science

By Atul Gawande
The New Yorker
Originally posted June 10, 2016

Here are two excerpts:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

(cut)

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.

The article is here.

Secrets and lies: Faked data and lack of transparency plague global drug manufacturing

By Kelly Crowe
CBC News 
Originally posted: June 10, 2016

Here is an excerpt:

In another case, when the FDA responded to complaints from U.S. manufacturers about impurities in raw ingredients from a Chinese company and asked to see the data, inspectors discovered it had been deleted and the audit trail disabled.

Two companies on Health Canada's watch list have been caught falsifying the source of their active pharmaceutical ingredient. Both claimed to have made the raw material, but actually purchased it from somewhere else.

There's tragic proof that data integrity matters. In 2008, 19 people in the U.S. died and hundreds more were sickened by a contaminated blood thinner made from a raw material the FDA believes had been tampered with at its source in China.

The article is here.

Wednesday, July 6, 2016

Intrinsic honesty and the prevalence of rule violations across societies

Simon Gächter & Jonathan F. Schulz
Nature 531, 496–499 (24 March 2016)
doi:10.1038/nature17160

Abstract

Deception is common in nature and humans are no exception. Modern societies have created institutions to control cheating, but many situations remain where only intrinsic honesty keeps people from cheating and violating rules. Psychological, sociological and economic theories suggest causal pathways to explain how the prevalence of rule violations in people’s social environment, such as corruption, tax evasion or political fraud, can compromise individual intrinsic honesty. Here we present cross-societal experiments from 23 countries around the world that demonstrate a robust link between the prevalence of rule violations and intrinsic honesty. We developed an index of the ‘prevalence of rule violations’ (PRV) based on country-level data from the year 2003 of corruption, tax evasion and fraudulent politics. We measured intrinsic honesty in an anonymous die-rolling experiment. We conducted the experiments with 2,568 young participants (students) who, due to their young age in 2003, could not have influenced PRV in 2003. We find individual intrinsic honesty is stronger in the subject pools of low PRV countries than those of high PRV countries. The details of lying patterns support psychological theories of honesty. The results are consistent with theories of the cultural co-evolution of institutions and values, and show that weak institutions and cultural legacies that generate rule violations not only have direct adverse economic consequences, but might also impair individual intrinsic honesty that is crucial for the smooth functioning of society.

The article is here.

Tuesday, July 5, 2016

How scientists fool themselves – and how they can stop

Regina Nuzzo
Nature 526, 182–185 (08 October 2015)
doi:10.1038/526182a

Here is an excerpt:

This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.

The article is here.

Editor's note: These biases also apply to clinicians who use research or their own theories about how and why psychotherapy works.