Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, June 10, 2017

How Gullible Are We? A Review of the Evidence From Psychology and Social Science.

Hugo Mercier
Review of General Psychology, May 18, 2017

Abstract

A long tradition of scholarship, from ancient Greece to Marxism or some contemporary social psychology, portrays humans as strongly gullible—wont to accept harmful messages by being unduly deferent. However, if humans are reasonably well adapted, they should not be strongly gullible: they should be vigilant toward communicated information. Evidence from experimental psychology reveals that humans are equipped with well-functioning mechanisms of epistemic vigilance. They check the plausibility of messages against their background beliefs, calibrate their trust as a function of the source’s competence and benevolence, and critically evaluate arguments offered to them. Even if humans are equipped with well-functioning mechanisms of epistemic vigilance, an adaptive lag might render them gullible in the face of new challenges, from clever marketing to omnipresent propaganda. I review evidence from different cultural domains often taken as proof of strong gullibility: religion, demagoguery, propaganda, political campaigns, advertising, erroneous medical beliefs, and rumors. Converging evidence reveals that communication is much less influential than often believed—that religious proselytizing, propaganda, advertising, and so forth are generally not very effective at changing people’s minds. Beliefs that lead to costly behavior are even less likely to be accepted. Finally, it is also argued that most cases of acceptance of misguided communicated information do not stem from undue deference, but from a fit between the communicated information and the audience’s preexisting beliefs.

The article is here.

Friday, June 9, 2017

Sapolsky on the biology of human evil

Sean Illing
Vox.com
Originally posted May 23, 2017

Here is an excerpt:

The key question of the book — why are we the way we are? — is explored from a multitude of angles, and the narrative structure helps guide the reader. For instance, Sapolsky begins by examining a person’s behavior in the moment (why we recoil or rejoice or respond aggressively to immediate stimuli) and then zooms backward in time, following the chain of antecedent causes back to our evolutionary roots.

For every action, Sapolsky shows, there are several layers of causal significance: There’s a neurobiological cause and a hormonal cause and a chemical cause and a genetic cause, and, of course, there are always environmental and historical factors. He synthesizes the research across these disciplines into a coherent, readable whole.

In this interview, I talk with Sapolsky about the paradoxes of human nature, why we’re capable of both good and evil, whether free will exists, and why symbols have become so central to human life.

The article and interview are here.

Are practitioners becoming more ethical?

By Rebecca Clay
The Monitor on Psychology
May 2017, Vol 48, No. 5
Print version: page 50

The results of research presented at APA's 2016 Annual Convention suggest that today's practitioners are less likely than their counterparts 30 years ago to commit such ethical violations as kissing a client, altering diagnoses to meet insurance criteria, and treating homosexuality as pathological.

The research, conducted by psychologists Rebecca Schwartz-Mette, PhD, of the University of Maine at Orono and David S. Shen-Miller, PhD, of Bastyr University, replicated a 1987 study by Kenneth Pope, PhD, and colleagues published in the American Psychologist. Schwartz-Mette and Shen-Miller asked 453 practicing psychologists the same 83 questions posed to practitioners three decades ago.

The items included clear ethical violations, such as having sex with a client or supervisee. But they also included behaviors that could reasonably be construed as ethical, such as breaking confidentiality to report child abuse; behaviors that are ambiguous or not specifically prohibited, such as lending money to a client; and even some that don't seem controversial, such as shaking hands with a client. "Interestingly, 75 percent of the items from the Pope study were rated as less ethical in our study, suggesting a more general trend toward conservativism in multiple areas," says Schwartz-Mette.

The article is here.

Thursday, June 8, 2017

Shining Light on Conflicts of Interest

Craig Klugman
The American Journal of Bioethics 
Volume 17, 2017 - Issue 6

Chimonas, DeVito and Rothman (2017) offer a descriptive target article that examines physicians' knowledge of and reaction to the Sunshine Act's Open Payments Database. This program is a federal computer repository of all payments and goods worth more than $10 made by pharmaceutical companies and device manufacturers to physicians. Created under the 2010 Affordable Care Act, the database is intended to make the relationships between physicians and the medical drug/device industry more transparent. Such transparency is often touted as a solution to financial conflicts of interest (COI). A COI occurs when a person owes fealty to more than one party. For example, physicians have fiduciary duties toward patients. At the same time, when physicians receive gifts or benefits from a pharmaceutical company, they are more likely to prescribe that company's products (Spurling et al. 2010). The gift creates a sense of moral obligation toward the company. These two interests can be (but may not be) in conflict. Such arrangements can undermine a patient's trust in his/her physician and, more broadly, the public's trust in medicine.

(cut)

The idea is that if people are told about the conflict, then they can judge for themselves whether the provider is compromised and whether they wish to receive care from this person. The database exists with this intent—that transparency alone is enough. What is a patient to do with this information? Should patients avoid physicians who have conflicts? The decision is left in the patient's hands. Back in 2014, the Pharmaceutical Research and Manufacturers of America lobbying group expressed concern that the public would not understand the context of any payments or gifts to physicians (Castellani 2014).

The article is here.

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Backchannel.com
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Wednesday, June 7, 2017

What do White House Rules Mean if They Can Be Circumvented?

Sheelah Kolhatkar
The New Yorker
Originally posted June 6, 2017

Here is an excerpt:

Each Administration establishes its own ethics rules, often by executive order, which go beyond ethics laws codified by Congress (those laws require such things as financial-disclosure forms from government employees, the divestiture of assets if they pose conflicts, and recusal from government matters if they intersect with personal business). While the rules established by law are hard and fast, officials can be granted waivers from the looser executive-order rules. The Obama Administration granted a handful of such waivers over the course of its eight years. What’s startling with the Trump White House is just how many waivers have been issued so early in Trump’s term—more than a dozen were disclosed last week, with another twenty-four expected this week, according to a report in the Wall Street Journal—as well as the Administration’s attempt to keep them secret, all while seeming to flout the laws that dictate how the whole system should work.

The ethics waivers made public last week apply to numerous officials who are now working on matters affecting the same companies and industries they represented before joining the Administration. The documents were only released after the Office of Government Ethics pressed the Trump Administration to make them public, which is how they have been handled in the past; the White House initially refused, attempting to argue that the ethics office lacked the standing to even ask for them. After a struggle, the Administration relented, but many of the waivers it released were missing critical information, such as the dates when they were issued. One waiver in particular, which appears to apply to Trump’s chief strategist, Stephen Bannon, without specifically naming him, grants Administration staff permission to communicate with news organizations where they might have formerly worked (Breitbart News, in Bannon’s case). The Bannon-oriented waiver, issued by the “Counsel to the President,” contains the line “I am issuing this memorandum retroactive to January 20, 2017.”

Walter Shaub, the head of the Office of Government Ethics, quickly responded that there is no such thing as a “retroactive” ethics waiver. Shaub told the Times, “If you need a retroactive waiver, you have violated a rule.”

The article is here.

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services' Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that they need not receive as much scrutiny as, say, a medical researcher.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Research and clinical issues in trauma and dissociation: Ethical and logical fallacies, myths, misreports, and misrepresentations

Jenny Ann Rydberg
European Journal of Trauma & Dissociation
Available online 23 April 2017

Introduction

The creation of a new journal on trauma and dissociation is an opportunity to take stock of existing models and theories in order to distinguish mythical, and sometimes dangerous, stories from established facts.

Objective

To describe the professional, scientific, clinical, and ethical strategies and fallacies that must be taken into account when considering reports, claims, and recommendations relevant to trauma and dissociation.

Method

After a general overview, two current debates in the field, the stabilisation controversy and the false/recovered memory controversy, are examined in detail to illustrate such issues.

Results

Misrepresentations, misreports, and ethical and logical fallacies are frequent in the general and scientific literature regarding the stabilisation and false/recovered memory controversies.

Conclusion

A call is made for researchers and clinicians to strengthen their knowledge of, and ability to identify, such cognitive, logical, and ethical manoeuvres both in the scientific literature and in general media reports.

The article is here.