Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, June 9, 2017

Are practitioners becoming more ethical?

By Rebecca Clay
The Monitor on Psychology
May 2017, Vol 48, No. 5
Print version: page 50

The results of research presented at APA's 2016 Annual Convention suggest that today's practitioners are less likely to commit such ethical violations as kissing a client, altering diagnoses to meet insurance criteria and treating homosexuality as pathological than their counterparts 30 years ago.

The research, conducted by psychologists Rebecca Schwartz-Mette, PhD, of the University of Maine at Orono and David S. Shen-Miller, PhD, of Bastyr University, replicated a 1987 study by Kenneth Pope, PhD, and colleagues published in the American Psychologist. Schwartz-Mette and Shen-Miller asked 453 practicing psychologists the same 83 questions posed to practitioners three decades ago.

The items included clear ethical violations, such as having sex with a client or supervisee. But they also included behaviors that could reasonably be construed as ethical, such as breaking confidentiality to report child abuse; behaviors that are ambiguous or not specifically prohibited, such as lending money to a client; and even some that don't seem controversial, such as shaking hands with a client. "Interestingly, 75 percent of the items from the Pope study were rated as less ethical in our study, suggesting a more general trend toward conservatism in multiple areas," says Schwartz-Mette.

The article is here.

Thursday, June 8, 2017

Shining Light on Conflicts of Interest

Craig Klugman
The American Journal of Bioethics 
Volume 17, 2017 - Issue 6

Chimonas, DeVito and Rothman (2017) offer a descriptive target article that examines physicians' knowledge of and reaction to the Sunshine Act's Open Payments Database. This program is a federal repository of all payments and goods worth over $10 made by pharmaceutical companies and device manufacturers to physicians. Created under the 2010 Affordable Care Act, the database is meant to make the relationships between physicians and the medical drug/device industry more transparent. Such transparency is often touted as a solution to financial conflicts of interest (COI). A COI occurs when a person owes fealty to more than one party. For example, physicians have fiduciary duties toward patients. At the same time, when physicians receive gifts or benefits from a pharmaceutical company, they are more likely to prescribe that company's products (Spurling et al. 2010). The gift creates a sense of a moral obligation toward the company. These two interests can be (but are not always) in conflict. Such arrangements can undermine a patient's trust in his/her physician and, more broadly, the public's trust in medicine.

(cut)

The idea is that if people are told about the conflict, then they can judge for themselves whether the provider is compromised and whether they wish to receive care from this person. The database exists with this intent—that transparency alone is enough. What is a patient to do with this information? Should patients avoid physicians who have conflicts? The decision is left in the patient's hands. Back in 2014, the Pharmaceutical Research and Manufacturers of America lobbying group expressed concern that the public would not understand the context of any payments or gifts to physicians (Castellani 2014).
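For readers who want to examine the data themselves, CMS publishes the Open Payments records as downloadable CSV files. Below is a minimal sketch, in Python with pandas, of tallying one physician's industry payments by company; the file name, column names, and the physician's name are illustrative assumptions and should be checked against the current Open Payments data dictionary.

```python
# Minimal sketch: tally industry payments to one physician from a
# downloaded Open Payments general-payments CSV. The file name and
# column names are assumptions; verify against the current release.
import pandas as pd

CSV_PATH = "OP_DTL_GNRL_PGYR2016.csv"  # hypothetical local download

cols = [
    "Physician_First_Name",
    "Physician_Last_Name",
    "Applicable_Manufacturer_or_Applicable_GPO_Making_Payment_Name",
    "Total_Amount_of_Payment_USDollars",
]
df = pd.read_csv(CSV_PATH, usecols=cols, low_memory=False)

# Select one (hypothetical) physician and sum payments by company.
mask = (df["Physician_First_Name"] == "JANE") & (
    df["Physician_Last_Name"] == "DOE"
)
summary = (
    df[mask]
    .groupby("Applicable_Manufacturer_or_Applicable_GPO_Making_Payment_Name")[
        "Total_Amount_of_Payment_USDollars"
    ]
    .sum()
    .sort_values(ascending=False)
)
print(summary)
```

Whether a given total signals a compromising conflict is exactly the contextual judgment the lobbying group worried patients could not make on their own.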

The article is here.

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Backchannel.com
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Wednesday, June 7, 2017

What do White House Rules Mean if They Can Be Circumvented?

Sheelah Kolhatkar
The New Yorker
Originally posted June 6, 2017

Here is an excerpt:

Each Administration establishes its own ethics rules, often by executive order, which go beyond ethics laws codified by Congress (those laws require such things as financial-disclosure forms from government employees, the divestiture of assets if they pose conflicts, and recusal from government matters if they intersect with personal business). While the rules established by law are hard and fast, officials can be granted waivers from the looser executive-order rules. The Obama Administration granted a handful of such waivers over the course of its eight years. What’s startling with the Trump White House is just how many waivers have been issued so early in Trump’s term—more than a dozen were disclosed last week, with another twenty-four expected this week, according to a report in the Wall Street Journal—as well as the Administration’s attempt to keep them secret, all while seeming to flout the laws that dictate how the whole system should work.

The ethics waivers made public last week apply to numerous officials who are now working on matters affecting the same companies and industries they represented before joining the Administration. The documents were only released after the Office of Government Ethics pressed the Trump Administration to make them public, which is how they have been handled in the past; the White House initially refused, attempting to argue that the ethics office lacked the standing to even ask for them. After a struggle, the Administration relented, but many of the waivers it released were missing critical information, such as the dates when they were issued. One waiver in particular, which appears to apply to Trump’s chief strategist, Stephen Bannon, without specifically naming him, grants Administration staff permission to communicate with news organizations where they might have formerly worked (Breitbart News, in Bannon’s case). The Bannon-oriented waiver, issued by the “Counsel to the President,” contains the line “I am issuing this memorandum retroactive to January 20, 2017.”

Walter Shaub, the head of the Office of Government Ethics, quickly responded that there is no such thing as a “retroactive” ethics waiver. Shaub told the Times, “If you need a retroactive waiver, you have violated a rule.”

The article is here.

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that they should not be subject to as much scrutiny as, say, a medical researcher.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Research and clinical issues in trauma and dissociation: Ethical and logical fallacies, myths, misreports, and misrepresentations

Jenny Ann Rydberg
European Journal of Trauma & Dissociation
Available online 23 April 2017

Introduction

The creation of a new journal on trauma and dissociation is an opportunity to take stock of existing models and theories in order to distinguish mythical, and sometimes dangerous, stories from established facts.

Objective

To describe the professional, scientific, clinical, and ethical strategies and fallacies that must be envisaged when considering reports, claims, and recommendations relevant to trauma and dissociation.

Method

After a general overview, two current debates in the field, the stabilisation controversy and the false/recovered memory controversy, are examined in detail to illustrate such issues.

Results

Misrepresentations, misreports, and ethical and logical fallacies are frequent in the general and scientific literature regarding the stabilisation and false/recovered memory controversies.

Conclusion

A call is made for researchers and clinicians to strengthen their knowledge of and ability to identify such cognitive, logical, and ethical manoeuvres both in scientific literature and general media reports.

The article is here.

Monday, June 5, 2017

AI May Hold the Key to Stopping Suicide

Bahar Gholipour
NBC News
Originally posted May 23, 2017

Here is an excerpt:

So far the results are promising. Using AI, Ribeiro and her colleagues were able to predict whether someone would attempt suicide within the next two years with about 80 percent accuracy, and within the next week with 92 percent accuracy. Their findings were recently reported in the journal Clinical Psychological Science.

This high level of accuracy was possible because of machine learning, as researchers trained an algorithm by feeding it anonymous health records from 3,200 people who had attempted suicide. The algorithm learns patterns through examining combinations of factors that lead to suicide, from medication use to the number of ER visits over many years. Bizarre factors may pop up as related to suicide, such as acetaminophen use a year prior to an attempt, but that doesn't mean taking acetaminophen can be isolated as a risk factor for suicide.
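The pipeline described above is, in outline, standard supervised learning on tabular health-record features. Here is a minimal sketch assuming scikit-learn; the random-forest model, the synthetic data, and the feature count are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of a suicide-risk classifier of the kind described
# above. The data here are synthetic stand-ins for anonymized health
# records; the model choice is an assumption, not the study's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Rows = patients; columns = coded factors (medication use, ER visit
# counts over prior years, and so on).
n_patients, n_factors = 3200, 50
X = rng.normal(size=(n_patients, n_factors))
y = rng.integers(0, 2, size=n_patients)  # 1 = attempt within the window

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# The model scores risk from combinations of factors; no single feature
# (e.g., acetaminophen use) can be read off as a standalone risk factor,
# which is the "brush stroke" point Ribeiro makes below.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

On real records, the reported figures would come from predictions at different time horizons (two years, one week), not from a single synthetic split like this one.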

"As humans, we want to understand what to look for," Ribeiro says. "But this is like asking what's the most important brush stroke in a painting."

With funding from the Department of Defense, Ribeiro aims to create a tool that can be used in clinics and emergency rooms to better find and help high-risk individuals.

The article is here.

Can Psychologists Tell Us Anything About Morality?

John M. Doris, Edouard Machery and Stephen Stich
Philosopher's Magazine
Originally published May 10, 2017

Here is an excerpt:

Some psychologists accept morally dubious employment. Some psychologists cheat. Some psychology experiments don't replicate. Some. But the inference from some to all is at best invalid, and at worst, invective. There's good psychology and bad psychology, just like there's good and bad everything else, and tarring the entire discipline with the broadest of brushes won’t help us sort that out. It is no more illuminating to disregard the work of psychologists en masse on the grounds that a tiny minority of the American Psychological Association, a very large and diverse professional association, were involved with the Bush administration’s program of torture than it would be to disregard the writings of all Nietzsche scholars because some Nazis were Nietzsche enthusiasts! To be sure, there are serious questions about which intellectual disciplines, and which intellectuals, are accorded cultural capital, and why. But we are unlikely to find serious answers by means of innuendo and polemic.

Could there be more substantive reasons to exclude scientific psychology from the study of ethics? The most serious – if ultimately unsuccessful – objection proceeds in the language of “normativity”. For philosophers, normative statements are prescriptive, or “oughty”: in contrast to descriptive statements, which aspire only to say how the world is, normative statements say what ought to be done about it. And, some have argued, never the twain shall meet.

While philosophers haven’t enjoyed enviable success in adducing lawlike generalisations, one such achievement is Hume’s Law (we told you the issues are old ones), which prohibits deriving normative statements from descriptive statements. As the slogan goes, “is doesn’t imply ought.”

Many philosophers, ourselves included, suppose that Hume is on to something. There probably exists some sort of “inferential barrier” between the is and the ought, such that there are no strict logical entailments from the descriptive to the normative.

The article is here.