Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, August 31, 2015

The What and Why of Self-Deception

Zoë Chance and Michael I. Norton
Current Opinion in Psychology
Available online 3 August 2015

Scholars from many disciplines have investigated self-deception, but both defining self-deception and establishing its possible benefits have been a matter of heated debate – a debate impoverished by a relative lack of empirical research. Drawing on recent research, we first classify three distinct definitions of self-deception, ranging from a view that self-deception is synonymous with positive illusions to a more stringent view that self-deception requires the presence of simultaneous conflicting beliefs. We then review recent research on the possible benefits of self-deception, identifying three adaptive functions: deceiving others, social status, and psychological benefits. We suggest potential directions for future research.

The nature and definition of self-deception remains open to debate. Philosophers have questioned whether – and how – self-deception is possible; evolutionary theorists have conjectured that self-deception may – or must – be adaptive. Until recently, there was little evidence for either the existence or processes of self-deception; indeed, Robert Trivers wrote that research on self-deception is still in its infancy. In recent years, however, empirical research on self-deception has been gaining traction in social psychology and economics, providing much-needed evidence and shedding light on the psychology of self-deception. We first classify competing definitions of self-deception, then review recent research supporting three distinct advantages of self-deception: improved success in deceiving others, social status, and psychological benefits.

The entire article is here.

Note to Psychologists: Psychologists engage in self-deception in psychotherapy. Psychologists typically judge psychotherapy sessions as having been more beneficial than their patients do. Such self-deception may lead to missteps and errors in judgment, both clinical and ethical.

The Moral Code

By Nayef Al-Rodhan
Foreign Affairs
Originally published August 12, 2015

Here is an excerpt:

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will be deployed in more complex situations that require spontaneous choices. The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

The entire article is here.

Sunday, August 30, 2015

Inside the Monkey Lab: The Ethics of Testing on Animals

By Miriam Wells
Vice News
July 7, 2015

"Of course it's pitiful for the monkeys. Everyone feels the same — you see it and you don't want it. But the point is if you want something different then you have to make something different. It doesn't happen overnight."

Speaking to VICE News, Jeffrey Bajramovic, a scientist from the Biomedical Primate Research Centre (BPRC) in Holland, was refreshingly honest. What happens to the monkeys tested inside the center — a not-for-profit laboratory that is the largest facility of its kind in Europe, housing around 1,500 primates — is horrible. Those sent for experimentation suffer pain and distress, sometimes severe, in studies that can last for months, before ending their lives on an autopsy table.

But the tests they undertake contribute to the understanding and development of vaccines and treatments for some of the world's most deadly and prevalent diseases. And in a grim paradox, as Bajramovic pointed out, the captive primates are also contributing to the development of alternative research methods that scientists can use so that ultimately, they don't have to test on animals at all.

It's a messy and emotional ethical dilemma that VICE News came face to face with when we gained rare access to the BPRC to see just what happens inside.

The entire article is here.

WARNING: There is a graphic and disturbing (to me) video embedded within the article.

Saturday, August 29, 2015

Weird Minds Might Destabilize Human Ethics

By Eric Schwitzgebel
The Splintered Mind Blog
Originally published August 13, 2015

Here is an excerpt:

For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?

The entire blog post is here.

Friday, August 28, 2015

Ethical Blind Spots: Explaining Unintentional Unethical Behavior

Sezer, O., F. Gino, and M. H. Bazerman. "Ethical Blind Spots: Explaining Unintentional Unethical Behavior." Current Opinion in Psychology (forthcoming).

Abstract

People view themselves as more ethical, fair, and objective than others, yet often act against their moral compass. This paper reviews recent research on unintentional unethical behavior and provides an overview of the conditions under which ethical blind spots lead good people to cross ethical boundaries. First, we present the psychological processes that cause individuals to behave unethically without their own awareness. Next, we examine the conditions that lead people to fail to accurately assess others' unethical behavior. We argue that future research needs to move beyond a descriptive framework and focus on finding empirically testable strategies to mitigate unethical behavior.

The article can be found here.


Deconstructing intent to reconstruct morality

Fiery Cushman
Current Opinion in Psychology
Volume 6, December 2015, Pages 97–103

Highlights

• Mental state inference is a foundational element of moral judgment.
• Its influence is usually captured by contrasting intentional and accidental harm.
• The folk theory of intentional action comprises many distinct elements.
• Moral judgment shows nuanced sensitivity to these constituent elements.
• Future research will profit from attention to the constituents of intentional action.

Mental state representations are a crucial input to human moral judgment. This fact is often summarized by saying that we restrict moral condemnation to ‘intentional’ harms. This simple description is the beginning of a theory, however, not the end of one. There is rich internal structure to the folk concept of intentional action, which comprises a series of causal relations between mental states, actions and states of affairs in the world. Moral judgment shows nuanced patterns of sensitivity to all three of these elements: mental states (like beliefs and desires), the actions that a person performs, and the consequences of those actions. Deconstructing intentional action into its elemental fragments will enable future theories to reconstruct our understanding of moral judgment.

The entire article is here.

Thursday, August 27, 2015

Steven Pinker is right about biotech and wrong about bioethics

Bill Gardner
The Incidental Economist
Originally published August 7, 2015

Here is an excerpt:

First, even by newspaper op-ed standards this is lazily argued. Pinker attributes a host of opinions to bioethicists without quoting any bioethicist. He does not cite any cases to document that bioethicists’ concerns about long term consequences have impeded research and caused harms. There likely are such cases, but he writes as if they are common. I served for years on the University of Pittsburgh IRB. For better or worse, the long term risks of biomedical research were never even discussed.

Worse, Pinker brackets “dignity” and “social justice”* in sneer quotes, as if it were self-evident that affronts to these values do not fall into the class of “identifiable harms” and as if these concerns can be dismissed without any actual argument. The only normative framework that has weight, by his lights, is the mortality and morbidity of disease. Of course mortality and morbidity are exceptionally important. But if that is the only framework that matters to Pinker he is in a very small minority.

The entire critique is here.

The Psychology of Whistleblowing

James Dungan, Adam Waytz, Liane Young
Current Opinion in Psychology
doi:10.1016/j.copsyc.2015.07.005

Abstract

Whistleblowing—reporting another person's unethical behavior to a third party—represents an ethical quandary. In some cases whistleblowing appears heroic whereas in other cases it appears reprehensible. This article describes how the decision to blow the whistle rests on the tradeoff that people make between fairness and loyalty. When fairness increases in value, whistleblowing is more likely whereas when loyalty increases in value, whistleblowing is less likely. Furthermore, we describe systematic personal, situational, and cultural factors stemming from the fairness-loyalty tradeoff that drive whistleblowing. Finally, we describe how minimizing this tradeoff and prioritizing constructive dissent can encourage whistleblowing and strengthen collectives.

The entire article is here.

Wednesday, August 26, 2015

Dreading My Patient

By Simon Yisreal Feuerman
The New York Times - Opinionator
Originally published August 25, 2015

I didn’t want him to show up.

He was a bright, handsome and winning patient. His first three sessions had been perfectly ordinary. And yet a few minutes before his fourth session, I found myself ardently wishing for him not to come.

This feeling was puzzling. It had overtaken me suddenly.

My patient was in his late 20s and had decided to enter therapy, as he explained in his first session, because he did not have enough confidence. He talked about not being able to think for himself and make his own decisions, not being able to hold his own at work or find his way when he was around women. He found that he stammered a lot and said the “wrong” things.

The entire article is here.