Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, May 24, 2019

Immutable morality: Even God could not change some moral facts

Madeline Reinecke & Zachary Horne
PsyArXiv
Last edited December 24, 2018

Abstract

The idea that morality depends on God is a widely held belief. This belief entails that the moral “facts” could be otherwise because, in principle, God could change them. Yet, some moral propositions seem so obviously true (e.g., the immorality of killing someone just for pleasure) that it is hard to imagine how they could be otherwise. In two experiments, we investigated people’s intuitions about the immutability of moral facts. Participants judged whether it was even possible, or possible for God, to change moral, logical, and physical facts. In both experiments, people judged that altering some moral facts was impossible—not even God could turn morally wrong acts into morally right acts. Strikingly, people thought that God could make physically impossible and logically impossible events occur. These results demonstrate the strength of people’s metaethical commitments and shed light on the nature of morality and its centrality to thinking and reasoning.

The research is here.

Holding Robots Responsible: The Elements of Machine Morality

Y. E. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract


As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is that these machines can act autonomously.

Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.

The info can be downloaded here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition in this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Wednesday, May 22, 2019

Healthcare portraiture and unconscious bias

Karthik Sivashanker, Kathryn Rexrode, and others
BMJ 2019;365:l1668
Published April 12, 2019
https://doi.org/10.1136/bmj.l1668

Here is an excerpt:

Conveying the right message

In this regard, healthcare organisations have opportunities to instil a feeling of belonging and comfort for all their employees and patients. A simple but critical step is to examine the effect that their use of all imagery, as exemplified by portraits, has on their constituents. Are these portraits sufficiently conveying a message of social justice and equity? Do they highlight the achievement (as with a picture of a petri dish), or the person (a picture of Alexander Fleming without sufficient acknowledgment of his contributions)? Further still, do these images reveal the values of the organisation or its biases?

At our institution in Boston there was no question that the leaders depicted had made meaningful contributions to our hospital and healthcare. After soliciting feedback through listening sessions, open forums, and inbox feedback from our art committee, employees, clinicians, and students, however, our institution agreed to hang these portraits in their respective departments. This decision aimed to balance a commitment to equity with an intent to honourably display these portraits, which have inspired generations of physicians and scientists to be their best. It also led our social justice and equity committee to tackle problems like unconscious bias and diversity in hiring. In doing so, we are acknowledging the close interplay of symbolism and policy making in perpetuating racial and sex inequities, and the importance of tackling both together.

The info is here.

Why Behavioral Scientists Need to Think Harder About the Future

Ed Brandon
www.behavioralscientist.org
Originally published January 17, 2019

Here is an excerpt:

It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.

As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.

The info is here.

Tuesday, May 21, 2019

Bergen County psychologist charged with repeated sexual assaults of a child

Joe Brandt
www.nj.com
Originally posted April 18, 2019

A psychologist whose business works with children was charged Wednesday with multiple sexual assaults of a child under 13 years old.

Lorenzo Puertas, 78, faces two counts of sexual assault and one count of endangering the welfare of a child, Bergen County Prosecutor Dennis Calo announced Thursday.

Puertas, of Franklin Lakes, served as executive director of Psych-Ed Services, which has offices in Franklin Lakes and in Lakewood. The health provider offers bilingual psychological services, including pre-employment psych screenings and child study team evaluations.

The info is here.

Moral Disengagement in the Corporate World

Jenny White, M.Sc., M.P.H., Albert Bandura, Ph.D., & Lisa A. Bero, Ph.D.
(2009) Accountability in Research, 16:1, 41-74
DOI: 10.1080/08989620802689847

Abstract

We analyze mechanisms of moral disengagement used to eliminate moral consequences by industries whose products or production practices are harmful to human health. Moral disengagement removes the restraint of self-censure from harmful practices. Moral self-sanctions can be selectively disengaged from harmful activities by investing them with socially worthy purposes, sanitizing and exonerating them, displacing and diffusing responsibility, minimizing or disputing harmful consequences, making advantageous comparisons, and disparaging and blaming critics and victims. Internal industry documents and public statements related to the research activities of these industries were coded for modes of moral disengagement by the tobacco, lead, vinyl chloride (VC), and silicosis-producing industries. All but one of the modes of moral disengagement were used by each of these industries. We present possible safeguards designed to protect the integrity of research.

A copy of the research is here.

Monday, May 20, 2019

How Drug Companies Helped Shape a Shifting Biological View of Mental Illness

Terry Gross
NPR Health Shots
Originally posted May 2, 2019

Here are two excerpts:

On why the antidepressant market is now at a standstill

The huge development in the story of depression and the antidepressants happens in the late '90s, when a range of different studies increasingly seemed to suggest that these antidepressants — although they're helping a lot of people — when compared to placebo versions of themselves, don't seem to do much better. And that is not because they are not helping people, but because the placebos are also helping people. Simply thinking you're taking Prozac, I guess, can have a powerful effect on your state of depression. In order, though, for a drug to get on the market, it's got to beat the placebo. If it can't beat the placebo, the drug fails.

(cut)

On why pharmaceutical companies are leaving the psychiatric field

Because there have been no new good ideas as to where to look for new, novel biomarkers or targets since the 1960s. The only possible exception is there is now some excitement about ketamine, which targets a different set of biochemical systems. But R&D is very expensive. These drugs are now, mostly, off-patent. ... [The pharmaceutical companies'] efforts to bring on new drugs in that sort of tried-and-true and tested way — with a tinker here and a tinker there — have been running up against mostly unexplained but indubitable problems with the placebo effect.

The info is here.