Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, May 26, 2019

Brain science should be making prisons better, not trying to prove innocence

Arielle Baskin-Sommers
theconversation.com
Originally posted November 1, 2017

Here is an excerpt:

Unfortunately, when neuroscientific assessments are presented to the court, they can sway juries, regardless of their relevance. Using these techniques to produce expert evidence doesn’t bring the court any closer to truth or justice. And with a single brain scan costing thousands of dollars, plus expert interpretation and testimony, it’s an expensive tool out of reach for many defendants. Rather than helping untangle legal responsibility, neuroscience here causes an even deeper divide between the rich and the poor, based on pseudoscience.

While I remain skeptical about the use of neuroscience in the judicial process, there are a number of places where its findings could help corrections systems develop policies and practices based on evidence.

Solitary confinement harms more than helps

Take, for instance, the use within prisons of solitary confinement as a punishment for disciplinary infractions. In 2015, the Bureau of Justice Statistics reported that nearly 20 percent of federal and state prisoners and 18 percent of local jail inmates spent time in solitary.

Research consistently demonstrates that time spent in solitary increases the chances of persistent emotional trauma and distress. Solitary can lead to hallucinations, fantasies and paranoia; it can increase anxiety, depression and apathy as well as difficulties in thinking, concentrating, remembering, paying attention and controlling impulses. People placed in solitary are more likely to engage in self-mutilation as well as exhibit chronic rage, anger and irritability. The term “isolation syndrome” has even been coined to capture the severe and long-lasting effects of solitary.

The info is here.

Saturday, May 25, 2019

Lost-in-the-mall: False memory or false defense?

Ruth A. Blizard & Morgan Shaw (2019)
Journal of Child Custody
DOI: 10.1080/15379418.2019.1590285

Abstract

False Memory Syndrome (FMS) and Parental Alienation Syndrome (PAS) were developed as defenses for parents accused of child abuse as part of a larger movement to undermine prosecution of child abuse. The lost-in-the-mall study by Dr. Elizabeth Loftus concludes that an entire false memory can be implanted by suggestion. It has since been used to discredit abuse survivors’ testimony by implying that false memories for childhood abuse can be implanted by psychotherapists. Examination of the research methods and findings of the study shows that no full false memories were actually formed. Similarly, PAS, coined by Richard Gardner, is frequently used in custody cases to discredit children’s testimony by alleging that the protective parent coached them to have false memories of abuse. There is no scientific research demonstrating the existence of PAS, and, in fact, studies on the suggestibility of children show that they cannot easily be persuaded to provide detailed disclosures of abuse.

The info is here.

Friday, May 24, 2019

Immutable morality: Even God could not change some moral facts

Madeline Reinecke & Zachary Horne
PsyArXiv
Last edited December 24, 2018

Abstract

The idea that morality depends on God is a widely held belief. This belief entails that the moral “facts” could be otherwise because, in principle, God could change them. Yet, some moral propositions seem so obviously true (e.g., the immorality of killing someone just for pleasure) that it is hard to imagine how they could be otherwise. In two experiments, we investigated people’s intuitions about the immutability of moral facts. Participants judged whether it was even possible, or possible for God, to change moral, logical, and physical facts. In both experiments, people judged that altering some moral facts was impossible—not even God could turn morally wrong acts into morally right acts. Strikingly, people thought that God could make physically impossible and logically impossible events occur. These results demonstrate the strength of people’s metaethical commitments and shed light on the nature of morality and its centrality to thinking and reasoning.

The research is here.

Holding Robots Responsible: The Elements of Machine Morality

Y. E. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract

As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is that these machines can act autonomously.

Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.

The info can be downloaded here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition in this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm, or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Wednesday, May 22, 2019

Healthcare portraiture and unconscious bias

Karthik Sivashanker, Kathryn Rexrode, and others
BMJ 2019;365:l1668
Published April 12, 2019
https://doi.org/10.1136/bmj.l1668

Here is an excerpt:

Conveying the right message

In this regard, healthcare organisations have opportunities to instil a feeling of belonging and comfort for all their employees and patients. A simple but critical step is to examine the effect that their use of all imagery, as exemplified by portraits, has on their constituents. Are these portraits sufficiently conveying a message of social justice and equity? Do they highlight the achievement (as with a picture of a petri dish), or the person (a picture of Alexander Fleming without sufficient acknowledgment of his contributions)? Further still, do these images reveal the values of the organisation or its biases?

At our institution in Boston there was no question that the leaders depicted had made meaningful contributions to our hospital and healthcare. After soliciting feedback through listening sessions, open forums, and inbox feedback from our art committee, employees, clinicians, and students, however, our institution agreed to hang these portraits in their respective departments. This decision aimed to balance a commitment to equity with an intent to honourably display these portraits, which have inspired generations of physicians and scientists to be their best. It also led our social justice and equity committee to tackle problems like unconscious bias and diversity in hiring. In doing so, we are acknowledging the close interplay of symbolism and policy making in perpetuating racial and sex inequities, and the importance of tackling both together.

The info is here.

Why Behavioral Scientists Need to Think Harder About the Future

Ed Brandon
www.behavioralscientist.org
Originally published January 17, 2019

Here is an excerpt:

It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.

As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.

The info is here.

Tuesday, May 21, 2019

Bergen County psychologist charged with repeated sexual assaults of a child

Joe Brandt
www.nj.com
Originally posted April 18, 2019

A psychologist whose business works with children was charged Wednesday with multiple sexual assaults of a child under 13 years old.

Lorenzo Puertas, 78, faces two counts of sexual assault and one count of endangering the welfare of a child, Bergen County Prosecutor Dennis Calo announced Thursday.

Puertas, of Franklin Lakes, served as executive director of Psych-Ed Services, which has offices in Franklin Lakes and in Lakewood. The health provider offers bilingual psychological services, including pre-employment psych screenings and child study team evaluations.

The info is here.