Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, June 23, 2020

Scathing COVID-19 book from Lancet editor — rushed but useful

Stephen Buranyi
nature.com
Originally posted 18 June 2020

Here is an excerpt:

Horton levels the accusation that US President Donald Trump is committing a “crime against humanity” for defunding the very World Health Organization that is trying to help the United States and others. UK Prime Minister Boris Johnson, in Horton’s view, either lied or committed misconduct in telling the public that the government was well prepared for the pandemic. In fact, the UK government abandoned the world-standard advice to test, trace and isolate in March, with no explanation, then scrambled to ramp up testing in April, but repeatedly failed to meet its own targets, lagging weeks behind the rest of the world. A BBC investigation in April showed that the UK government had failed to stockpile necessary personal protective equipment for years before the crisis, and should have been aware that the National Health Service wasn’t adequately prepared.

Politicians are easy targets, though. Horton goes further, to suggest that although scientists in general have performed admirably, many of those advising the government directly contributed to what he calls “the greatest science policy failure for a generation”.

Again using the United Kingdom as an example, he suggests that researchers were insufficiently informed or understanding of the crisis unfolding in China, and were too insular to speak to Chinese scientists directly. The model for action at times seemed to be influenza, a drastic underestimation of the true threat of the new coronavirus. Worse, as the UK government’s response went off the rails in March, ostensibly independent scientists would “speak with one voice in support of government policy”, keeping up the facade that the country was doing well. In Horton’s view, this is a corruption of science policymaking at every level. Individuals failed in their responsibility to procure the best scientific advice, he contends; and the advisory regime was too close to — and in sync with — the political actors who were making decisions. “Advisors became the public relations wing of a government that had failed its people,” he concludes.

The text is here.

The Neuroscience of Moral Judgment: Empirical and Philosophical Developments

J. May, C. I. Workman, J. Haas, & H. Han
Forthcoming in Neuroscience and Philosophy,
eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press).

Abstract

We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of moral judgment in adulthood and to some extent in childhood and adolescence. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.

From the Conclusion

6.1 Reason vs. Emotion in Ethics

The dichotomy between reason and emotion stretches back to antiquity. But an improved understanding of the brain has, arguably more than psychological science, questioned the dichotomy (Huebner 2015; Woodward 2016). Brain areas associated with prototypical emotions, such as vmPFC and amygdala, are also necessary for complex learning and inference, even if largely automatic and unconscious. Even psychopaths, often painted as the archetype of emotionless moral monsters, have serious deficits in learning and inference. Moreover, even if our various moral judgments about trolley problems, harmless taboo violations, and the like are often automatic, they are nonetheless acquired through sophisticated learning mechanisms that are responsive to morally-relevant reasons (Railton 2017; Stanley et al. 2019). Indeed, normal moral judgment often involves gut feelings being attuned to relevant experience and made consistent with our web of moral beliefs (May & Kumar 2018).

The paper can be downloaded here.

Monday, June 22, 2020

5 Anti-Racist Practices White Scholars Can Adopt Today – #BLM Guest Post

Marius Kothor
TheProfessorIsIn.com
Originally posted 17 June 2020

We are facing a historic moment of reckoning. The violent murder of George Floyd in Minneapolis ignited a movement that has engulfed the entire country. As people demand that companies and organizations account for their complicity in systemic racism, Black scholars are shedding new light on the anti-Blackness embedded within academic institutions.

Black scholars such as Dr. Shardé M. Davis and Joy Melody Woods, for example, have started the #BlackintheIvory hashtag to bring renewed attention to the micro- and macro-level racism Black scholars experience in academia. A number of white scholars, on the other hand, are using this moment as an opportunity for hollow virtue signaling. Many have taken to social media to publicly declare that they are allies of Black people. It is unclear, however, whether these performances of “woke-ness” will translate into efforts to address the systemic racism embedded in their departments and universities. From my experiences as a graduate student, it is unlikely that they will. Yet, for white scholars who are genuinely interested in using this moment to begin the process of unlearning the racist practices common in academia, there are a few practical steps they can take.

Below is a list of 5 things I think white scholars can do to begin to address racism in their day-to-day encounters with Black scholars. 
  1. Publicly Articulate Solidarity with Black Scholars
  2. Stop Calling the Black People in Your Institution by the Wrong Name
  3. Do Not Talk to Black People as if You Know Their Realities Better Than They Do
  4. Cite Black Scholars in the Body of Your Work, Not Just in the Footnotes
  5. Don’t Try to Get Black Scholars to Validate Your Problematic Project

Ethics of Artificial Intelligence and Robotics

Müller, Vincent C.
The Stanford Encyclopedia of Philosophy
(Summer 2020 Edition)

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, see under Other Internet Resources [hereafter OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

The entry is here.

Sunday, June 21, 2020

Downloading COVID-19 contact tracing apps is a moral obligation

G. Owen Schaefer and Angela Ballantyne
BMJ Blogs
Originally posted 4 May 2020

Should you download an app that could notify you if you had been in contact with someone who contracted COVID-19? Such apps are already available in countries such as Israel, Singapore, and Australia, with other countries like the UK and US soon to follow. Here, we explain why you might have an ethical obligation to use a tracing app during the COVID-19 pandemic, even in the face of privacy concerns.

(cut)

Vulnerability and unequal distribution of risk

Marginalized populations are both hardest hit by pandemics and often have the greatest reason to be sceptical of supposedly benign State surveillance. COVID-19 is a jarring reminder of global inequality, structural racism, gender inequity, entrenched ableism, and many other social divisions. During the SARS outbreak, Toronto struggled to adequately respond to the distinctive vulnerabilities of people who were homeless. In America, people of colour are at greatest risk in several dimensions – less able to act on public health advice such as social distancing, more likely to contract the virus, and more likely to die from severe COVID if they do get infected. When public health advice switched to recommending (or in some cases requiring) masks, some African Americans argued it was unsafe for them to cover their faces in public. People of colour in the US are at increased risk of state surveillance and police violence, in part because they are perceived to be threatening and violent. In New York City, black and Latino patients are dying from COVID-19 at twice the rate of non-Hispanic white people.

Marginalized populations have historically been harmed by State health surveillance. For example, indigenous populations have been the victims of State data collection to inform and implement segregation, dispossession of land, forced migration, as well as removal and ‘re-education’ of their children. Stigma and discrimination have impeded the public health response to HIV/AIDS, as many countries still have HIV-specific laws that prosecute people living with HIV for a range of offences. Surveillance is an important tool for implementing these laws. Marginalized populations therefore have good reasons to be sceptical of health-related surveillance.
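
The privacy stakes described above depend partly on where the matching between contacts and confirmed cases happens. Purely as an illustration, the Python sketch below shows the basic idea behind decentralized exposure notification, loosely in the spirit of designs such as DP-3T and the Google/Apple exposure-notification framework; it is a hypothetical toy, not the implementation of any app named in the post. Phones exchange short-lived random tokens over Bluetooth, a diagnosed user publishes only the tokens they themselves broadcast, and matching happens entirely on each device, so no central server learns who met whom.

import secrets

def new_token() -> str:
    """Generate a short-lived random identifier to broadcast over Bluetooth."""
    return secrets.token_hex(16)

class Phone:
    """Toy model of a handset in a decentralized exposure-notification scheme."""

    def __init__(self) -> None:
        self.own_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()   # tokens overheard from nearby phones

    def broadcast(self) -> str:
        token = new_token()
        self.own_tokens.append(token)
        return token

    def hear(self, token: str) -> None:
        self.heard_tokens.add(token)

    def check_exposure(self, published_tokens: set) -> bool:
        # Local matching: did any token I overheard later appear in the
        # published list of tokens broadcast by diagnosed users?
        return bool(self.heard_tokens & published_tokens)

# Two phones come into proximity and exchange tokens.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())
alice.hear(bob.broadcast())

# Alice later tests positive and uploads only her own broadcast tokens.
published = set(alice.own_tokens)

print("Bob notified?", bob.check_exposure(published))      # True
print("Alice notified?", alice.check_exposure(published))  # False

In a centralized variant, by contrast, the encounter records themselves are uploaded to a health authority, which is much closer to the kind of State-held surveillance data the excerpt argues marginalized groups have reason to distrust.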

Saturday, June 20, 2020

Forensic mental health expert testimony and judicial decision-making: A systematic literature review

R. M. S. van Es, M. J. J. Kunst, & J. W. de Keijser
Aggression and Violent Behavior
Volume 51, March–April 2020, 101387

Abstract

Forensic mental health expertise (FMHE) is an important source of information for decision-makers in the criminal justice system. This expertise can be used in various decisions in a criminal trial, such as criminal responsibility and sentencing decisions. Despite an increasing body of empirical literature concerning FMHE, it remains largely unknown how and to what extent this expertise affects judicial decisions. The aim of this review was therefore to provide insight into the relationship between FMHE and different judicial decisions by synthesizing published, quantitative empirical studies. Based on a systematic literature search using multiple online databases and selection criteria, a total of 27 studies were included in this review. The majority of studies were experiments conducted in the US among mock jurors. Most studies focused on criminal responsibility or sentencing decisions. Studies concerning criminal responsibility found consistent results in which psychotic defendants charged with serious, violent crimes were considered not guilty by reason of insanity more often than defendants with psychopathic disorders. Results for length and type of sanctions were less consistent and were often affected by perceived behavioral control, recidivism risk and treatability. Studies on possible prejudicial effects of FMHE are almost non-existent. Evaluation of findings, limitations and implications for future research and practice are discussed.

Highlights

• 27 studies examined effects of FMHE on judicial decisions on guilt and sentencing.

• Majority of studies from US with an experimental vignette design among mock jurors.

• FMHE on psychotic disorders led to more NGRI verdicts than psychopathic disorders.

• Effect of FMHE on sentencing is affected by disorder, behavioral control, treatability or recidivism risk.

• Research on prejudicial effects is almost non-existent.

The info is here.

Friday, June 19, 2020

My Bedside Manner Got Worse During The Pandemic. Here's How I Improved

Shahdabul Faraz
npr.org
Health Shots
Originally published 16 May 2020

Here is an excerpt:

These gestures can be as simple as sitting in a veteran's room for an extra five minutes to listen to World War II stories. Or listening with a young cancer patient to a song by our shared favorite band. Or clutching a sick patient's shoulder and reassuring him that he will see his three daughters again.

These gestures acknowledge a patient's humanity. They give patients some semblance of normalcy in an otherwise difficult period in their lives. Selfishly, that human connection also helps us — the doctors, nurses and other health care providers — deal with the often frustrating nature of our stressful jobs.

Since the start of the pandemic, our bedside interactions have had to be radically different. Against our instincts, and in order to protect our patients and colleagues, we tend to spend only the necessary amount of time in our patients' rooms. And once inside, we try to keep some distance. I have stopped holding my patients' hands. I now try to minimize small talk. No more whimsical conversational detours.

Our interactions now are more direct and short. I have, more than once, felt guilty for how quickly I've left a patient's room. This guilt is worsened, knowing that patients in hospitals don't have family and friends with them now either. Doctors are supposed to be there for our patients, but it's become harder than ever in recent months.

I understand why these changes are needed. As I move through several hospital floors, I could unwittingly transmit the virus if I'm infected and don't know it. I'm relatively young and healthy, so if I get the disease, I will likely recover. But what about my patients? Some have compromised immune systems. Most are elderly and have more than one high-risk medical condition. I could never forgive myself if I gave one of my patients COVID-19.

The info is here.

Better Minds, Better Morals: A Procedural Guide to Better Judgment

Schaefer GO, Savulescu J.
J Posthum Stud. 2017;1(1):26-43.
doi:10.5325/jpoststud.1.1.0026

Abstract

Making more moral decisions - an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we're employing are sound and reliable. We identify four broad categories of judgment to be targeted - cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A pdf is here.

Thursday, June 18, 2020

Measuring Information Preferences

E. H. Ho, D. Hagmann, & G. Loewenstein
Management Science
Published Online: 13 Mar 2020

Abstract

Advances in medical testing and widespread access to the internet have made it easier than ever to obtain information. Yet, when it comes to some of the most important decisions in life, people often choose to remain ignorant for a variety of psychological and economic reasons. We design and validate an information preferences scale to measure an individual’s desire to obtain or avoid information that may be unpleasant but could improve future decisions. The scale measures information preferences in three domains that are psychologically and materially consequential: consumer finance, personal characteristics, and health. In three studies incorporating responses from over 2,300 individuals, we present tests of the scale’s reliability and validity. We show that the scale predicts a real decision to obtain (or avoid) information in each of the domains as well as decisions from out-of-sample, unrelated domains. Across settings, many respondents prefer to remain in a state of active ignorance even when information is freely available. Moreover, we find that information preferences are a stable trait but that an individual’s preference for information can differ across domains.

General Discussion

Making good decisions is often contingent on obtaining information, even when that information is uncertain and has the potential to produce unhappiness. Substantial empirical evidence suggests that people are often ready to make worse decisions in the service of avoiding potentially painful information. We propose that this tendency to avoid information is a trait that is separate from those measured previously, and developed a scale to measure it. The scale asks respondents to imagine how they would respond to a variety of hypothetical decisions involving information acquisition/avoidance. The predictive validity of the IPS appears to be largely driven by its domain items, and although it incorporates domain-specific subscales, it appears to be sufficiently universal to capture preferences for information in a broad range of domains.
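
As a purely illustrative aside, and not code or data from Ho, Hagmann, and Loewenstein, the reliability testing mentioned in the abstract is typically reported with an internal-consistency statistic such as Cronbach's alpha. The Python sketch below computes alpha for an invented set of Likert-type responses to a hypothetical three-item subscale; the respondents, items, and numbers are made up for the example.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-4 Likert responses from six respondents to three items
# of an imagined "health information" subscale.
responses = np.array([
    [4, 3, 4],
    [2, 2, 1],
    [3, 3, 4],
    [1, 2, 2],
    [4, 4, 3],
    [3, 2, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

By convention, alpha values around 0.7 or higher are usually read as acceptable internal consistency for a research scale.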

The research is here.

We already knew, to some extent, that people sometimes avoid information. This matters in psychotherapy, where avoidance promotes confirmatory hypothesis testing, which in turn enhances overconfidence. We need to help people embrace information that may be inconsistent with their worldview.