Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 21, 2019

An ethicist weighs in on our moral failure to act on climate change

Monique Deveaux
The Conversation
Originally published September 26, 2019

Here is an excerpt:

This call to collective moral and political responsibility is exactly right. As individuals, we can all be held accountable for helping to stop the undeniable environmental harms around us and the catastrophic threat posed by rising levels of CO2 and other greenhouse gases. Those of us with a degree of privilege and influence have an even greater responsibility to assist and advocate on behalf of those most vulnerable to the effects of global warming.

This group includes children everywhere whose futures are uncertain at best, terrifying at worst. It also includes those who are already suffering from severe weather events and rising water levels caused by global warming, and communities dispossessed by fossil fuel extraction. Indigenous peoples around the globe whose lands and water systems are being confiscated and polluted in the search for ever more sources of oil, gas and coal are owed our support and assistance. So are marginalized communities displaced by mountaintop removal and destructive dam energy projects, climate refugees and many others.

The message of climate activists is that we can't fulfill our responsibilities simply by making green choices as consumers or expressing support for their cause. The late American political philosopher Iris Young thought that we could only discharge our "political responsibility for injustice," as she put it, through collective political action.

The interests of the powerful, she warned, conflict with the political responsibility to take actions that challenge the status quo—but which are necessary to reverse injustices.

As the striking schoolchildren and older climate activists everywhere have repeatedly pointed out, political leaders have so far failed to enact the carbon emissions reduction policies that are so desperately needed. Despite UN Secretary-General António Guterres' sombre words of warning at the Climate Action Summit, the UN is largely powerless in the face of governments, such as those of China and the U.S., that refuse to enact meaningful carbon-reducing policies.

The info is here.

Moral Judgment as Categorization

Cillian McHugh and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people, etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill-formation account of moral categorization, the learning and habitualization of moral categories occur as part of goal-directed activity and are sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity between categorization principles and the processes of moral judgment. Using a categorization framework, we provide an overview of moral category formation as the basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).

The research is here.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., and others (2015)
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals a diversity of approaches that (indirectly) demonstrates the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Saturday, October 19, 2019

Forensic Clinicians’ Understanding of Bias

Tess Neal, Nina MacLean, Robert D. Morgan,
and Daniel C. Murrie
Psychology, Public Policy, and Law, 
September 16, 2019, No Pagination Specified

Abstract

Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly selected licensed psychologists with forensic interests to examine (a) their familiarity with and understanding of cognitive biases, (b) their self-reported strategies to mitigate bias, and (c) the relation of (a) and (b) to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.

Here is the conclusion:

These findings (along with those of Neal & Brodsky, 2016) suggest that forensic clinicians need additional training not only to recognize biases but also to begin to effectively mitigate harm from them. For example, predoctoral (e.g., internship) and postdoctoral (e.g., fellowship) didactic training could address what bias is, how to recognize it, and strategies for minimizing it. Additionally, supervisors could make identifying and reducing bias a regular part of supervision (e.g., by including it in case conceptualization). However, further research is needed to determine the types of training and workflow strategies that best reduce bias. Future studies should focus on experimentally examining the presence of biases and ways to mitigate their effects in forensic evaluations.

The research is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

The Koch-backed right-to-try law has been a bust, but still threatens our health

Michael Hiltzik
The Los Angeles Times
Originally posted September 17, 2019

The federal right-to-try law, signed by President Trump in May 2018 as a sop to right-wing interests, including the Koch brothers network, always was a cruel sham perpetrated on sufferers of intractably fatal diseases.

As we’ve reported, the law was promoted as a compassionate path to experimental treatments for those patients — but in fact was a cynical ploy aimed at emasculating the Food and Drug Administration in a way that would undermine public health and harm all patients.

Now that a year has passed since the law’s enactment, the assessments of how it has functioned are beginning to flow in. As NYU bioethicist Arthur Caplan observed to Ed Silverman’s Pharmalot blog, “the right to try remains a bust.”

His judgment is seconded by the veteran pseudoscience debunker David Gorski, who writes: “Right-to-try has been a spectacular failure thus far at getting terminally ill patients access to experimental drugs.”

That should come as no surprise, Gorski adds, because “right-to-try was never about helping terminally ill patients. ... It was always about ideology more than anything else. It was always about weakening the FDA’s ability to regulate drug approval.”

The info is here.

Thursday, October 17, 2019

AI ethics and the limits of code(s)

Geoff Mulgan
nesta.org.uk
Originally published September 16, 2019

Here is an excerpt:

1. Ethics involve context and interpretation - not just deduction from codes.

Too much writing about AI ethics uses a misleading model of what ethics means in practice. It assumes that ethics can be distilled into principles from which conclusions can then be deduced, like a code. The last few years have brought a glut of lists of principles (including some produced by colleagues at Nesta). Various overviews have been attempted in recent years. A recent AI Ethics Guidelines Global Inventory collects over 80 different ethical frameworks. There’s nothing wrong with any of them and all are perfectly sensible and reasonable. But this isn’t how most ethical reasoning happens. The lists assume that ethics is largely deductive, when in fact it is interpretive and context specific, as is wisdom. One basic reason is that the principles often point in opposite directions - for example, autonomy, justice and transparency. Indeed, this is also the lesson of medical ethics over many decades. Intense conversation about specific examples, working through difficult ambiguities and contradictions, counts for a lot more than generic principles.

The info is here.

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and the approach to data in the past has consistently been to “grab it” and figure out a way to use it and monetize it later. Today’s consumers are becoming more educated and savvy about how companies collect, use, and monetize their data; they are starting to make buying decisions based on privacy considerations and to complain to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

Although consumers’ education is slowly deepening, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators, and lawmakers are requiring them to do so. But tech companies should consider making data ethics a fundamental core value of the company’s mission and should determine how it will be addressed in their corporate governance structure.

The info is here.

Wednesday, October 16, 2019

Birmingham psychologist defrauded state Medicaid of more than $1.5 million, authorities say

Carol Robinson
al.com
Originally published August 15, 2019

A Birmingham psychologist has been charged with defrauding the Alabama Medicaid Agency of more than $1 million by filing false claims for counseling services that were not provided.

Sharon D. Waltz, 50, has agreed to plead guilty to the charge and pay restitution in the amount of $1.5 million, according to a joint announcement Thursday by Northern District of Alabama U.S. Attorney Jay Town, Department of Health and Human Services - Office of Inspector General Special Agent Derrick L. Jackson, and Alabama Attorney General Steve Marshall.

“The greed of this defendant deprived mental health care to many at-risk young people in Alabama, with the focus on profit rather than the efficacy of care,” Town said. “The costs are not just monetary but have social and health impacts on the entire Northern District. This prosecution, and this investigation, demonstrates what is possible when federal and state law enforcement agencies work together.”

The info is here.