Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In the case considered, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it’s important for physicians such as Dr O to understand the technical challenges of accessing the quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data available to Watson are inaccurate, the resulting diagnosis and treatment recommendations could be flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns of which not even human experts might be aware, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.
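
To make the data-quality point concrete, here is a deliberately simple sketch of the kind of automated consistency check the excerpt describes. This is not Watson’s actual method (which is proprietary); the record fields, plausibility bounds, and values below are hypothetical:

```python
# A deliberately simple sketch of automated consistency checking on a
# patient record. Not Watson's method; the fields, bounds, and values
# here are hypothetical.
from datetime import date

record = {
    "date_of_birth": date(1985, 4, 2),
    "white_cell_count": 4120.0,   # in 10^9/L; plausible entry error (41.2 mistyped)
    "last_visit": date(2019, 2, 1),
}

# Hypothetical plausibility bounds a screening rule might use.
PLAUSIBLE_WBC_RANGE = (0.1, 500.0)

def flag_inconsistencies(rec):
    """Return human-readable flags for values that fail simple sanity checks."""
    flags = []
    low, high = PLAUSIBLE_WBC_RANGE
    if not (low <= rec["white_cell_count"] <= high):
        flags.append("white cell count outside plausible range; possible input error")
    if rec["last_visit"] < rec["date_of_birth"]:
        flags.append("visit date precedes date of birth")
    return flags

for flag in flag_inconsistencies(record):
    print("FLAG:", flag)
```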

‘Three Identical Strangers’: The high cost of experimentation without ethics

Barron H. Lerner
The Washington Post
Originally published January 27, 2019

Here is an excerpt:

Injunctions against unethical research go back at least to the mid-19th century, when the French scientist Claude Bernard admonished his fellow investigators never to do an experiment that might harm a single person, even if the result would be highly advantageous to science and the health of others. Yet despite Bernard’s admonition, the next century was replete with experiments that put orphans, prisoners, minorities and other vulnerable populations at risk for the sake of scientific discovery. Medical progress often came at too high a human cost, something the CNN documentary exposes.

Human experimentation surged during World War II as American scientists raced to find treatments for diseases encountered on the battlefield. This experimental enthusiasm continued into the Cold War years, as the United States competed with the Soviet Union for scientific knowledge. In both eras, a utilitarian mind-set trumped concerns about research subjects.

That the experiments continued after the war was especially ironic given the response to the atrocities committed by Nazi physicians in concentration camps. There, doctors performed horrific experiments designed to help German soldiers who faced extreme conditions on the battlefield. This research included deliberately freezing inmates, forcing them to ingest only seawater and amputating their limbs.

The info is here.

Wednesday, February 27, 2019

Business Ethics And Integrity: It Starts With The Tone At The Top

Betsy Atkins
Forbes.com
Originally posted in 2019

Here is the conclusion:

Transparency leads to empowerment:

Share your successes and your failures and look to everyone to help build a better company. By including everyone, you create the elusive “we” that is the essence of company culture. Transparency leads to a company culture that delivers results because the CEO creates a bigger purpose for the organization than just making money or reaching quarterly numbers. Company culture guru Joel Kurtzman, author of Common Purpose, said it best: “CEOs need to know how to read their organizations’ emotional tone and need to engage in behaviors that build trust, including leading by listening, building bridges, showing compassion and caring, demonstrating their own commitment to the organization, and giving employees the authority to do their job while inspiring them to do their best work.”

There is no substitute for CEO leadership in creating a company culture of integrity.  A board that supports the CEO in building a company culture of integrity, transparency, and collaboration will be supporting a successful company.

The info is here.

How People Judge What Is Reasonable

Kevin P. Tobia
Alabama Law Review, Vol. 70, 293-359 (2018)

Abstract

A classic debate concerns whether reasonableness should be understood statistically (e.g., reasonableness is what is common) or prescriptively (e.g., reasonableness is what is good). This Article elaborates and defends a third possibility. Reasonableness is a partly statistical and partly prescriptive “hybrid,” reflecting both statistical and prescriptive considerations. Experiments reveal that people apply reasonableness as a hybrid concept, and the Article argues that a hybrid account offers the best general theory of reasonableness.

First, the Article investigates how ordinary people judge what is reasonable. Reasonableness sits at the core of countless legal standards, yet little work has investigated how ordinary people (i.e., potential jurors) actually make reasonableness judgments. Experiments reveal that judgments of reasonableness are systematically intermediate between judgments of the relevant average and ideal across numerous legal domains. For example, participants’ mean judgment of the legally reasonable number of weeks’ delay before a criminal trial (ten weeks) falls between the judged average (seventeen weeks) and ideal (seven weeks). So too for the reasonable number of days to accept a contract offer, the reasonable rate of attorneys’ fees, the reasonable loan interest rate, and the reasonable annual number of loud events on a football field in a residential neighborhood. Judgment of reasonableness is better predicted by both statistical and prescriptive factors than by either factor alone.

This Article uses this experimental discovery to develop a normative view of reasonableness. It elaborates an account of reasonableness as a hybrid standard, arguing that this view offers the best general theory of reasonableness, one that applies correctly across multiple legal domains. Moreover, this hybrid feature is the historical essence of legal reasonableness: the original use of the “reasonable person” and the “man on the Clapham omnibus” aimed to reflect both statistical and prescriptive considerations. Empirically, reasonableness is a hybrid judgment. And normatively, reasonableness should be applied as a hybrid standard.
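
As a purely illustrative sketch of the hybrid account, the reported trial-delay judgments are consistent with a weighted blend of the judged average and the judged ideal. The blend form and the 0.3 weight are assumptions chosen to match the figures above, not a model proposed in the Article:

```python
# Illustrative only: reasonableness as a weighted blend of the judged
# average (statistical component) and judged ideal (prescriptive component).
def hybrid_reasonableness(average: float, ideal: float, weight_on_average: float) -> float:
    """Blend a statistical component (the average) with a prescriptive
    component (the ideal)."""
    return weight_on_average * average + (1 - weight_on_average) * ideal

# Judged average delay: 17 weeks; judged ideal: 7 weeks.
# A 0.3 weight on the average recovers the reported judgment of 10 weeks:
# 0.3 * 17 + 0.7 * 7 = 5.1 + 4.9 = 10.0
print(hybrid_reasonableness(17, 7, 0.3))  # 10.0
```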

The paper is here.

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to help share and promote leading practices on issues ranging from building inclusive environments, to scientific publications and data management, to combating harassment, to example codes of conduct. AGU plans to transform our culture in scientific institutions so we can achieve inclusive excellence.

The info is here.

The Role of Emotion Regulation in Moral Judgment

Helion, C. & Ochsner, K.N.
Neuroethics (2018) 11: 297.
https://doi.org/10.1007/s12152-016-9261-z

Abstract

Moral judgment has typically been characterized as a conflict between emotion and reason. In recent years, a central concern has been determining which process is the chief contributor to moral behavior. While classic moral theorists claimed that moral evaluations stem from consciously controlled cognitive processes, recent research indicates that affective processes may be driving moral behavior. Here, we propose a new way of thinking about emotion within the context of moral judgment, one in which affect is generated and transformed by both automatic and controlled processes, and moral evaluations are shifted accordingly. We begin with a review of how existing theories in psychology and neuroscience address the interaction between emotion and cognition, and how these theories may inform the study of moral judgment. We then describe how brain regions involved in both affective processing and moral judgment overlap and may make distinct contributions to the moral evaluation process. Finally, we discuss how this way of thinking about emotion can be reconciled with current theories in moral psychology before mapping out future directions in the study of moral behavior.

Here is an excerpt:

Individuals may up- or down- regulate their automatic emotional responses to moral stimuli in a way that encourages goal-consistent behavior. For example, individuals may down-regulate their disgust when evaluating dilemmas in which disgusting acts occurred but no one was harmed, or they may up-regulate anger when engaging in punishment or assigning blame. To observe this effect in the wild, one need go no further than the modern political arena. Someone who is politically liberal may be as disgusted by the thought of two men kissing as someone who is politically conservative, but may choose to down-regulate their response so that it is more in line with their political views [44]. They can do this in multiple ways, including reframing the situation as one about equality and fairness, construing the act as one of love and affection, or manipulating personal relevance by thinking about homosexual individuals whom the person knows. This affective transformation would rely on controlled emotional processes that shape the initial automatically elicited emotion (disgust) into a very different emotion (tolerance or acceptance). This process requires motivation, recognition (conscious or non-conscious) that one is experiencing an emotion that is in conflict with one’s goals and ideals, and a reconstruction of the situation and one’s emotions in order to come to a moral resolution. Comparatively, political conservatives may be less motivated to do so, and may instead up-regulate their disgust response so that their moral judgment is in line with their overarching goals. In contrast, the opposite regulatory pattern may occur (such that liberals up-regulate emotion and conservatives down-regulate emotion) when considering issues like the death penalty or gun control.

Monday, February 25, 2019

A philosopher’s life

Margaret Nagle
UMaineToday
Fall/Winter 2018

Here is an excerpt:

Mention philosophy and, for most people, images of the bearded philosophers of Ancient Greece pontificating in the marketplace come to mind. Today, philosophers are still in public arenas, Miller says, but now that engagement with society is in K–12 education, medicine, government, corporations, environmental issues and so much more. Public philosophers are students of community knowledge, learning as much as they teach.

The field of clinical ethics, which helps patients, families and clinicians address ethical issues that arise in health care, emerged in recent decades as medical decisions became more complex in an increasingly technological society. Those questions can range from when to stop aggressive medical intervention to whether expressed breast milk from a patient who uses medical marijuana should be given to her baby in the neonatal intensive care unit.

As a clinical ethicist, Miller provides training and consultation for physicians, nurses and other medical personnel. She also may be called on to consult with patients and their family members. Unlike urban areas where a city hospital may have a whole department devoted to clinical ethics, rural health care settings often struggle to find such philosophy-focused resources.

That’s why Miller does what she does in Maine.

Miller focuses on “building clinical ethics capacity” in the state’s rural health care settings, providing training, connecting hospital personnel to readings and resources, and facilitating opportunities to maintain ongoing exploration of critical issues.

The article is here.

Information Processing Biases in the Brain: Implications for Decision-Making and Self-Governance

Sali, A.W., Anderson, B.A. & Courtney, S.M.
Neuroethics (2018) 11: 259.
https://doi.org/10.1007/s12152-016-9251-1

Abstract

To make behavioral choices that are in line with our goals and our moral beliefs, we need to gather and consider information about our current situation. Most information present in our environment is not relevant to the choices we need or would want to make and thus could interfere with our ability to behave in ways that reflect our underlying values. Certain sources of information could even lead us to make choices we later regret, and thus it would be beneficial to be able to ignore that information. Our ability to exert successful self-governance depends on our ability to attend to sources of information that we deem important to our decision-making processes. We generally assume that, at any moment, we have the ability to choose what we pay attention to. However, recent research indicates that what we pay attention to is influenced by our prior experiences, including reward history and past successes and failures, even when we are not aware of this history. Even momentary distractions can cause us to miss or discount information that should have a greater influence on our decisions given our values. Such biases in attention thus raise questions about the degree to which the choices that we make may be poorly informed and not truly reflect our ability to otherwise exert self-governance.

Here is part of the Conclusion:

In order to consistently make decisions that reflect our goals and values, we need to gather the information necessary to guide these decisions, and ignore information that is irrelevant. Although the momentary acquisition of irrelevant information will not likely change our goals, biases in attentional selection may still profoundly influence behavioral outcomes, tipping the balance between competing options when faced with a single goal (e.g., save the least competent swimmer) or between simultaneously competing goals (e.g., relieve drug craving and withdrawal symptoms vs. maintain abstinence). An important component of self-governance might, therefore, be the ability to exert control over how we represent our world as we consider different potential courses of action.

Sunday, February 24, 2019

Biased algorithms: here’s a more radical approach to creating fairness

Tom Douglas
theconversation.com
Originally posted January 21, 2019

Here is an excerpt:

What’s fair?

AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair – fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?

There is usually a tension between procedural fairness and accuracy – attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm’s overall error rate. Take the COMPAS algorithm, for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tended to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend or wrongly predicted not to re-offend.
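
To make the tension concrete, here is a toy simulation. It is entirely synthetic (COMPAS is proprietary; the scores, outcomes, and groups are invented), and it equalises false positive rates via per-group thresholds rather than by dropping predictors as the excerpt describes, but the accuracy cost it shows is the same in kind:

```python
# Entirely synthetic illustration of the procedural-fairness/accuracy
# tradeoff. COMPAS is proprietary; these scores, outcomes, and group
# labels are invented for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)            # two hypothetical groups
score = rng.beta(2 + group, 3, size=n)   # group 1 skews toward higher risk scores
y = (rng.random(n) < score).astype(int)  # 1 = re-offends (simulated)

def fpr(scores, labels, threshold):
    """False positive rate: share of non-reoffenders flagged as high risk."""
    negatives = labels == 0
    return float((scores[negatives] > threshold).mean())

# Baseline: a single accuracy-oriented threshold for everyone.
t_uniform = 0.5
target = fpr(score[group == 0], y[group == 0], t_uniform)

# Procedural-fairness variant: shift group 1's threshold until its false
# positive rate matches group 0's.
t_group1 = min(np.linspace(0, 1, 1001),
               key=lambda t: abs(fpr(score[group == 1], y[group == 1], t) - target))

def accuracy(t0, t1):
    thresholds = np.where(group == 1, t1, t0)
    return float(((score > thresholds).astype(int) == y).mean())

print(f"uniform threshold: accuracy = {accuracy(t_uniform, t_uniform):.3f}")
print(f"equalised FPR:     accuracy = {accuracy(t_uniform, t_group1):.3f}")
# On data like this, matching false positive rates typically lowers overall
# accuracy -- the tradeoff the excerpt describes.
```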

We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have. For example, instead of trying to ensure that crime prediction errors affect different racial groups equally – a goal that may in any case be unattainable – we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.

The info is here.