Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to share and promote leading practices on issues ranging from building inclusive environments, to managing scientific publications and data, to combating harassment, to providing example codes of conduct. AGU aims to transform the culture of scientific institutions so that we can achieve inclusive excellence.

The info is here.

The Role of Emotion Regulation in Moral Judgment

Helion, C. & Ochsner, K.N.
Neuroethics (2018) 11: 297.
https://doi.org/10.1007/s12152-016-9261-z

Abstract

Moral judgment has typically been characterized as a conflict between emotion and reason. In recent years, a central concern has been determining which process is the chief contributor to moral behavior. While classic moral theorists claimed that moral evaluations stem from consciously controlled cognitive processes, recent research indicates that affective processes may be driving moral behavior. Here, we propose a new way of thinking about emotion within the context of moral judgment, one in which affect is generated and transformed by both automatic and controlled processes, and moral evaluations are shifted accordingly. We begin with a review of how existing theories in psychology and neuroscience address the interaction between emotion and cognition, and how these theories may inform the study of moral judgment. We then describe how brain regions involved in both affective processing and moral judgment overlap and may make distinct contributions to the moral evaluation process. Finally, we discuss how this way of thinking about emotion can be reconciled with current theories in moral psychology before mapping out future directions in the study of moral behavior.

Here is an excerpt:

Individuals may up- or down-regulate their automatic emotional responses to moral stimuli in a way that encourages goal-consistent behavior. For example, individuals may down-regulate their disgust when evaluating dilemmas in which disgusting acts occurred but no one was harmed, or they may up-regulate anger when engaging in punishment or assigning blame. To observe this effect in the wild, one need go no further than the modern political arena. Someone who is politically liberal may be as disgusted by the thought of two men kissing as someone who is politically conservative, but may choose to down-regulate their response so that it is more in line with their political views [44]. They can do this in multiple ways, including reframing the situation as one about equality and fairness, construing the act as one of love and affection, or manipulating personal relevance by thinking about homosexual individuals whom the person knows. This affective transformation would rely on controlled emotional processes that shape the initial automatically elicited emotion (disgust) into a very different emotion (tolerance or acceptance). This process requires motivation, recognition (conscious or non-conscious) that one is experiencing an emotion that is in conflict with one’s goals and ideals, and a reconstruction of the situation and one’s emotions in order to come to a moral resolution. Comparatively, political conservatives may be less motivated to do so, and may instead up-regulate their disgust response so that their moral judgment is in line with their overarching goals. In contrast, the opposite regulatory pattern may occur (such that liberals up-regulate emotion and conservatives down-regulate emotion) when considering issues like the death penalty or gun control.

Monday, February 25, 2019

A philosopher’s life

Margaret Nagle
UMaineToday
Fall/Winter 2018

Here is an excerpt:

Mention philosophy, and for most people images of the bearded philosophers of Ancient Greece pontificating in the marketplace come to mind. Today, philosophers are still in public arenas, Miller says, but now that engagement with society is in K–12 education, medicine, government, corporations, environmental issues and so much more. Public philosophers are students of community knowledge, learning as much as they teach.

The field of clinical ethics, which helps patients, families and clinicians address ethical issues that arise in health care, emerged in recent decades as medical decisions became more complex in an increasingly technological society. Those questions can range from when to stop aggressive medical intervention to whether expressed breast milk from a patient who uses medical marijuana should be given to her baby in the neonatal intensive care unit.

As a clinical ethicist, Miller provides training and consultation for physicians, nurses and other medical personnel. She also may be called on to consult with patients and their family members. Unlike urban areas where a city hospital may have a whole department devoted to clinical ethics, rural health care settings often struggle to find such philosophy-focused resources.

That’s why Miller does what she does in Maine.

Miller focuses on “building clinical ethics capacity” in the state’s rural health care settings, providing training, connecting hospital personnel to readings and resources, and facilitating opportunities to maintain ongoing exploration of critical issues.

The article is here.

Information Processing Biases in the Brain: Implications for Decision-Making and Self-Governance

Sali, A.W., Anderson, B.A. & Courtney, S.M.
Neuroethics (2018) 11: 259.
https://doi.org/10.1007/s12152-016-9251-1

Abstract

To make behavioral choices that are in line with our goals and our moral beliefs, we need to gather and consider information about our current situation. Most information present in our environment is not relevant to the choices we need or would want to make and thus could interfere with our ability to behave in ways that reflect our underlying values. Certain sources of information could even lead us to make choices we later regret, and thus it would be beneficial to be able to ignore that information. Our ability to exert successful self-governance depends on our ability to attend to sources of information that we deem important to our decision-making processes. We generally assume that, at any moment, we have the ability to choose what we pay attention to. However, recent research indicates that what we pay attention to is influenced by our prior experiences, including reward history and past successes and failures, even when we are not aware of this history. Even momentary distractions can cause us to miss or discount information that should have a greater influence on our decisions given our values. Such biases in attention thus raise questions about the degree to which the choices that we make may be poorly informed and not truly reflect our ability to otherwise exert self-governance.

Here is part of the Conclusion:

In order to consistently make decisions that reflect our goals and values, we need to gather the information necessary to guide these decisions, and ignore information that is irrelevant. Although the momentary acquisition of irrelevant information will not likely change our goals, biases in attentional selection may still profoundly influence behavioral outcomes, tipping the balance between competing options when faced with a single goal (e.g., save the least competent swimmer) or between simultaneously competing goals (e.g., relieve drug craving and withdrawal symptoms vs. maintain abstinence). An important component of self-governance might, therefore, be the ability to exert control over how we represent our world as we consider different potential courses of action.

Sunday, February 24, 2019

Biased algorithms: here’s a more radical approach to creating fairness

Tom Douglas
theconversation.com
Originally posted January 21, 2019

Here is an excerpt:

What’s fair?

AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair – fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?

There is usually a tension between procedural fairness and accuracy – attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm’s overall error rate. Take the COMPAS algorithm, for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tended to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend or not to re-offend.
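To make the idea of "equalising false positive rates" concrete, here is a minimal illustrative sketch. It is not the COMPAS model or its data; the records, group labels, and numbers are invented purely to show how group-wise false positive rates and overall accuracy are computed and why the two can pull in different directions.

```python
# Hypothetical sketch: group-wise false positive rates vs. overall accuracy
# for a binary risk classifier. Data are invented for illustration only.
from collections import defaultdict

# Each record: (group, actual_reoffended, predicted_reoffend); 1 = yes, 0 = no
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]

def false_positive_rate(rows):
    """Share of people who did not re-offend but were predicted to."""
    negatives = [r for r in rows if r[1] == 0]
    if not negatives:
        return float("nan")
    return sum(1 for r in negatives if r[2] == 1) / len(negatives)

def accuracy(rows):
    """Share of predictions that match the actual outcome."""
    return sum(1 for r in rows if r[1] == r[2]) / len(rows)

by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

for group, rows in sorted(by_group.items()):
    print(f"Group {group}: false positive rate = {false_positive_rate(rows):.2f}")
print(f"Overall accuracy = {accuracy(records):.2f}")
```

In this toy data the two groups end up with different false positive rates; forcing them to be equal (for instance by changing the classifier's thresholds or dropping predictive features) would generally change, and often reduce, the overall accuracy figure, which is the trade-off the excerpt describes.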

We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have. For example, instead of trying to ensure that crime prediction errors affect different racial groups equally – a goal that may in any case be unattainable – we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.

The info is here.

Saturday, February 23, 2019

The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017

Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen
Personality and Social Psychology Review, 1–35

Abstract

We review empirical research on (social) psychology of morality to identify which issues and relations are well documented by existing data and which areas of inquiry are in need of further empirical evidence. An electronic literature search yielded a total of 1,278 relevant research articles published from 1940 through 2017. These were subjected to expert content analysis and standardized bibliometric analysis to classify research questions and relate these to (trends in) empirical approaches that characterize research on morality. We categorize the research questions addressed in this literature into five different themes and consider how empirical approaches within each of these themes have addressed psychological antecedents and implications of moral behavior. We conclude that some key features of theoretical questions relating to human morality are not systematically captured in empirical research and are in need of further investigation.

Here is a portion of the article:

In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.

The review article can be downloaded here.

Friday, February 22, 2019

Choices

Christy Shake
Calvin's Story Blog
Originally published February 13, 2019

Here is an excerpt:

If Michael and I had known early on of Calvin's malformed brain, and had we known the dreadful extent to which it might impact his well-being and quality of life, his development, cognition, coordination, communication, vision, ability to move about and function independently, and his increased odds of having unstoppable seizures, or of being abused by caregivers, would we have chosen to terminate my pregnancy? I really can't say. But one thing I do know with certainty: it is torturous to see Calvin suffer on a daily basis, to see him seize repeatedly, sometimes for several consecutive days, bite his cheek so bad it bleeds, see terror in his eyes and malaise on his face, be a veritable guinea pig for neurologists and me, endure the miseries of antiepileptic drugs and their heinous side effects, to see him hurt so needlessly.

Especially during rough stints, it's hard not to imagine how life might have been—perhaps easier, calmer, happier, less restricted, less anxious, less heartbreaking—if Calvin had never come into this world. One moment I lament his existence and the next I wonder what I would do without him. And though Calvin brings me immense joy at times, and though he is as precious to me as any mother's child could be, our lives have been profoundly strained by his existence. All three of us suffer, but none more than our sweet Calvin. Life with him, worrying about and watching him endure his maladies—despite, or perhaps owing to, the fact I love him immeasurably—is such a painful and burdensome endeavor that at times I regret ever deciding to have a child.

The blog post is here.

Facebook Backs University AI Ethics Institute With $7.5 Million

Sam Shead
Forbes.com
Originally posted January 20, 2019

Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said.

AI is poised to have a profound impact on areas like climate change and healthcare, but it has its risks.

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph Lütge, who will lead the institute.

"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."

The info is here.

Thursday, February 21, 2019

Federal ethics agency refuses to certify financial disclosure from Commerce Secretary Wilbur Ross

Jeff Daniels
CNBC.com
Originally published February 19, 2019

The government's top ethics watchdog disclosed Tuesday that it had refused to certify a financial disclosure report from Commerce Secretary Wilbur Ross.

In a filing, the Office of Government Ethics said it wouldn't certify the 2018 annual filing by Ross because he didn't divest stock in a bank despite stating otherwise. The move could have legal ramifications for Ross and add to pressure for a federal probe.

"The report is not certified," OGE Director Emory Rounds said in a filing, explaining that a previous document the watchdog received from Ross indicated he "no longer held BankUnited stock." However, Rounds said an Oct. 31 document "demonstrates that he did" still hold the shares and as a result, "the filer was therefore not in compliance with his ethics agreement at the time of the report."

A federal ethics agreement required that Ross divest stock worth between $1,000 and $15,000 in BankUnited by the end of May 2017, or within 90 days of the Senate confirming him to the Commerce post. He previously reported selling the stock twice, first in May 2017 and again in August 2018 as part of an annual disclosure required by OGE.

The info is here.