Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, December 4, 2019

Veterans Must Also Heal From Moral Injury After War

Camillo Mac Bica
truthout.org
Originally published Nov 11, 2019

Here are two excerpts:

Humankind has identified and internalized a set of values and norms through which we define ourselves as persons, structure our world and render our relationship to it — and to other human beings — comprehensible. These values and norms provide the parameters of our being: our moral identity. Consequently, we now have the need and the means to weigh concrete situations to determine acceptable (right) and unacceptable (wrong) behavior.

Whether an individual chooses to act rightly or wrongly, according to or in violation of her moral identity, will affect whether she perceives herself as true to her personal convictions and to others in the moral community who share her values and ideals. As the moral gravity of one’s actions and experiences on the battlefield becomes apparent, a warrior may suffer profound moral confusion and distress at having transgressed her moral foundations, her moral identity.

Guilt is, simply speaking, the awareness of having transgressed one’s moral convictions and the anxiety precipitated by a perceived breakdown of one’s ethical cohesion — one’s integrity — and an alienation from the moral community. Shame is the loss of self-esteem consequent to a failure to live up to personal and communal expectations.

(cut)

Having completed the necessary philosophical and psychological groundwork, veterans can now begin the very difficult task of confronting the experience. That is, of remembering, reassessing and morally reevaluating their responsibility and culpability for their perceived transgressions on the battlefield.

Reassessing their behavior in combat within the parameters of their increased philosophical and psychological awareness, veterans realize that the programming to which they were subjected and the experience of war as a survival situation are causally connected to those specific battlefield incidents and behaviors, theirs and/or others’, that weigh heavily on their consciences — their moral injury. As a consequence, they understand these influences as extenuating circumstances.

Finally, as they morally reevaluate their actions in war, they see these incidents and behaviors in combat not as justifiable, but as understandable, perhaps even excusable, and their culpability mitigated by the fact that those who determined policy, sent them to war, issued the orders, and allowed the war to occur and/or to continue unchallenged must share responsibility for the crimes and horror that inevitably characterize war.

The info is here.

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component to fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to or already engaging in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
venturebeat.com
Originally published Nov 11, 2019


Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has only made incremental progress, and funding for startups with Latinx or black founders still lags behind those for white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K.
Pers Soc Psychol Rev. 
2015 Nov;19(4):371-94.
doi: 10.1177/1088868314566683.

Abstract

Morality and emotions are linked, but what is the nature of their correspondence? Many "whole number" accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality-emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality-emotion links.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published Nov 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.
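To make concrete what it can look like to probe a system like BERT for bias, here is a minimal, hypothetical sketch in Python using the open-source Hugging Face transformers library. The model name and prompts are illustrative assumptions, not drawn from the article or from any company's actual audit.

    # Illustrative sketch only: probe a BERT-style masked language model for
    # skewed word associations. The model name and prompts are assumptions
    # made for demonstration, not taken from the article.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    prompts = [
        "The doctor said that [MASK] would see the patient soon.",
        "The nurse said that [MASK] would see the patient soon.",
    ]

    for prompt in prompts:
        print(prompt)
        # Compare the model's top guesses across otherwise parallel prompts;
        # systematically skewed pronoun probabilities hint at learned bias.
        for prediction in fill_mask(prompt, top_k=3):
            print(f"  {prediction['token_str']!r}: {prediction['score']:.3f}")

A real audit would cover far more prompts and demographic terms than this, which is part of why the vetting work described above takes so much time and care.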

The whole article is here.

Neuroscientific evidence in the courtroom: a review.

Aono, D., Yaffe, G., & Kober, H.
Cogn. Research 4, 40 (2019)
doi:10.1186/s41235-019-0179-y

Abstract

The use of neuroscience in the courtroom can be traced back to the early twentieth century. However, the use of neuroscientific evidence in criminal proceedings has increased significantly over the last two decades. This rapid increase has raised questions, among the media as well as the legal and scientific communities, regarding the effects that such evidence could have on legal decision makers. In this article, we first outline the history of neuroscientific evidence in courtrooms and then we provide a review of recent research investigating the effects of neuroscientific evidence on decision-making broadly, and on legal decisions specifically. In the latter case, we review studies that measure the effect of neuroscientific evidence (both imaging and nonimaging) on verdicts, sentencing recommendations, and beliefs of mock jurors and judges presented with a criminal case. Overall, the reviewed studies suggest mitigating effects of neuroscientific evidence on some legal decisions (e.g., the death penalty). Furthermore, factors such as mental disorder diagnoses and perceived dangerousness might moderate the mitigating effect of such evidence. Importantly, neuroscientific evidence that includes images of the brain does not appear to have an especially persuasive effect (compared with other neuroscientific evidence that does not include an image). Future directions for research are discussed, with a specific call for studies that vary defendant characteristics, the nature of the crime, and a juror’s perception of the defendant, in order to better understand the roles of moderating factors and cognitive mediators of persuasion.

Significance

The increased use of neuroscientific evidence in criminal proceedings has led some to wonder what effects such evidence has on legal decision makers (e.g., jurors and judges) who may be unfamiliar with neuroscience. There is some concern that legal decision makers may be unduly influenced by testimony and images related to the defendant’s brain. This paper briefly reviews the history of neuroscientific evidence in the courtroom to provide context for its current use. It then reviews the current research examining the influence of neuroscientific evidence on legal decision makers and potential moderators of such effects. Our synthesis of the findings suggests that neuroscientific evidence has some mitigating effects on legal decisions, although neuroimaging-based evidence does not hold any special persuasive power. With this in mind, we provide recommendations for future research in this area. Our review and conclusions have implications for scientists, legal scholars, judges, and jurors, who could all benefit from understanding the influence of neuroscientific evidence on judgments in criminal cases.

Sunday, December 1, 2019

Moral Reasoning and Emotion

Joshua May & Victor Kumar
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman, Routledge (2018), pp. 139-156.

Abstract:

This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.

(cut)

Conclusion

We draw two main conclusions. First, on a fair and plausible characterization of reasoning and emotion, they are both integral to moral judgment. In particular, when our moral beliefs undergo changes over long periods of time, there is ample space for both reasoning and emotion to play an iterative role. Second, it’s difficult to cleave reasoning from emotional processing. When the two affect moral judgment, especially across time, their interplay can make it artificial or fruitless to impose a division, even if a distinction can still be drawn between inference and valence in information processing.

To some degree, our conclusions militate against extreme characterizations of the rationalism-sentimentalism divide. However, the debate is best construed as a question about which psychological process is more fundamental or essential to distinctively moral cognition.  The answer still affects both theoretical and practical problems, such as how to make artificial intelligence capable of moral judgment. At the moment, the more nuanced dispute is difficult to adjudicate, but it may be addressed by further research and theorizing.

The book chapter can be downloaded here.

Saturday, November 30, 2019

Are You a Moral Grandstander?

Scott Barry Kaufman
Scientific American
Originally published October 28, 2019

Here are two excerpts:

Do you strongly agree with the following statements?

  • When I share my moral/political beliefs, I do so to show people who disagree with me that I am better than them.
  • I share my moral/political beliefs to make people who disagree with me feel bad.
  • When I share my moral/political beliefs, I do so in the hopes that people different than me will feel ashamed of their beliefs.

If so, then you may be a card-carrying moral grandstander. Of course it's wonderful to have a social cause that you believe in genuinely, and which you want to share with the world to make it a better place. But moral grandstanding comes from a different place.

(cut)

Nevertheless, since we are such a social species, the human need for social status is very pervasive, and often our attempts at sharing our moral and political beliefs on public social media platforms involve a mix of genuine motives with social status motives. As one team of psychologists put it, yes, you probably are "virtue signaling" (a closely related concept to moral grandstanding), but that doesn't mean that your outrage is necessarily inauthentic. It just means that we often have a subconscious desire to signal our virtue, which when not checked, can spiral out of control and cause us to denigrate or be mean to others in order to satisfy that desire. When the need for status predominates, we may even lose touch with what we truly believe, or even what is actually the truth.

The info is here.

Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.