"Living a fully ethical life involves doing the most good we can. - Peter Singer
"Common sense is not so common." - Voltaire

Sunday, February 26, 2017

The Disunity of Morality

Walter Sinnott-Armstrong
In Moral Brains: The Neuroscience of Morality

Here is an excerpt:

What Is the Issue?

The question is basically whether morality is like memory. Once upon a time, philosophers and psychologists believed that memory was monolithic. Now memory is understood as a group of distinct phenomena that need to be studied separately (Tulving 2000). Memory includes not only semantic or declarative memory, such as remembering that a bat is a mammal, but also episodic memory, such as remembering seeing a bat yesterday. Memory can also be long-term or short-term (or working), and procedural memory covers remembering how to do things, such as how to ride a bike.

Thus, there are many kinds of memory, and they are not unified by any common and distinctive feature. They are not even all about the past, since you can also remember timeless truths, such as that pi is 3.14159 …, and you can also remember that you have a meeting tomorrow, even if you do not remember setting up the meeting or even who set it up. These kinds of memory differ not only in their psychological profiles and functions but also in their neural basis, as shown both by fMRI and by patients, such as H. M., whose brain lesions left him with severely impaired episodic memory but largely intact procedural and semantic memory. Such findings led most experts to accept that memory is not unified.

This recognition enabled progress. Neuroscientists could never find a neural basis for memory as such while they lumped together all kinds of memory. Psychologists could never formulate reliable generalizations about memory as long as they failed to distinguish kinds of memories. And philosophers could never settle how memory is justified if they conflated remembering facts and remembering how to ride a bicycle. Although these problems remain hard, progress became easier after recognizing that memory is not a single natural kind.

My thesis is that morality is like memory. Neither of them is unified, and admitting disunity makes progress possible in both areas. Moral neuroscience, psychology, and philosophy will become much more precise and productive if they give up the assumption that moral judgments all share a distinctive essence.

The book chapter is here.

Why We Love Moral Rigidity

Matthew Hutson
Scientific American
Originally published on November 1, 2016

Here is an excerpt:

We don't evaluate others based on their philosophical ideologies per se, Pizarro says. Rather we look at how others' moral decisions “express the kind of motives, commitments and emotions we want people to have.” Coolheaded calculation has its benefits, but we want our friends to at least flinch before personally harming others. Indeed, people in the study who had argued for pushing the man were trusted more when they claimed that the decision was difficult.

Politicians and executives should pay heed. Leading requires making hard trade-offs—is a war or a cut in employee benefits worth the pain it inflicts? According to Pizarro, “you want your leader to genuinely have or at least be really good at displaying the right kinds of emotions when they're talking about that decision, to show that they didn't arrive at it callously.” Calmly weighing costs and benefits may do the most good for the most people, but it can also be a good way to lose friends.

The article is here.

Saturday, February 25, 2017

Sorry is Never Enough: The Effect of State Apology Laws on Medical Malpractice Liability Risk

Benjamin J. McMichael, R. Lawrence Van Horn, & W. Kip Viscusi

Abstract:
 
State apology laws offer a separate avenue from traditional damages-centric tort reforms to promote communication between physicians and patients and to address potential medical malpractice liability. These laws facilitate apologies from physicians by excluding statements of apology from malpractice trials. Using a unique dataset that includes all malpractice claims for 90% of physicians practicing in a single specialty across the country, this study examines whether apology laws limit malpractice risk. For physicians who do not regularly perform surgery, apology laws increase the probability of facing a lawsuit and increase the average payment made to resolve a claim. For surgeons, apology laws do not have a substantial effect on the probability of facing a claim or the average payment made to resolve a claim. Overall, the evidence suggests that apology laws do not effectively limit medical malpractice liability risk.
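
The abstract describes a natural-experiment design: some states adopt apology laws and others do not, so the laws' effect can be estimated by comparing claim outcomes across states and years. Below is a minimal sketch of how such a design is commonly analyzed, not the authors' actual specification; the variable names, the simulated adoption pattern, and all data are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Build a synthetic physician-year panel (all values hypothetical).
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "state": rng.integers(0, 10, n),       # 10 fictional states
        "year": rng.integers(2005, 2015, n),
        "non_surgeon": rng.integers(0, 2, n),  # 1 = does not regularly perform surgery
    })
    # Suppose (hypothetically) that states 0-4 adopt an apology law in 2010.
    df["apology_law"] = ((df["state"] < 5) & (df["year"] >= 2010)).astype(int)
    # Simulate the paper's headline pattern: higher claim risk for non-surgeons under the law.
    logit_p = -2.0 + 0.3 * df["apology_law"] * df["non_surgeon"]
    df["faced_claim"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

    # Two-way fixed effects logit: state and year dummies absorb stable
    # geographic differences and nationwide trends in litigation.
    model = smf.logit("faced_claim ~ apology_law * non_surgeon + C(state) + C(year)",
                      data=df).fit(disp=False)
    print(model.params[["apology_law", "apology_law:non_surgeon"]])

The coefficient on the interaction term is the quantity of interest here: it captures whether apology laws shift claim risk differently for non-surgeons than for surgeons, which is the contrast the abstract emphasizes.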

The article is here.

Friday, February 24, 2017

Make business ethics a cumulative science

Jonathan Haidt & Linda Trevino
Nature Human Behaviour


Business ethics research is not currently a cumulative science, but it must become one. The benefits to humanity from research that helps firms improve their ethics could be enormous, especially if that research also shows that strong ethics improves the effectiveness of companies.

Imagine a world in which medical researchers did experiments on rats, but never on people. Furthermore, suppose that doctors ignored the rat literature entirely. Instead, they talked to each other and swapped tips, based on their own clinical experience. In such a world medicine would not be the cumulative science that we know today.

That fanciful clinical world is the world of business ethics research. University researchers do experiments, mostly on students who come into the lab for pay or course credit. Experiments are run carefully, social and cognitive processes are elucidated, and articles get published in academic journals. But business leaders do not read these journals, and rarely even read about the studies second-hand. Instead, when they think and talk about ethics, they rely on their own experience, and the experience of their friends. CEOs share their insights on ethical leadership. Ethics and compliance officers meet at conferences to swap ‘best practices’ that haven't been research-tested. There are fads, but there is no clear progress.

The article is here.

Why Are Conservatives More Punitive Than Liberals? A Moral Foundations Approach.

Jasmine R. Silver and Eric Silver
Law and Human Behavior, February 2, 2017

Morality is thought to underlie both ideological and punitive attitudes. In particular, moral foundations research suggests that group-oriented moral concerns promote a conservative orientation, while individual-oriented moral concerns promote a liberal orientation (Graham, Haidt, & Nosek, 2009). Drawing on classical sociological theory, we argue that endorsement of group-oriented moral concerns also elicits higher levels of punitiveness by promoting a view of crime as being perpetrated against society, while endorsement of individual-oriented moral concerns reduces punitiveness by directing attention toward the welfare of offenders as well as victims. Data from 2 independent samples (N = 1,464 and N = 1,025) showed that endorsement of group-oriented moral concerns was associated with more punitive and more conservative attitudes, while endorsement of individual-oriented moral concerns was associated with less punitive and less conservative attitudes. These results suggest that the association between conservatism and punitiveness is in part spurious because of their grounding in the moral foundations. Consequently, studies that do not take the moral foundations into account are at risk of overstating the relationship between conservatism and punitiveness.

The abstract is here.

Thursday, February 23, 2017

Equipoise in Research: Integrating Ethics and Science in Human Research

Alex John London
JAMA. 2017;317(5):525-526. doi:10.1001/jama.2017.0016

The principle of equipoise states that, when there is uncertainty or conflicting expert opinion about the relative merits of diagnostic, prevention, or treatment options, allocating interventions to individuals in a manner that allows the generation of new knowledge (eg, randomization) is ethically permissible. The principle of equipoise reconciles 2 potentially conflicting ethical imperatives: to ensure that research involving human participants generates scientifically sound and clinically relevant information while demonstrating proper respect and concern for the rights and interests of study participants.
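
The operational core of the principle is random allocation: when genuine expert uncertainty exists, assigning participants to arms by chance is both ethically permissible and scientifically informative. Here is a toy sketch of 1:1 randomization, purely for illustration; real trials add stratification, blocking, and audited allocation concealment.

    import random

    def randomize(participants: list[str],
                  arms: tuple[str, str] = ("standard", "experimental")) -> dict[str, str]:
        """Shuffle participants, then alternate arms for a balanced 1:1 allocation."""
        shuffled = participants[:]
        random.shuffle(shuffled)
        return {p: arms[i % len(arms)] for i, p in enumerate(shuffled)}

    print(randomize(["P01", "P02", "P03", "P04"]))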

The article is here.

How To Spot A Fake Science News Story

Alex Berezow
American Council on Science and Health
Originally published January 31, 2017

Here is an excerpt:

How to Detect a Fake Science News Story

Often, I have been asked, "How can you tell if a science story isn't legitimate?" Here are some red flags:

1) The article is very similar to the press release on which it was based. (This suggests the piece is public relations rather than science journalism.)

2) The article makes no attempt to explain methodology or avoids using any technical terminology. (This indicates the author may be incapable of understanding the original paper.)

3) The article does not indicate any limitations on the conclusions of the research. (For example, a study conducted entirely in mice cannot be used to draw firm conclusions about humans.)

4) The article treats established scientific facts and fringe ideas on equal terms.

5) The article is sensationalized; i.e., it draws huge, sweeping conclusions from a single study. (This is particularly common in stories on scary chemicals and miracle vegetables.)

6) The article fails to separate scientific evidence from science policy. Reasonable people should be able to agree on the former while debating the latter. (This arises from the fact that people subscribe to different values and priorities.)
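
Since the checklist amounts to a set of yes/no tests, it can be expressed as a trivial scoring function. The sketch below is an editorial illustration, not a tool from the article; the flag names paraphrase the six items above.

    RED_FLAGS = (
        "mirrors the press release",
        "no methodology or technical terms",
        "no stated limitations",
        "treats fringe ideas as equal to established facts",
        "sweeping conclusions from a single study",
        "conflates scientific evidence with science policy",
    )

    def assess(flags_present: set[str]) -> str:
        """Count how many red flags an article raises and summarize the risk."""
        hits = [f for f in RED_FLAGS if f in flags_present]
        if not hits:
            return "No obvious red flags."
        return f"{len(hits)} red flag(s): " + "; ".join(hits)

    print(assess({"mirrors the press release", "no stated limitations"}))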

The article is here.

Wednesday, February 22, 2017

It's time for some messy, democratic discussions about the future of AI

Jack Stilgoe and Andrew Maynard
The Guardian
Originally posted February 1, 2017

Here is an excerpt:

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

But avoiding awkward public conversations helps nobody. Scientists are more inclined to guess at what the public are worried about than to ask them, which can lead to some serious blind spots – not necessarily in scientific understanding (although this too can occur), but in the direction and nature of research and development.

The article is here.

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Ståhl T, Zaal MP, Skitka LJ (2016)
PLoS ONE 11(11): e0166332. doi:10.1371/journal.pone.0166332

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, toward people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) these individual differences do not reduce to the personal importance attached to rationality, and (3) individual differences in moralized rationality have important motivational and interpersonal consequences.
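
The abstract's claim that the MRS is "internally consistent" is conventionally checked with Cronbach's alpha. Here is a minimal sketch of that computation on simulated Likert responses; the respondent count, item count, and response process are made up for illustration and are not taken from the paper.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Simulate 200 respondents answering 9 hypothetical 7-point Likert items
    # that all load on one latent trait, so alpha should come out high.
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))
    responses = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(200, 9))), 1, 7)
    print(f"alpha = {cronbach_alpha(responses):.2f}")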

The article is here.