Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, June 21, 2018

Wells Fargo's ethics hotline calls are on the rise

Matt Egan
CNN.com
Originally posted June 19, 2018

A top Wells Fargo (WFC) executive said on Tuesday that employees are increasingly using the bank's confidential hotline to report bad behavior.

"Our volumes increased on our ethics line. We're glad they did. People raised their hand," said Theresa LaPlaca, who leads a conduct office that Wells Fargo created last year.

"That is success for me," LaPlaca said at the ACFE Global Fraud Conference in Las Vegas.

Persuading Wells Fargo workers to trust the bank's ethics hotline is no easy task. Nearly half a dozen workers told CNNMoney in 2016 that they were fired by Wells Fargo after calling the hotline to try to stop the bank's fake-account problem.

Last year, Wells Fargo was ordered to re-hire and pay $5.4 million to a whistleblower who was fired after calling the ethics hotline to report suspected fraud. Wells Fargo faces multiple lawsuits from employees who say they protested sales misconduct. The bank said in a filing that it also faces state law whistleblower actions filed with the Labor Department alleging retaliation.

The information is here.

Social Media as a Weapon to Harass Women Academics

George Veletsianos and Jaigris Hodson
Inside Higher Ed
Originally published May 29, 2018

Here is an excerpt:

Before beginning our inquiry, we assumed that the people who responded to our interview requests would be women who studied video games or gender issues, as prior literature had suggested they would be more likely to face harassment. But we quickly discovered that women are harassed when writing about a wide range of topics, including but not limited to: feminism, leadership, science, education, history, religion, race, politics, immigration, art, sociology and technology broadly conceived. The literature even identifies choice of research method as a topic that attracts misogynistic commentary.

So who exactly is at risk of harassment? They form a long list: women scholars who challenge the status quo; women who have an opinion that they are willing to express publicly; women who raise concerns about power; women of all body types and shapes. Put succinctly, people may be targeted for a range of reasons, but women in particular are harassed partly because they happen to be women who dare to be public online. Our respondents reported that they are harassed because they are women. Because they are women, they become targets.

At this point, if you are a woman reading this, you might be nodding your head, or you might feel frustrated that we are pointing out something so incredibly obvious. We might as well point out that rain is wet. But unfortunately, for many people who have not experienced the reality of being a woman online, this fact is still not obvious, is minimized, or is otherwise overlooked. To be clear, there is a gendered element to how both higher education institutions and technology companies handle this issue.

The article is here.

Wednesday, June 20, 2018

Can a machine be ethical? Why teaching AI ethics is a minefield.

Scotty Hendricks
Bigthink.com
Originally published May 31, 2018

Here is an excerpt:

Dr. Moor gives the example of Isaac Asimov’s Three Laws of Robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The rules are hierarchical, and the robots in Asimov’s books are all obligated to follow them.

Dr. Moor suggests that the problems with these rules are obvious. The first rule is so general that an artificial intelligence following it “might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings” and therefore be useless for its original function!

Such problems can be common in deontological systems, where following good rules can lead to funny results. Asimov himself wrote several stories about potential problems with the laws. Attempts to solve this issue abound, but the challenge of making enough rules to cover all possibilities remains. 
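
To make the rule-hierarchy idea concrete, here is a minimal sketch (not from the article; the action descriptions and field names are invented for illustration) of how a machine might rank candidate actions under strictly ordered rules, and why an over-general First Law can swallow everything else:

  def violates_first_law(action):
      # First Law: may not injure a human or, through inaction, allow a human to come to harm.
      return action["harms_human"] or action["allows_harm_through_inaction"]

  def violates_second_law(action):
      # Second Law: must obey human orders (conflicts with the First Law are
      # resolved by the ordering below).
      return not action["obeys_orders"]

  def violates_third_law(action):
      # Third Law: must protect its own existence.
      return action["destroys_self"]

  def violation_profile(action):
      # Tuple comparison encodes the strict hierarchy: a First Law violation
      # outweighs any number of lower-law violations, and so on down the list.
      return (violates_first_law(action),
              violates_second_law(action),
              violates_third_law(action))

  def choose(actions):
      # Pick the action whose violation profile is lexicographically smallest.
      return min(actions, key=violation_profile)

  # The over-generality problem from the excerpt: sticking to the assigned task
  # "allows harm through inaction" somewhere, so the hierarchy prefers abandoning
  # the task to roam the world preventing harm.
  candidates = [
      {"name": "perform assigned task", "harms_human": False,
       "allows_harm_through_inaction": True, "obeys_orders": True, "destroys_self": False},
      {"name": "roam the world preventing harm", "harms_human": False,
       "allows_harm_through_inaction": False, "obeys_orders": False, "destroys_self": False},
  ]

  print(choose(candidates)["name"])  # -> "roam the world preventing harm"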

On the other hand, a machine could be programmed to stick to utilitarian calculus when facing an ethical problem. This would be simple to do, as the computer would only have to be given a variable and told to make choices that maximize it. While human happiness is a common choice, wealth, well-being, or security are also possibilities.
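
As a rough illustration of that "maximize one variable" decision rule (again not from the article; the options and numbers are invented), the whole procedure reduces to scoring each option by the expected value of the target variable and taking the maximum:

  def expected_utility(option):
      # Sum of probability-weighted amounts of the target variable (e.g., happiness).
      return sum(p * amount for p, amount in option["outcomes"])

  options = [
      {"name": "option A", "outcomes": [(0.9, 10), (0.1, -40)]},  # usually good, small chance of a very bad outcome
      {"name": "option B", "outcomes": [(1.0, 4)]},               # modest but certain benefit
  ]

  best = max(options, key=expected_utility)
  print(best["name"], expected_utility(best))  # -> option A 5.0

Note that in a setup like this, all of the ethical weight sits in the choice of target variable and in how outcomes are scored; the maximization step itself is trivial.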

The article is here.

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Tuesday, June 19, 2018

British Public Fears the Day When "Computer Says No"

Jasper Hamill
The Metro
Originally published May 31, 2018

Governments and tech companies risk a popular backlash against artificial intelligence (AI) unless they open up about how it will be used, according to a new report.

A poll conducted for the Royal Society of Arts (RSA) revealed widespread concern that AI will create a ‘Computer Says No’ culture, in which crucial decisions are made automatically without consideration of individual circumstances.

If the public feels ‘victimised or disempowered’ by intelligent machines, they may resist the introduction of new technologies, even if doing so holds back progress that could benefit them, the report warned.

Among those taking part in a survey conducted by pollsters YouGov for the RSA, fear of inflexible and unfeeling automatic decision-making was a greater concern than robots taking humans’ jobs.

The information is here.

Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas

Dries H. Bostyn, Sybren Sevenhant, and Arne Roets
Psychological Science 
First Published May 9, 2018

Abstract

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

The research is here.

Monday, June 18, 2018

Sam Harris and the Myth of Perfectly Rational Thought

Robert Wright
www.wired.com
Originally posted March 17, 2018

Here are several excerpts:

This is attribution error working as designed. It sustains your conviction that, though your team may do bad things, it’s only the other team that’s actually bad. Your badness is “situational,” theirs is “dispositional.”

(cut)

Another cognitive bias—probably the most famous—is confirmation bias, the tendency to embrace, perhaps uncritically, evidence that supports your side of an argument and to either not notice, reject, or forget evidence that undermines it. This bias can assume various forms, and one was exhibited by Harris in his exchange with Ezra Klein over political scientist Charles Murray’s controversial views on race and IQ.

(cut)

Most of these examples of tribal thinking are pretty pedestrian—the kinds of biases we all exhibit, usually with less than catastrophic results. Still, it is these and other such pedestrian distortions of thought and perception that drive America’s political polarization today.

For example: How different is what Harris said about Buzzfeed from Donald Trump talking about “fake news CNN”? It’s certainly different in degree. But is it different in kind? I would submit that it’s not.

When a society is healthy, it is saved from all this by robust communication. Individual people still embrace or reject evidence too hastily, still apportion blame tribally, but civil contact with people of different perspectives can keep the resulting distortions within bounds. There is enough constructive cross-tribal communication—and enough agreement on what the credible sources of information are—to preserve some overlap of, and some fruitful interaction between, world views.

The article is here.

Groundhog Day for Medical Artificial Intelligence

Alex John London
The Hastings Center Report
Originally published May 26, 2018

Abstract

Following a boom in investment and overinflated expectations in the 1980s, artificial intelligence entered a period of retrenchment known as the “AI winter.” With advances in the field of machine learning and the availability of large datasets for training various types of artificial neural networks, AI is in another cycle of halcyon days. Although medicine is particularly recalcitrant to change, applications of AI in health care have professionals in fields like radiology worried about the future of their careers and have the public tittering about the prospect of soulless machines making life‐and‐death decisions. Medicine thus appears to be at an inflection point—a kind of Groundhog Day on which either AI will bring a springtime of improved diagnostic and predictive practices or the shadow of public and professional fear will lead to six more metaphorical weeks of winter in medical AI.

The brief perspective is here.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent will be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent will be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.