Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, June 23, 2018

Mining the uncertain character gap

Byron Williams
Winston-Salem Journal
Originally posted May 26, 2018

What is moral character? That question has remained open since human beings discovered the value of critical thinking. Thinkers like Aristotle and Confucius have wrestled with it; others, such as Abraham Lincoln and Mahatma Gandhi, sought to live out this perfect ideal in a rather imperfect way.

Wake Forest University philosophy professor Christian B. Miller grapples with this concept in his new book, “The Character Gap: How Good Are We?”

Drawing on empirical data from psychological research, Miller illustrates how humans can become better people. The difference between acting on our virtues and our vices may simply hinge on whether we believe we can get away with bad behavior.

Miller's thesis suggests that our internal “character gap” is the distance between the unrealistically virtuous view we hold of our own behavior and reality: the way we see ourselves versus how others see us. Moral character is our philosophical DNA, composed of virtues and vices.

The information is here.

Friday, June 22, 2018

About Half of Americans Say U.S. Moral Values Are ‘Poor’

Justin McCarthy
Gallup.com
Originally published June 1, 2018

Forty-nine percent of Americans say the state of moral values in the U.S. is "poor" -- the highest percentage in Gallup's trend on this measure since its inception in 2002. Meanwhile, 37% of U.S. adults say moral values are "only fair," and 14% say they are "excellent" or "good."

Americans have always viewed the state of U.S. morals more negatively than positively. But the latest figures are the worst to date, with a record-high 49% rating values as poor and a record-tying low of 14% rating them as excellent or good.

In earlier polls on the measure, Americans were about as likely to rate the country's moral standing as only fair as they were to say it was poor. But in 10 of the past 12 annual polls since 2007, Americans have been decidedly more likely to rate it as poor.

The information is here.

Should Economists Make Moral Judgments?

Jacek Rostowski
Project Syndicate
Originally published May 25, 2018

Here is an excerpt:

But now, for the first time in many decades, economists must consider the moral implications of giving good advice to bad people. They are no longer exempt from the moral quandaries that many other professionals must face – a classic example being the engineers who design missiles or other weapons systems.

The new moral dilemma facing economists is perhaps most stark within international financial institutions (IFIs) such as the International Monetary Fund, the World Bank, and the World Trade Organization, where economic mandarins with significant influence over public policy earn their living.

After the fall of Soviet-style communism, the IFIs admitted Russia and the other former Soviet republics (as well as China) on the assumption that they were each on a path to embracing democracy and a rules-based market economy. But now that democratic backsliding is widespread, economists need to ask if what is good for authoritarian states is also good for humanity. This question is particularly pertinent with respect to China and Russia, each of which is large enough to help shift the balance of world power against democracy.

That being the case, it stands to reason that democratic countries should try to limit the influence of authoritarian regimes within the IFIs – if not exclude them altogether in extreme cases. But it is worth distinguishing between two kinds of international institution in this context: rule-setting bodies that make it easier for countries with hostile ideological or national interests to co-exist; and organizations that create a strong community of interest, meaning that economic and political benefits for some members “spill over” and are felt more widely.

The article is here.

Thursday, June 21, 2018

Wells Fargo's ethics hotline calls are on the rise

Matt Egan
CNN.com
Originally posted June 19, 2018

A top Wells Fargo (WFC) executive said on Tuesday that employees are increasingly using the bank's confidential hotline to report bad behavior.

"Our volumes increased on our ethics line. We're glad they did. People raised their hand," said Theresa LaPlaca, who leads a conduct office that Wells Fargo created last year.

"That is success for me," LaPlaca said at the ACFE Global Fraud Conference in Las Vegas.

Convincing Wells Fargo workers to trust the bank's ethics hotline is no easy task. Nearly half a dozen workers told CNNMoney in 2016 that they were fired by Wells Fargo after calling the hotline to try to stop the bank's fake-account problem.

Last year, Wells Fargo was ordered to rehire and pay $5.4 million to a whistleblower who was fired after calling the ethics hotline to report suspected fraud. The bank faces multiple lawsuits from employees who say they suffered retaliation for protesting sales misconduct, and it said in a filing that it also faces state-law whistleblower actions filed with the Labor Department alleging retaliation.

The information is here.

Social Media as a Weapon to Harass Women Academics

George Veletsianos and Jaigris Hodson
Inside Higher Ed
Originally published May 29, 2018

Here is an excerpt:

Before beginning our inquiry, we assumed that the people who responded to our interview requests would be women who studied video games or gender issues, as prior literature had suggested they would be more likely to face harassment. But we quickly discovered that women are harassed when writing about a wide range of topics, including but not limited to: feminism, leadership, science, education, history, religion, race, politics, immigration, art, sociology and technology broadly conceived. The literature even identifies choice of research method as a topic that attracts misogynistic commentary.

So who exactly is at risk of harassment? They form a long list: women scholars who challenge the status quo; women who have an opinion that they are willing to express publicly; women who raise concerns about power; women of all body types and shapes. Put succinctly, people may be targeted for a range of reasons, but women in particular are harassed partly because they happen to be women who dare to be public online. Our respondents reported that they are harassed because they are women. Because they are women, they become targets.

At this point, if you are a woman reading this, you might be nodding your head, or you might feel frustrated that we are pointing out something so incredibly obvious. We might as well point out that rain is wet. But unfortunately, for many people who have not experienced the reality of being a woman online, this fact is still not obvious, is minimized, or is otherwise overlooked. To be clear, there is a gendered element to how both higher education institutions and technology companies handle this issue.

The article is here.

Wednesday, June 20, 2018

Can a machine be ethical? Why teaching AI ethics is a minefield.

Scotty Hendricks
Bigthink.com
Originally published May 31, 2018

Here is an excerpt:

Dr. Moor gives the example of Isaac Asimov’s three laws of robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The laws are hierarchical, and the robots in Asimov’s books are all obligated to follow them.
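To make the hierarchy concrete, here is a minimal Python sketch (our illustration, not Dr. Moor's or Asimov's; the action fields are invented) that treats the three laws as a lexicographic filter, where a higher law prunes the candidate actions before a lower law is consulted:

    # Toy sketch: Asimov's laws as a lexicographic filter over candidate
    # actions. Each action is a dict with invented boolean fields.

    def choose_action(candidates):
        """Apply the three laws in strict priority order, then pick an action."""
        # First Law: an absolute veto on harming humans.
        options = [a for a in candidates if not a["harms_human"]]
        # Second Law: among the survivors, prefer actions that obey human orders.
        options = [a for a in options if a["obeys_order"]] or options
        # Third Law: among those, prefer actions that preserve the robot itself.
        options = [a for a in options if a["preserves_self"]] or options
        return options[0] if options else None  # None: every action harms a human

    actions = [
        {"name": "shield the human", "harms_human": False,
         "obeys_order": True, "preserves_self": False},
        {"name": "stand aside", "harms_human": True,
         "obeys_order": True, "preserves_self": True},
    ]
    print(choose_action(actions)["name"])  # -> shield the human

Even this toy version hints at the trouble ahead: the "through inaction" clause of the First Law is nowhere captured by a simple veto, and encoding it would oblige the agent to weigh every possible intervention.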

Dr. Moor suggests that the problems with these laws are obvious. The First Law is so general that an artificial intelligence following it “might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings” and therefore be useless for its original function!

Such problems can be common in deontological systems, where following good rules can lead to funny results. Asimov himself wrote several stories about potential problems with the laws. Attempts to solve this issue abound, but the challenge of making enough rules to cover all possibilities remains. 

On the other hand, a machine could be programmed to stick to utilitarian calculus when facing an ethical problem. This would be simple to do: the computer need only be given a target variable and told to make choices that maximize it. While human happiness is a common choice, wealth, well-being, or security are also possibilities.
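A minimal sketch of that loop (again our illustration, with an invented utility table) shows how mechanical the utilitarian approach is; the hard part is choosing and estimating the target variable, not the maximization:

    # Toy utilitarian chooser: score each option on a single target variable
    # (here, hypothetical estimates of aggregate happiness) and take the argmax.

    def utilitarian_choice(options):
        """options maps each action to its estimated utility. Pick the maximum."""
        return max(options, key=options.get)

    options = {"tell the truth": 3, "tell a white lie": 5, "stay silent": 2}
    print(utilitarian_choice(options))  # -> tell a white lie

That the naive argmax cheerfully endorses the white lie is exactly the worry: the choice and measurement of the variable carry all the moral weight.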

The article is here.

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Tuesday, June 19, 2018

British Public Fears the Day When ‘Computer Says No’

Jasper Hamill
The Metro
Originally published May 31, 2018

Governments and tech companies risk a popular backlash against artificial intelligence (AI) unless they open up about how it will be used, according to a new report.

A poll conducted for the Royal Society of Arts (RSA) revealed widespread concern that AI will create a ‘Computer Says No’ culture, in which crucial decisions are made automatically without consideration of individual circumstances.

If the public feels ‘victimised or disempowered’ by intelligent machines, they may resist the introduction of new technologies, even if doing so holds back progress that could benefit them, the report warned.

Among those taking part in the survey, conducted by pollsters YouGov for the RSA, fear of inflexible and unfeeling automatic decision-making ranked as a greater concern than robots taking humans’ jobs.

The information is here.

Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas

Dries H. Bostyn, Sybren Sevenhant, and Arne Roets
Psychological Science 
First Published May 9, 2018

Abstract

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

The research is here.