Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.

Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The info is here.

Thursday, July 28, 2016

Driverless Cars: Can There Be a Moral Algorithm?

By Daniel Callahan
The Hastings Center
Originally posted July 5, 2016

Here is an excerpt:

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver’s personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude by saying that a “moral algorithm” to take account of all these variations is needed, and that they “will need to tackle more intricate decisions than those considered in our survey.” As if there were not enough already.
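To make the discussion concrete, here is a minimal sketch, in Python, of what encoding that pedestrian-versus-driver trade-off as a weighted cost function might look like. Everything in it (the field names, the weights, the scenario values) is an illustrative assumption, not anything the researchers or Callahan propose.

    # Hypothetical sketch of a "moral algorithm" as a weighted cost function.
    # All weights and scenario values here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        pedestrian_harm: float  # expected harm, 0.0 (none) to 1.0 (fatal)
        occupant_harm: float    # expected harm, 0.0 (none) to 1.0 (fatal)

    def cost(o: Outcome, pedestrian_weight: float, occupant_weight: float) -> float:
        """Total weighted expected harm; lower is 'better' under this toy rule."""
        return pedestrian_weight * o.pedestrian_harm + occupant_weight * o.occupant_harm

    def choose(outcomes: list[Outcome], pedestrian_weight: float, occupant_weight: float) -> Outcome:
        """Pick the maneuver with the lowest weighted expected harm."""
        return min(outcomes, key=lambda o: cost(o, pedestrian_weight, occupant_weight))

    swerve = Outcome(pedestrian_harm=0.1, occupant_harm=0.6)
    brake = Outcome(pedestrian_harm=0.5, occupant_harm=0.1)

    # A regulator weighting pedestrians twice as heavily picks the swerve;
    # a buyer weighting occupants twice as heavily picks the brake.
    print(choose([swerve, brake], pedestrian_weight=2.0, occupant_weight=1.0))
    print(choose([swerve, brake], pedestrian_weight=1.0, occupant_weight=2.0))

Even this toy version makes the article’s point: the output is only as principled as the weights someone chose to put in.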

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers – unlike politicians, philosophers, and opinionated uncles – don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don’t think we can do it.  Maybe people in Greene’s professional tribe turn out exact algorithms with every dilemma they encounter.  If so, we envy them for having all the traits of software engineers.  No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers), each with different interests and values.

The article is here.