Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cars.

Monday, August 29, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

By Jacob Brogan
Future Tense
Originally published August 11, 2016

Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child and running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandon notes, the site effectively gamifies the classic trolley problem, folding in a variety of complicated variations along the way. You’ll have to decide whether a vehicle should choose between its passengers and people in an intersection. Other scenarios will present two differently composed groups of pedestrians—say, a handful of female doctors or a collection of besuited men—and ask which an empty car should slam into. Further complications—including the presence of animals and details about whether the pedestrians have the right of way—sometimes muddle the question even more.
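
To make the setup concrete, here is a minimal sketch of how one of these dilemmas could be represented as data. The class and field names are hypothetical illustrations, not the Moral Machine site's actual schema.

```python
# Hypothetical encoding of a Moral Machine-style dilemma; names are illustrative.
from dataclasses import dataclass


@dataclass
class Party:
    kind: str                      # "passenger", "pedestrian", "animal", ...
    count: int = 1
    has_right_of_way: bool = False
    attributes: tuple = ()         # e.g. ("female", "doctor") or ("male", "suited")


@dataclass
class Dilemma:
    swerve_harms: list             # parties harmed if the car swerves
    stay_harms: list               # parties harmed if the car holds its course


# The scenario from this post's title: two jaywalkers vs. one law-abiding pedestrian.
example = Dilemma(
    swerve_harms=[Party("pedestrian", count=2, has_right_of_way=False)],
    stay_harms=[Party("pedestrian", count=1, has_right_of_way=True)],
)
```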

Monday, August 22, 2016

Autonomous Vehicles Might Develop Superior Moral Judgment

John Martellaro
The Mac Observer
Originally published August 10, 2016

Here is an excerpt:

One of the virtues (or drawbacks, depending on one’s point of view) of a morality engine is that the decisions an autonomous vehicle makes can be traced back only to software. That helps to absolve a car maker’s employees from direct liability when it comes to life-and-death decisions made by machine. That certainly seems to be an emerging trend in technology. The benefit is obvious. If a morality engine makes the right decision, by human standards, 99,995 times out of 100,000, the case for extreme damages due to systematic failure causing death is weak. Technology and society can move forward.
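
Restated as a failure rate, the figure quoted above works out as follows (a trivial calculation, shown only to make the number concrete):

```python
# The reliability figure from the excerpt, restated as a failure rate.
correct_decisions = 99_995
total_decisions = 100_000
failure_rate = 1 - correct_decisions / total_decisions
print(f"{failure_rate:.5f}")   # 0.00005, i.e. 5 failures per 100,000 decisions
print(f"{failure_rate:.3%}")   # 0.005%
```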

The article is here.

Friday, November 13, 2015

Why Self-Driving Cars Must Be Programmed to Kill

Emerging Technology From the arXiv
MIT Technology Review
Originally published October 22, 2015

Here is an excerpt:

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
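
As a bare-bones sketch of that harm-minimizing rule (the casualty estimates are made up; a real system would have to derive them from uncertain perception and prediction):

```python
# Minimal sketch of a "minimize loss of life" rule; the numbers are hypothetical.

def choose_action(actions):
    """Pick the action with the lowest expected number of deaths."""
    return min(actions, key=lambda a: a["expected_deaths"])


options = [
    {"name": "swerve into barrier", "expected_deaths": 1},   # sacrifices the passenger
    {"name": "continue straight", "expected_deaths": 10},    # hits the group ahead
]
print(choose_action(options)["name"])  # -> swerve into barrier
```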

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

Bonnefon and co are seeking a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with its own views.

The entire article is here.

Thursday, October 1, 2015

Ethics Won't Be A Big Problem For Driverless Cars

By Adam Ozimek
Forbes Magazine
Originally posted September 13, 2015

Skeptics of driverless cars have a variety of criticisms, from technical to demand-based, but perhaps the most curious is the supposed ethical trolley problem they create. While the question of how driverless cars will behave in ethical situations is interesting and will ultimately have to be answered by programmers, critics greatly exaggerate its importance. In addition, they assume that driverless cars have to be perfect rather than just better.

(cut)

Patrick Lin asks “Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child?” But seriously, how often do drivers actually make this decision? Accidents that provide this choice seem pretty rare. And if I am wrong and we’re actually living in a world rife with trolley problems for drivers, it seems likely that bad human driving and poor foresight create many of them. Having driverless cars that don’t get distracted, don’t speed dangerously, and can see 360 degrees will make it less likely that split-second life-and-death choices need to be made.

The entire article is here.

Tuesday, August 4, 2015

Killer Robots: The Soldiers that Never Sleep

By Simon Parkin
BBC.com
Originally published July 16, 2015

Here is an excerpt:

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians. How do you, as a software engineer, construct a set of rules for such a device to follow in these scenarios? Is it possible to programme a device to think for itself? For many, the simplest solution is to sidestep these questions by simply requiring any automated machine that puts human life in danger to allow a human override. This is the reason that landmines were banned by the Ottawa treaty in 1997. They were, in the most basic way imaginable, autonomous weapons that would detonate under whoever stepped on them.

In this context, the provision of human overrides makes sense. It seems obvious, for example, that pilots should have full control over a plane's autopilot system. But the 2015 Germanwings disaster, when co-pilot Andreas Lubitz deliberately crashed the plane into the French Alps, killing all 150 people on board, complicates the matter. Perhaps, in fact, no pilot should be allowed to override a computer – at least, not if it means they are able to fly a plane into a mountainside?

“There are multiple approaches to trying to develop ethical machines, and many challenges,” explains Gary Marcus, cognitive scientist at NYU and CEO and Founder of Geometric Intelligence. “We could try to pre-program everything in advance, but that’s not trivial – how for example do you program in a notion like ‘fairness’ or ‘harm’?” There is another dimension to the problem aside from ambiguous definitions. For example, any set of rules issued to an automated soldier will surely be either too abstract to be properly computable, or too specific to cover all situations.
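
A toy sketch of the difficulty Marcus is pointing at: the rule itself is easy to write, but the predicates it depends on are exactly the parts nobody knows how to program. The function names below are hypothetical placeholders, not anyone's proposed implementation.

```python
# Toy illustration: the rule is trivial, the predicates are the hard part.

def causes_harm(action, situation):
    # Hypothetical stub: there is no agreed operational definition of "harm".
    raise NotImplementedError("define 'harm' first")


def is_fair(action, situation):
    # Hypothetical stub: "fairness" is just as hard to pin down.
    raise NotImplementedError("define 'fairness' first")


def permitted(action, situation):
    """A hand-written rule: allow an action only if it is harmless and fair."""
    return not causes_harm(action, situation) and is_fair(action, situation)


# Calling permitted(...) raises immediately: the rule cannot run until
# 'harm' and 'fairness' are given operational definitions.
```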

The entire article is here.

Tuesday, December 23, 2014

Self-Driving Cars: Safer, but What of Their Morals

By Justin Pritchard
Associated Press
Originally posted November 19, 2014

Here is an excerpt:

"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death," said Lin. "When we make programming decisions, we expect those to be as right as we can be."

What right looks like may differ from company to company, but according to Lin, automakers have a duty to show that they have wrestled with these complex questions and publicly reveal the answers they reach.

The entire article is here.

Friday, September 5, 2014

Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings

By Patrick Lin
Wired
Originally posted August 18, 2014

Here is an excerpt:

So why not let the user select the car’s “ethics setting”? The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.
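
A minimal sketch of what such a user-selectable setting might look like in code; the names and weights are invented for illustration and are not drawn from the article.

```python
# Hypothetical "ethics setting" switch mirroring the three preferences above.
from enum import Enum


class EthicsSetting(Enum):
    PROTECT_OWNER = "protect_owner"          # value the owner's life over all others
    MINIMIZE_HARM = "minimize_harm"          # value all lives equally
    MINIMIZE_LIABILITY = "minimize_liability"


def crash_cost(outcome, setting):
    """Score a possible outcome under the chosen setting (lower is preferred)."""
    if setting is EthicsSetting.PROTECT_OWNER:
        return 1_000 * outcome["owner_deaths"] + outcome["other_deaths"]
    if setting is EthicsSetting.MINIMIZE_HARM:
        return outcome["owner_deaths"] + outcome["other_deaths"]
    return outcome["expected_legal_cost"]    # MINIMIZE_LIABILITY
```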

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

The entire story is here.

Sunday, June 1, 2014

The Ethics of Automated Cars

By Patrick Lin
Wired Magazine
Originally published May 6, 2014

Here is an excerpt:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?
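
To see why Lin calls this a targeting algorithm, here is a deliberately naive crash-optimization sketch. The vehicle categories and weights are hypothetical; the point is only that any such scoring function systematically singles out one class of road users.

```python
# Deliberately naive crash-optimization sketch; categories and weights are invented.

COLLISION_COST = {"suv": 1.0, "sedan": 2.0, "motorcycle": 10.0, "pedestrian": 100.0}


def pick_collision_target(unavoidable_targets):
    """If a crash cannot be avoided, hit the 'cheapest' target under the model."""
    return min(unavoidable_targets, key=lambda t: COLLISION_COST.get(t["kind"], 5.0))


# An SUV next to a sedan will be chosen every time, through no fault of its owner.
print(pick_collision_target([{"kind": "suv"}, {"kind": "sedan"}]))
```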

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

The entire story is here.