Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Autonomous Vehicles.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).
https://doi.org/10.1038/s41562-019-0762-8

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption19, public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public underreaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based court-room decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.

Monday, April 6, 2020

Life and death decisions of autonomous vehicles

Y. E. Bigman and K. Gray
Nature
Originally published March 4, 2020

How should self-driving cars make decisions when human lives hang in the balance? The Moral Machine experiment (MME) suggests that people want autonomous vehicles (AVs) to treat different human lives unequally, preferentially killing some people (for example, men, the old and the poor) over others (for example, women, the young and the rich). Our results challenge this idea, revealing that this apparent preference for inequality is driven by the specific ‘trolley-type’ paradigm used by the MME. Multiple studies with a revised paradigm reveal that people overwhelmingly want autonomous vehicles to treat different human lives equally in life and death situations, ignoring gender, age and status—a preference consistent with a general desire for equality.

The large-scale adoption of autonomous vehicles raises ethical challenges because autonomous vehicles may sometimes have to decide between killing one person or another. The MME seeks to reveal people’s preferences in these situations and many of these revealed preferences, such as ‘save more people over fewer’ and ‘kill by inaction over action’ are consistent with preferences documented in previous research.

However, the MME also concludes that people want autonomous vehicles to make decisions about who to kill on the basis of personal features, including physical fitness, age, status and gender (for example, saving women and killing men). This conclusion contradicts well-documented ethical preferences for equal treatment across demographic features and identities, a preference enshrined in the US Constitution, the United Nations Universal Declaration of Human Rights and in the Ethical Guideline 9 of the German Ethics Code for Automated and Connected Driving.

The info is here.

Tuesday, May 14, 2019

Who Should Decide How Algorithms Decide?

Mark Esposito, Terence Tse, Joshua Entsminger, and Aurelie Jean
Project Syndicate
Originally published April 17, 2019

Here is an excerpt:

Consider the following scenario: a car from China has different factory standards than a car from the US, but is shipped to and used in the US. This Chinese-made car and a US-made car are heading for an unavoidable collision. If the Chinese car’s driver has different ethical preferences than the driver of the US car, which system should prevail?

Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries. A Chinese-made car, for example, might have access to social-scoring data, allowing its decision-making algorithm to incorporate additional inputs that are unavailable to US carmakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?

Clearly, before AVs take to the road en masse, we will need to establish where responsibility for algorithmic decision-making lies, be it with municipal authorities, national governments, or multilateral institutions. More than that, we will need new frameworks for governing this intersection of business and the state. At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decision-making algorithms.

The info is here.

Tuesday, July 10, 2018

Google to disclose ethical framework on use of AI

Richard Waters
The Financial Times
Originally published June 3, 2018

Here is an excerpt:

However, Google already uses AI in other ways that have drawn criticism, leading experts in the field and consumer activists to call on it to set far more stringent ethical guidelines that go well beyond not working with the military.

Stuart Russell, a professor of AI at the University of California, Berkeley, pointed to the company’s image search feature as an example of a widely used service that perpetuates preconceptions about the world based on the data in Google’s search index. For instance, a search for “CEOs” returns almost all white faces, he said.

“Google has a particular responsibility in this area because the output of its algorithms is so pervasive in the online world,” he said. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”

The information is here.

Friday, April 13, 2018

The Farmbots Are Coming

Matt Jancer
Wired.com
Originally published March 9, 2018

The first fully autonomous ground vehicles hitting the market aren’t cars or delivery trucks—they’re robo-farmhands. The Dot Power Platform is a prime example of an explosion in advanced agricultural technology, which Goldman Sachs predicts will raise crop yields 70 percent by 2050. But Dot isn’t just a tractor that can drive without a human for backup. It’s the Transformer of ag-bots, capable of performing 100-plus jobs, from hay baler and seeder to rock picker and manure spreader, via an arsenal of tool modules. And though the hulking machine can carry 40,000 pounds, it navigates fields with balletic precision.

The information is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However, machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language by mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.
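
As a rough illustration of that point (a toy example of my own, not code from the article), the sketch below shows how such name-occupation associations can be read out of learned word embeddings. The vectors are tiny hypothetical stand-ins; real systems learn embeddings with hundreds of dimensions from large text corpora.

```python
# Toy illustration of imported bias in learned word embeddings
# (hypothetical 4-dimensional vectors; real embeddings are learned from text).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: values near 1.0 mean strongly associated directions."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Contrived embeddings in which the male names happen to sit closer to
# "scientist" -- mirroring the kind of bias the excerpt describes.
embeddings = {
    "john":      np.array([0.9, 0.1, 0.3, 0.2]),
    "fred":      np.array([0.8, 0.2, 0.4, 0.1]),
    "joanna":    np.array([0.1, 0.9, 0.3, 0.2]),
    "fiona":     np.array([0.2, 0.8, 0.4, 0.1]),
    "scientist": np.array([0.7, 0.2, 0.5, 0.3]),
}

for name in ("john", "fred", "joanna", "fiona"):
    score = cosine(embeddings[name], embeddings["scientist"])
    print(f"{name:>7} ~ scientist: {score:.2f}")
# With this geometry, "John" and "Fred" score ~0.95 while "Joanna" and
# "Fiona" score ~0.5 -- exactly the learned association to be alert to.
```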

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.
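
One modest, concrete form that auditability could take (a sketch of my own, not something proposed in the article) is an append-only decision log: record what the system observed, what it decided, and which model version was in control, so that investigators can scrutinise what happened after the fact.

```python
# Minimal sketch of an auditable decision log (illustrative, not from the article).
import json
import time

def log_decision(logfile, observed: dict, action: str, model_version: str) -> None:
    """Append one automated decision as a JSON line for later scrutiny."""
    record = {
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which learned model was in control
        "observed": observed,            # the inputs the system acted on
        "action": action,                # what it decided to do
    }
    logfile.write(json.dumps(record) + "\n")

# Hypothetical usage: an automated car logging an emergency-braking decision.
with open("decisions.log", "a") as f:
    log_decision(f, {"speed_mph": 42, "obstacle": "pedestrian"}, "brake", "v1.3")
```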

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. Regardless of current sentiment, driverless cars are on their way. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out. Should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must either put their own passenger or a pedestrian at risk. This is a real world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Monday, August 7, 2017

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Saturday, July 29, 2017

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts

Devin Coldewey
Tech Crunch
Originally posted July 11, 2017

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally among MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

To that end, this first round of fundings supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion’s share of this initial round, $5.9 million, will be split between MIT and Harvard, as the initial announcement indicated. Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.

The fund’s focuses are threefold:

  • Media and information quality – looking at how to understand and control the effects of autonomous information systems and “influential algorithms” like Facebook’s news feed.
  • Social and criminal justice – perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
  • Autonomous cars – although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential socioeconomic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet — great potential for both advancement and abuse.

Wednesday, July 26, 2017

Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios

Leon R. Sütfeld, Richard Gast, Peter König and Gordon Pipa
Front. Behav. Neurosci., 05 July 2017

Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.
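
To make the "one-dimensional value-of-life scale" concrete, here is a minimal sketch of that model class, assuming hypothetical scale values and a logistic choice rule of my own choosing (the paper's fitted models and parameters are not reproduced here). Raising the noise parameter is one simple way to represent the reduced decision consistency the authors observed under severe time pressure.

```python
# Minimal sketch of a one-dimensional value-of-life choice model
# (illustrative values and logistic form; not the authors' fitted model).
import math

# Hypothetical positions on a single value-of-life scale.
VALUE = {"trash_can": 0.0, "dog": 1.5, "adult": 4.0, "child": 5.0}

def p_spare_a(a: str, b: str, noise: float = 1.0) -> float:
    """Probability of sparing obstacle `a` and sacrificing obstacle `b`.

    Higher `noise` flattens choices toward 50/50 -- less consistent
    decisions, as reported under severe time pressure.
    """
    return 1.0 / (1.0 + math.exp(-(VALUE[a] - VALUE[b]) / noise))

print(f"{p_spare_a('child', 'dog'):.2f}")             # ~0.97: spare the child
print(f"{p_spare_a('child', 'dog', noise=5.0):.2f}")  # ~0.67: noisier under pressure
```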

The article is here.

Tuesday, July 18, 2017

Human decisions in moral dilemmas are largely described by Utilitarianism

Anja Faulhaber, Anke Dittmer, Felix Blind, and others

Abstract

Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in a utilitarian way, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as the driver in a virtual reality environment. Participants had to make decisions between two discrete options: driving on one of two lanes where different obstacles came into view. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of a sidewalk as a potential safe harbor and a condition implicating a self-sacrifice. Results showed that subjects, in general, decided in a utilitarian manner, sparing the highest number of avatars possible with a limited influence of the other variables. Our findings support that people’s behavior is in line with the utilitarian approach to moral decision making. This may serve as a guideline for the implementation of moral decisions in ADVs.
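
Read as a decision rule, the utilitarian pattern the participants displayed is simple to state. The sketch below is my own gloss on it, not the study's code: between the two lanes, steer into whichever one harms fewer avatars.

```python
# Sketch of the utilitarian choice pattern the study reports (my gloss).
def choose_lane(avatars_left: int, avatars_right: int) -> str:
    """Steer into the lane with fewer avatars, sparing the greater number."""
    if avatars_left < avatars_right:
        return "left"
    if avatars_right < avatars_left:
        return "right"
    return "either"  # equal harm: the count-based rule gives no verdict

print(choose_lane(1, 4))  # "left": sacrifice one avatar to spare four
```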

The article is here.

Thursday, March 16, 2017

Mercedes-Benz’s Self-Driving Cars Would Choose Passenger Lives Over Bystanders

David Z. Morris
Fortune
Originally published Oct 15, 2016

In comments published last week by Car and Driver, Mercedes-Benz executive Christoph von Hugo said that the carmaker’s future autonomous cars will save the car’s driver and passengers, even if that means sacrificing the lives of pedestrians, in a situation where those are the only two options.

“If you know you can save at least one person, at least save that one,” von Hugo said at the Paris Motor Show. “Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

This doesn't mean Mercedes' robotic cars will neglect the safety of bystanders. Von Hugo, who is the carmaker’s manager of driver assistance and safety systems, is addressing the so-called “Trolley Problem”—an ethical thought experiment that applies to human drivers just as much as artificial intelligences.

The article is here.

The big moral dilemma facing self-driving cars

Steven Overly
The Washington Post
Originally published February 27, 2017

How many people could self-driving cars kill before we would no longer tolerate them?

This once-hypothetical question is now taking on greater urgency, particularly among policymakers in Washington. The promise of autonomous vehicles is that they will make our roads safer and more efficient, but no technology is without its shortcomings and unintended consequences — in this instance, potentially fatal consequences.

“What if we can build a car that’s 10 times as safe, which means 3,500 people die on the roads each year. Would we accept that?” asks John Hanson, a spokesman for the Toyota Research Institute, which is developing the automaker’s self-driving technology.

“A lot of people say, ‘If I could save one life, it would be worth it.’ But in a practical manner, though, we don’t think that would be acceptable,” Hanson added.

The article is here.

Monday, January 30, 2017

Finding trust and understanding in autonomous technologies

David Danks
The Conversation
Originally published December 30, 2016

Here is an excerpt:

Autonomous technologies are rapidly spreading beyond the transportation sector, into health care, advanced cyberdefense and even autonomous weapons. In 2017, we’ll have to decide whether we can trust these technologies. That’s going to be much harder than we might expect.

Trust is complex and varied, but also a key part of our lives. We often trust technology based on predictability: I trust something if I know what it will do in a particular situation, even if I don’t know why. For example, I trust my computer because I know how it will function, including when it will break down. I stop trusting if it starts to behave differently or surprisingly.

In contrast, my trust in my wife is based on understanding her beliefs, values and personality. More generally, interpersonal trust does not involve knowing exactly what the other person will do – my wife certainly surprises me sometimes! – but rather why they act as they do. And of course, we can trust someone (or something) in both ways, if we know both what they will do and why.

I have been exploring possible bases for our trust in self-driving cars and other autonomous technology from both ethical and psychological perspectives. These are devices, so predictability might seem like the key. Because of their autonomy, however, we need to consider the importance and value – and the challenge – of learning to trust them in the way we trust other human beings.

The article is here.

Thursday, December 8, 2016

Morality in transportation

Jeffrey C. Peters
The Conversation by way of Salon
Originally posted November 19, 2016

A common fantasy for transportation enthusiasts and technology optimists is for self-driving cars and trucks to form the basis of a safe, streamlined, almost choreographed dance. In this dream, every vehicle — and cyclist and pedestrian — proceeds unimpeded on any route, as the rest of the traffic skillfully avoids collisions and even eliminates stop-and-go traffic. It’s a lot like the synchronized traffic chaos in “Rush Hour,” a short movie by Black Sheep Films.

Today, autonomous cars are becoming more common, but safety is still a question. More than 30,000 people die on U.S. roads every year — nearly 100 a day. That’s despite the best efforts of government regulators, car manufacturers and human drivers alike. Early statistics from autonomous driving suggest that widespread automation could drive the death toll down significantly.

There’s a key problem, though: Computers like rules — solid, hard-and-fast instructions to follow. How should we program them to handle difficult situations? The hypotheticals are countless: What if the car has to choose between hitting one cyclist or five pedestrians? What if the car must decide to crash into a wall and kill its occupant, or slam through a group of kindergartners? How do we decide? Who does the deciding?

The article is here.

Thursday, September 15, 2016

World's First Self-Driving Taxis Debut in Singapore

Annabelle Liang and Dee-Ann Durbin
Associated Press
August 24, 2016

Here is an excerpt:

The service will start small — six cars now, growing to a dozen by the end of the year. The ultimate goal, say nuTonomy officials, is to have a fully self-driving taxi fleet in Singapore by 2018, which will help sharply cut the number of cars on Singapore's congested roads. Eventually, the model could be adopted in cities around the world, nuTonomy says.

For now, the taxis only will run in a 2.5-square-mile business and residential district called "one-north," and pick-ups and drop-offs will be limited to specified locations. And riders must have an invitation from nuTonomy to use the service. The company says dozens have signed up for the launch, and it plans to expand that list to thousands of people within a few months.

The article is here.