Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Autonomous Weapons Systems.

Monday, September 28, 2020

Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be?

Sébastien Roblin
nbcnews.com
Originally published August 31, 2020

Here is an excerpt:

The AlphaDogfight simulation on Aug. 20 was an important milestone for AI and its potential military uses. While this achievement shows that AI can master increasingly difficult combat skills at warp speed, the Pentagon’s futurists still must remain mindful of its limitations and risks — both because AI remains a long way from eclipsing the human mind in many critical decision-making roles, despite what the likes of Elon Musk have warned, and to make sure we don’t race ahead of ourselves and inadvertently leave the military exposed to new threats.

That’s not to minimize this latest development. Within the scope of the simulation, the AI pilot exceeded human limitations in the tournament: It was able to consistently execute accurate shots in very short timeframes; consistently push the airframe’s tolerance of the force of gravity to its maximum potential without going beyond that; and remain unaffected by the crushing pressure exerted by violent maneuvers the way a human pilot would.

All the more remarkable, Heron’s AI pilot was self-taught using deep reinforcement learning, a method in which an AI runs a combat simulation over and over again and is “rewarded” for rapidly successful behaviors and “punished” for failure. 


I bolded the last sentence because of its importance.
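
The reward-and-punishment loop described in that sentence is the core of reinforcement learning. Below is a minimal, self-contained sketch of the idea on a toy pursuit task; it uses tabular Q-learning rather than the deep reinforcement learning Heron Systems actually used, and every detail (the action set, the reward values, the "gun solution" target) is invented purely for illustration.

```python
# Toy sketch of the "rewarded for rapid success, punished for failure" loop.
# Tabular Q-learning stands in for deep RL; all numbers are illustrative only.
import random

ACTIONS = [-1, 0, 1]   # turn left, hold, turn right (toy action space)
TARGET = 5             # relative bearing that counts as a "gun solution"
MAX_STEPS = 20

def run_episode(q, epsilon=0.1, alpha=0.5, gamma=0.9):
    state = random.randint(0, 10)   # relative bearing to the opponent
    for step in range(MAX_STEPS):
        # epsilon-greedy action selection over the learned Q-values
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        next_state = max(0, min(10, state + action))
        if next_state == TARGET:
            reward, done = 10.0 - 0.4 * step, True   # more reward for rapid success
        elif step == MAX_STEPS - 1:
            reward, done = -10.0, True               # "punished" for failing to close
        else:
            reward, done = -0.1, False               # small time penalty each step
        # Standard Q-learning update toward reward plus discounted best next value
        old = q.get((state, action), 0.0)
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
        if done:
            break

q_table = {}
for _ in range(5000):   # the simulation is run "over and over again"
    run_episode(q_table)
```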

Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.

Tuesday, August 7, 2018

Thousands of leading AI researchers sign pledge against killer robots

Ian Sample
The Guardian
Originally posted July 18, 2018

Here is an excerpt:

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons it deploys. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

The info is here.

Thursday, December 21, 2017

An AI That Can Build AI

Dom Galeon and Kristin Houser
Futurism.com
Originally published on December 1, 2017

Here is an excerpt:

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

The information is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language from mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.
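
One way to see how such biases get imported is to measure how close a name sits to a word like "scientist" in a model's learned vector space. The sketch below uses tiny hand-made vectors purely for illustration; real audits apply the same cosine-similarity measure to embeddings learned from text (for example word2vec or GloVe), often via tests such as WEAT.

```python
# Hypothetical toy example of measuring name-profession associations.
# The 3-d vectors below are invented; real embeddings are learned from text.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vectors = {
    "scientist": [0.9, 0.1, 0.2],
    "john":      [0.8, 0.2, 0.1],
    "fred":      [0.7, 0.3, 0.2],
    "joanna":    [0.2, 0.9, 0.1],
    "fiona":     [0.3, 0.8, 0.2],
}

for name in ("john", "fred", "joanna", "fiona"):
    score = cosine(vectors[name], vectors["scientist"])
    print(f"{name:>7} ~ scientist: {score:.2f}")

# A model trained on biased text would show systematically higher scores for
# male names: exactly the imported association the excerpt warns about.
```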

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

Friday, September 15, 2017

Robots and morality

The Big Read (which is actually in podcast form)
The Financial Times
Originally posted August 2017

Now that our mechanical creations can act independently, what happens when AI goes wrong? Where does moral, ethical and legal responsibility for robots lie — with the manufacturers, the programmers, the users or the robots themselves? asks John Thornhill. And who owns their rights?

Click on the link below to access the 13-minute podcast.

Podcast is here.

Monday, August 7, 2017

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Monday, August 31, 2015

The Moral Code

By Nayef Al-Rodhan
Foreign Affairs
Originally published August 12, 2015

Here is an excerpt:

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will be deployed in more complex situations that require spontaneous choices. The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.
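
A rough way to picture the distinction (my own framing, not code from Wallach and Allen): operational morality amounts to a lookup over situations the designer anticipated and precoded, whereas functional morality would require the machine itself to appraise situations nobody anticipated. The sketch below is hypothetical and only illustrates that contrast.

```python
# Operational morality: every situation/response pair is anticipated in advance
# by the designer. The table and labels here are invented for illustration.
PRECODED_RESPONSES = {
    ("armed_adult", "combat_zone"): "engage_permitted",
    ("unarmed_adult", "combat_zone"): "hold_fire",
    ("child", "combat_zone"): "hold_fire",
}

def operational_decision(profile, context):
    # Anything the designer did not foresee falls back to a safe default.
    return PRECODED_RESPONSES.get((profile, context), "hold_fire_and_alert_operator")

def functional_decision(situation_features):
    # Placeholder: functional morality would require the machine to appraise
    # unanticipated situations itself, which is the harder problem the
    # excerpt describes; no lookup table can cover it.
    raise NotImplementedError("moral appraisal of novel situations")

print(operational_decision("child", "combat_zone"))         # hold_fire
print(operational_decision("armed_adult", "urban_street"))  # unanticipated -> safe default
```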

The entire article is here.

Tuesday, August 4, 2015

Killer Robots: The Soldiers that Never Sleep

By Simon Parkin
BBC.com
Originally published July 16, 2015

Here is an excerpt:

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians. How do you, as a software engineer, construct a set of rules for such a device to follow in these scenarios? Is it possible to programme a device to think for itself? For many, the simplest solution is to sidestep these questions by simply requiring any automated machine that puts human life in danger to allow a human override. This is the reason that landmines were banned by the Ottawa treaty in 1997. They were, in the most basic way imaginable, autonomous weapons that would explode whoever stepped on them.

In this context the provision of human overrides makes sense. It seems obvious, for example, that pilots should have full control over a plane's autopilot system. But the 2015 Germanwings disaster, when co-pilot Andreas Lubitz deliberately crashed the plane into the French Alps, killing all 150 people on board, complicates the matter. Perhaps, in fact, no pilot should be allowed to override a computer – at least, not if it means they are able to fly a plane into a mountainside?
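
One common reading of a human override is a gate that blocks any lethal action the system proposes until a human operator explicitly confirms it. The sketch below is hypothetical and only illustrates that gating idea; as the Germanwings case shows, the reverse direction (a machine constraining a human) is also contested.

```python
# Hypothetical human-in-the-loop gate: the system may propose a lethal action
# but cannot execute it without explicit operator confirmation.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    target: str
    confidence: float

def request_human_confirmation(proposal: Proposal) -> bool:
    # Stand-in for a real operator interface (console, datalink, ground station).
    answer = input(f"Authorize {proposal.action} on {proposal.target} "
                   f"(confidence {proposal.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(proposal: Proposal) -> None:
    if proposal.action == "fire" and not request_human_confirmation(proposal):
        print("Held fire: no human authorization.")
        return
    print(f"Executing {proposal.action} on {proposal.target}.")

execute(Proposal(action="fire", target="vehicle_12", confidence=0.87))
```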

“There are multiple approaches to trying to develop ethical machines, and many challenges,” explains Gary Marcus, cognitive scientist at NYU and CEO and Founder of Geometric Intelligence. “We could try to pre-program everything in advance, but that’s not trivial – how for example do you program in a notion like ‘fairness’ or ‘harm’?” There is another dimension to the problem aside from ambiguous definitions. For example, any set of rules issued to an automated soldier will surely be either too abstract to be properly computable, or too specific to cover all situations.

The entire article is here.

Thursday, May 14, 2015

Do Killer Robots Violate Human Rights?

When machines are anthropomorphized, we risk applying a human standard that should not apply to mere tools.

By Patrick Lin
The Atlantic
Originally published April 20, 2015

Here is an excerpt:

What’s objectionable to many about lethal autonomous weapons systems is that, even if the weapons aim only at lawful targets, they seem to violate a basic right to life. This claim is puzzling at first, since killing is so commonplace and permitted in war. If you’re a combatant, you are legally liable to be killed at any time; so it’s unclear that there’s a right to life at all.

But what we mean is that, in armed conflicts, a right to life means a right not to be killed arbitrarily, unaccountably, or otherwise inhumanely. To better understand the claim, a right to life can be thought of as a right to human dignity. Human dignity is arguably more basic than a right to life, which can be more easily forfeited or trumped. For instance, even lawful executions should be humane in civilized society.

The entire article is here.