Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Drones.

Tuesday, August 7, 2018

Google’s AI ethics won’t curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?
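For readers curious what “extracting objects” from imagery involves in practice, the following is a minimal illustrative sketch of generic object detection with an off-the-shelf pretrained model. It is emphatically not Project Maven’s code (which is not public); the library, model choice, input file name, and confidence threshold are all assumptions made purely for the example.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained general-purpose detector (trained on everyday COCO
# categories, not military imagery).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "frame.jpg" is a hypothetical still frame pulled from video footage.
frame = to_tensor(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    detections = model([frame])[0]

# Report each detected object above an arbitrary 0.5 confidence threshold.
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score >= 0.5:
        print(f"class={label.item()}  score={score:.2f}  box={box.tolist()}")

The sketch only hints at the scale question the article raises: the same loop, run continuously over massive volumes of surveillance footage, is what turns a research technique into a targeting aid.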

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren’t involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Monday, May 22, 2017

The morality of technology

Rahul Matthan
Live Mint
Originally published May 3, 2017

Here is an excerpt:

Another example of the two sides of technology is drones—a modern technology that is already being deployed widely—from the delivery of groceries to ensuring that life-saving equipment reaches first responders in high-density urban areas. But for every beneficent use of drone tech, there are an equal number of dubious uses that challenge our ethical boundaries. Foremost among these is the development of AI-powered killer drones—autonomous flying weapons intelligent enough to accurately distinguish between friend and foe and then, autonomously, take the decision to execute a kill.

This duality is inherent in all of tech. But just because technology can be used for evil, that should not, of itself, be a reason not to use it. We need new technology to better ourselves and the world we live in—and we need to be wise about how we apply it so that our use remains consistent with the basic morality inherent in modern society. This implies that each time we make a technological breakthrough we must assess afresh the contexts within which it could present itself and the uses to which it should (and should not) be put. If required, we must take the trouble to re-draw our moral boundaries, establishing the limits within which it must be constrained.

The article is here.

Friday, August 1, 2014

Moral Hazards & Legal Conundrums of Our Robot-Filled Future

By Greg Miller
Wired
Originally posted July 17, 2014

The robots are coming, and they’re getting smarter. They’re evolving from single-task devices like Roomba and its floor-mopping, pool-cleaning cousins into machines that can make their own decisions and autonomously navigate public spaces. Thanks to artificial intelligence, machines are getting better at understanding our speech and detecting and reflecting our emotions. In many ways, they’re becoming more like us.

Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine.

The entire article is here.

Thursday, June 5, 2014

Drone Ethics is Easy

By Mike LaBossiere
Talking Philosophy
Originally published on May 16, 2014

When a new technology emerges, it is not uncommon for people to claim that the technology is outpacing ethics and law. Because of the nature of law (at least in countries like the United States), it is very easy for technology to outpace the law. However, it is rather difficult for technology to truly outpace ethics.

(cut)

It is, however, worth considering the possibility that a new technology could “break” an ethical theory by being such that the theory could not expand to cover the technology. However, this would show that the theory was inadequate rather than showing that the technology outpaced ethics.

Another reason that technology would have a hard time outpacing ethics is that an ethical argument by analogy can be applied to a new technology. That is, if the technology is like something that already exists and has been discussed in the context of ethics, the ethical discussion of the pre-existing thing can be applied to the new technology. This is, obviously enough, analogous to using ethical analogies to apply ethics to different specific situations (such as a specific act of cheating in a relationship).

The entire article is here.

Wednesday, March 27, 2013

Drones, Ethics and the Armchair Soldier

By John Kaag
The New York Times - Opinionator
Originally published on March 17, 2013

Here are some excerpts:

Ten years later, I’m a philosopher writing a book about the ethics of drone warfare. Some days I fear that I will have either to give up the book or to give up philosophy. I worry that I can’t have both. Some of my colleagues would like me to provide decision procedures for military planners and soldiers, the type that could guide them, automatically, unthinkingly, mechanically, to the right decision about drone use. I try to tell them that this is not how ethics, or philosophy, or humans, work.

I try to tell them that the difference between humans and robots is precisely the ability to think and reflect, in Immanuel Kant’s words, to set and pursue ends for themselves. And these ends cannot be set beforehand in some hard and fast way — even if Kant sometimes thought they could.

What disturbs me is the idea that a book about the moral hazard of military technologies should be written as if it were going to be read by robots: input decision procedure, output decision and correlated action. I know that effective military operations have traditionally been based on the chain of command and that this looks a little like the command-and-control structure of robots. When someone is shooting at you, I can only imagine that you need to follow orders mechanically. The heat of battle is neither the time nor the place for cool ethical reflection.

Warfare, unlike philosophy, could never be conducted from an armchair. Until now. For the first time in history, some soldiers have this in common with philosophers: they can do their jobs sitting down. They now have what I’ve always enjoyed, namely “leisure,” in the Hobbesian sense of the word, meaning they are not constantly afraid of being killed. Hobbes thought that there are certain not-so-obvious perks to leisure (not being killed is the obvious one). For one, you get to think. This is what he means when he says that “leisure is the mother of philosophy.” I tend to agree with Hobbes: only those who enjoy a certain amount of leisure can be philosophers.

The entire article is here.