Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, November 9, 2017

Morality and Machines

Robert Fry
Originally published October 23, 2017

Here is an excerpt:

It is axiomatic that robots are more mechanically efficient than humans; equally they are not burdened with a sense of self-preservation, nor is their judgment clouded by fear or hysteria. But it is that very human fallibility that requires the intervention of the defining human characteristic—a moral sense that separates right from wrong—and explains why the ethical implications of the autonomous battlefield are so much more contentious than the physical consequences. Indeed, an open letter in 2015 seeking to separate AI from military application included the signatures of such luminaries as Elon Musk, Steve Wozniak, Stephen Hawking and Noam Chomsky. For the first time, therefore, human agency may be necessary on the battlefield not to take the vital tactical decisions but to weigh the vital moral ones.

So, who will accept these new responsibilities and how will they be prepared for the task? The first point to make is that none of this is an immediate prospect and it may be that AI becomes such a ubiquitous and beneficial feature of other fields of human endeavour that we will no longer fear its application in warfare. It may also be that morality will co-evolve with technology. Either way, the traditional military skills of physical stamina and resilience will be of little use when machines will have an infinite capacity for physical endurance. Nor will the quintessential commander’s skill of judging tactical advantage have much value when cognitive computing will instantaneously integrate sensor information. The key human input will be to make the judgments that link moral responsibility to legal consequence.

The article is here.