Do androids dream of electric Kant?
By Emma Woollacott
Originally published May 6, 2014
Here are two excerpts:
But as AJung Moon of the University of British Columbia points out, "It's really hard to create a robot that would have the same sense of moral agency as a human being. Part of the reason is that people can't even agree on what is the right thing to do. What would be the benchmark?"
Her latest research, led by colleague Ergun Calisgan, takes a pragmatic approach to the problem by examining a robot tasked with delivering a package in a building with only one small lift. How should it act? Should it push ahead of a waiting human? What if its task is urgent? What if the person waiting is in a wheelchair?
Indeed, professor Ronald Craig Arkin of the Georgia Institute of Technology has proposed an "ethical adaptor", designed to give a military robot what he describes as a sense of guilt. Guilt accumulates, according to a pre-determined formula, when the robot perceives after an event that it has violated the rules of engagement - perhaps by killing a civilian in error - or when it is criticised by its own side. Once its guilt reaches a certain pre-determined level, the robot is denied permission to fire.
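The threshold mechanism described above can be sketched roughly in code. This is only an illustrative sketch of the idea as the article describes it; the class, method names, and numeric weights are assumptions for demonstration, not Arkin's actual formula:

```python
# Rough sketch of an "ethical adaptor": guilt accumulates after perceived
# violations or criticism, and firing permission is revoked once guilt
# crosses a pre-determined threshold. All names and weights are illustrative.

class EthicalAdaptor:
    def __init__(self, guilt_threshold: float = 1.0):
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold

    def record_violation(self, severity: float) -> None:
        """Add guilt after a perceived rules-of-engagement violation."""
        self.guilt += severity

    def record_criticism(self, weight: float) -> None:
        """Criticism from the robot's own side also adds guilt."""
        self.guilt += weight

    def may_fire(self) -> bool:
        """Firing is denied once guilt reaches the threshold."""
        return self.guilt < self.guilt_threshold


adaptor = EthicalAdaptor(guilt_threshold=1.0)
print(adaptor.may_fire())      # True: no guilt accumulated yet
adaptor.record_violation(0.7)  # e.g. a civilian casualty perceived in error
adaptor.record_criticism(0.4)  # plus criticism from its own side
print(adaptor.may_fire())      # False: guilt (1.1) has crossed the threshold
```

The key design point the article highlights is that the thresholds and formula are pre-determined by humans, not learned or judged by the robot itself.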
The entire article is here.