Originally posted February 13, 2017
Here is an excerpt:
The largest tech companies – Apple, Amazon, Google, IBM, Microsoft and Facebook – have already committed to creating new standards to guide the development of artificial intelligence. Likewise, a recent EU Parliament investigation recommended the development of an advisory code for robotic engineers, as well as ‘electronic personhood’ for the most sophisticated robots to ensure their behaviour is captured by legal systems.
Other ideas include regulatory ‘sandboxes’ that would give AI developers more freedom to experiment, but under the close supervision of the authorities, and ‘software deposits’ for private code that would allow consumer rights organisations and government inspectors to audit algorithms behind closed doors. DARPA recently kicked off a new programme called Explainable AI (XAI), which aims to create machine learning systems that can explain the steps they take to arrive at a decision, as well as unpack the strengths and weaknesses of their conclusions.
There have even been calls to institute a Hippocratic Oath for AI developers. This would have the advantage of going straight to the source of potential issues – the people who write the code – rather than relying on the resources, skills and time of external enforcers. An oath might also help to concentrate the minds of the programming community as a whole in getting to grips with the above dilemmas. Inspiration can be taken from the way the IEEE, a technical professional association in the US, has begun drafting a framework for the ‘ethically aligned design’ of AI.
The article is here.