Originally posted March 7, 2019
Here is an excerpt:
Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.
It is bias that concerns people in the AI ethics community. They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether. They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible, so that we can check, in advance or in retrospect, for sources of bias and other forms of harm.
But if AI is a technology like fire or electricity, why is the field called “AI ethics”? We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics? There may be a terminological confusion here, and it could have negative consequences.
One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI itself, rather than to the humans who develop AI systems. The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock. It will probably be many years before we create an AI which can reasonably be described as a moral agent.
The info is here.