The Kingston Whig-Standard
Originally published May 30, 2019
Here is an excerpt:
AI “will touch or transform every sector and industry in Canada,” the government of Canada said in a news release in mid-May, as it named 15 experts to a new advisory council on artificial intelligence, focused on ethical concerns. Their goal will be to “increase trust and accountability in AI while protecting our democratic values, processes and institutions,” and to ensure Canada has a “human-centric approach to AI, grounded in human rights, transparency and openness.”
It is a curious project, helping computers be more accountable and trustworthy. But here we are. Artificial intelligence has disrupted the basic moral question of how to assign responsibility after decisions are made, according to David Gunkel, a philosopher of robotics and ethics at Northern Illinois University. He calls this the “responsibility gap” of artificial intelligence.
“Who is able to answer for something going right or wrong?” Gunkel said. The answer, increasingly, is no one.
It is a familiar problem that is finding new expressions. One example was the 2008 financial crisis, which reflected the disastrous scope of automated decisions. Gunkel also points to the success of Google’s AlphaGo, a computer program that has beaten the world’s best players at the famously complex board game Go. Go has too many possible moves for a computer to calculate and evaluate them all, so the program uses a strategy of “deep learning” to reinforce promising moves, thereby approximating human intuition. So when it won against the world’s top players, such as top-ranked Ke Jie in 2017, there was confusion about who deserved the credit. Even the programmers could not account for the victory. They had not taught AlphaGo to play Go. They had taught it to learn Go, which it did all by itself.