"Living a fully ethical life involves doing the most good we can." - Peter Singer
"Common sense is not so common." - Voltaire
“There are two ways to be fooled. One is to believe what isn't true; the other is to refuse to believe what is true.” ― Søren Kierkegaard

Wednesday, February 22, 2017

It's time for some messy, democratic discussions about the future of AI

Jack Stilgoe and Andrew Maynard
The Guardian
Originally posted February 1, 2017

Here is an excerpt:

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

But avoiding awkward public conversations helps nobody. Scientists are more inclined to guess at what the public are worried about than to ask them, which can lead to some serious blind spots – not necessarily in scientific understanding (although this too can occur), but in the direction and nature of research and development.

The article is here.