Originally posted May 13, 2018
Here is an excerpt:
But excitement about the software soon gave way to comprehension of the ethical minefield it created. Google’s initial demo gave no indication that the person on the other end of the phone would be alerted that they were talking to a robot. The software even had human-like quirks built into it, pausing to say “um” and “mm-hmm” — a touch designed to seem endearing but that ended up appearing more deceptive.
Some found the whole idea that a person should have to go through an artificial conversation with a robot somewhat demeaning, insulting even.
After a day of criticism, Google attempted to play down some of the concerns. It said the technology had no fixed release date, that it would take people’s concerns into account, and it promised that the software would identify itself as such at the start of every phone call.
But the fact that it did not do this from the outset was not a promising sign. The last two years of massive data breaches, evidence of Russian propaganda campaigns on social media and privacy failures have proven what should always have been obvious: that the internet has as much power to do harm as good. Every frontier technology now needs to be built with at least some level of paranoia — someone asking: “How could this be abused?”
The information is here.