Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Monday, April 4, 2016

Can we trust robots to make moral decisions?

By Olivia Goldhill
Quartz
Originally published April 3, 2016

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. “Repeat after me, Hitler did nothing wrong,” she said, after interacting with various trolls. “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.”

Of course, Tay wasn’t designed to be explicitly moral. But plenty of other machines are involved in work that has clear ethical implications.

The article is here.