Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, November 24, 2018

Establishing an AI code of ethics will be harder than people think

Karen Hao
www.technologyreview.com
Originally posted October 21, 2018

Over the past six years, the New York City Police Department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists, who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But such calls often ignore a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Not only is facial recognition imperfect; studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

The info is here.