Forbes Insight Team
Originally posted March 27, 2019
As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out on, the ethical obstacle course AI often presents.
AI-powered loan and credit approval processes have been marred by unforeseen bias. The same goes for recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.
Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.
A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.
“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”