The Wall Street Journal
Originally published July 9, 2017
Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.
As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI has already factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.
Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.
One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.
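To make that point concrete, here is a minimal sketch (not any system described in the article) of training a tiny neural network on the XOR function. The network "learns" the task, but what the training leaves behind is an array of numeric weights, not inspectable code; the layer sizes, learning rate, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Toy training data: the XOR function on two inputs.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of simulated neurons with random initial weights.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # network's predictions
    # Backpropagate the error and nudge the weights (gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out).ravel())  # predictions, ideally matching [0, 1, 1, 0]
print(W1)                     # the network's "knowledge": just a matrix of numbers
```

Even in this toy case, nothing in `W1` or `W2` says "XOR" in any human-readable way; scaled up to millions or billions of weights, that opacity is the interpretability problem the article describes.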
Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?
We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”
“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”