By Dylan Matthews
Interview on Vox
Updated August 19, 2014
Here is an excerpt:
The basic argument is simple. Many experts believe that at some point artificial intelligence will advance to the point where it not only exceeds human intelligence, but is capable of expanding its own intelligence, setting off an exponential "intelligence explosion." In theory, these hyper-intelligent machines could be used to serve human ends. They could cure diseases and resolve intractable scientific quandaries. In an extreme case, they could wholly replace human workers, enabling humankind to quit working and live comfortably off the robots' labor.
But the problem, Bostrom argues, is that superintelligent machines will be so much more intelligent than humans that they most likely won't remain tools. They'll become goal-driven actors in their own right, and their goals may not be compatible with those of humans. Indeed, they might not be compatible with the continued existence of humans. Please consult the Terminator franchise for more on how that situation plays out.
The full article and interview are here.