Bulletin of the Atomic Scientists
Volume 75, 2019 - Issue 3: Special issue: The global competition for AI dominance
This article surveys why artificial general intelligence (AGI) could pose an unprecedented threat to human survival. If we fail to solve the "control problem" before the first AGI is created, the default outcome could be total human annihilation. Since an AI arms race would almost certainly compromise safety precautions during AGI research and development, such a race could prove fatal not just to states but to the entire human species. In a phrase, an AI arms race would be profoundly foolish: it could compromise the entire future of humanity.
AGI arms races
An AGI arms race could be extremely dangerous, perhaps far more dangerous than any previous arms race, including the nuclear arms race that lasted from 1947 to 1991. The Cold War race was kept in check by the logic of mutually assured destruction, whereby a preemptive first strike would be met with a retaliatory strike that would leave the attacking state as wounded as its rival. In an AGI arms race, however, if the AGI's goal system is aligned with the interests of a particular state, the result could be a winner-take-all scenario.