Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 2024

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article examines the ongoing debate over whether advanced AI could lead to catastrophic outcomes for humanity. It highlights the contrasting views of AI experts and superforecasters, with experts generally assigning higher probabilities to disaster scenarios. A study by the Forecasting Research Institute sought the root of these disagreements through an "adversarial collaboration," in which both groups engaged in extended discussion and exposure to new information and opposing viewpoints.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was whether AI could autonomously replicate and acquire resources before 2030. Even after the collaboration, opinions did not converge. The article attributes the persistent disagreement to fundamental differences in worldview and in expectations about AI's long-term development.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.