Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Forecasting.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 2024

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Friday, February 7, 2020

People Who Second-Guess Themselves Make Worse Decisions

Christopher Ingraham
The Washington Post
Originally posted 9 Jan 2020

Here is an excerpt:

The researchers specifically wanted to know whether the revisions were more accurate than the originals.

In theory, there are a lot of reasons to believe this might be the case. A person would presumably revise a prediction after obtaining new information, such as an analyst’s match forecast or a team roster change.

In practice, however, the opposite was true: Revised forecasts accurately predicted the final match score 7.7 percent of the time. But the unaltered forecasts were correct 9.3 percent of the time.

In other words, revised forecasts were about 17 percent less accurate than those that had never changed.
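To see where that "about 17 percent" comes from, here is a quick arithmetic check using only the accuracy rates quoted above (a minimal illustrative calculation, not part of the original article):

# Accuracy rates quoted in the excerpt above.
unrevised_accuracy = 0.093   # unaltered forecasts correct 9.3% of the time
revised_accuracy = 0.077     # revised forecasts correct 7.7% of the time

# Relative drop in accuracy attributable to revising.
relative_drop = (unrevised_accuracy - revised_accuracy) / unrevised_accuracy
print(f"Revised forecasts were about {relative_drop:.0%} less accurate")  # ~17%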

(cut)

So where did the second-guessers go wrong? For starters, the researchers controlled for match-to-match and player-to-player variation — it isn’t likely the case, in other words, that matches receiving more revisions were more difficult to predict, or that bad guessers were more likely to revise their forecasts.

The researchers found that revisions were more likely to go awry when forecasters dialed up the scores — by going, say, from predicting a 2-1 final score to 3-2. Indeed, across the data set, the bettors systematically underestimated the likelihood of a 0-0 draw: an outcome anticipated 1.5 percent of the time that actually occurs in 8.4 percent of matches.
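The size of that 0-0 miscalibration is easier to appreciate with the same kind of quick check (again a purely illustrative calculation using only the figures quoted above):

# Figures quoted in the excerpt above.
predicted_rate = 0.015   # bettors' average forecast probability of a 0-0 draw
observed_rate = 0.084    # share of matches that actually ended 0-0

print(f"0-0 draws occurred {observed_rate / predicted_rate:.1f} times as often as forecast")  # ~5.6x
print(f"Calibration gap: {(observed_rate - predicted_rate) * 100:.1f} percentage points")     # ~6.9 points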

The info is here.

Wednesday, July 13, 2016

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures

By Mitch Smith
The New York Times
Originally published June 23, 2016

Here is an excerpt:

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

Company officials say the algorithm’s results are backed by research, but they are tight-lipped about its details. They do acknowledge that men and women receive different assessments, as do juveniles, but the factors considered and the weight given to each are kept secret.

“The key to our product is the algorithms, and they’re proprietary,” said Jeffrey Harmon, Northpointe’s general manager. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business. It’s not about looking at the algorithms. It’s about looking at the outcomes.”
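Northpointe does not disclose how Compas works, so nothing below reflects its actual internals. As a purely hypothetical sketch of how a risk score of this general kind can combine survey answers with past-conduct data, every feature name and weight here is invented for illustration:

import math

# Hypothetical recidivism risk score: a simple logistic model over
# survey responses and criminal-history features. All names and weights
# are invented for illustration; this is NOT the Compas algorithm.
WEIGHTS = {
    "prior_arrests": 0.30,
    "age_at_first_offense": -0.05,
    "unstable_housing": 0.40,
    "reports_peer_criminality": 0.25,
}
INTERCEPT = -2.0

def risk_score(features):
    """Return a probability-like score between 0 and 1."""
    z = INTERCEPT + sum(w * features.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Example defendant profile (hypothetical values).
print(round(risk_score({
    "prior_arrests": 3,
    "age_at_first_offense": 19,
    "unstable_housing": 1,
    "reports_peer_criminality": 1,
}), 2))

Part of the backlash described in the article is precisely about whether defendants should be able to inspect weights like these when such a score informs sentencing.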

The article is here.