Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Singularity.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted August 2, 2023

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.
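To make the watermarking idea concrete: one commonly discussed approach (this is only an illustrative sketch, not the specific scheme Aaronson or OpenAI are developing) has the generator nudge its token choices toward a pseudorandom "green list" derived from the previous token and a secret key, so that anyone holding the key can later test whether green-list tokens appear far more often than chance. All names, parameters, and the toy "model" below are hypothetical.

    import hashlib
    import random

    SECRET_KEY = "example-key"  # hypothetical secret; a real deployment would guard this

    def green_list(prev_token, vocab, fraction=0.5):
        """Pseudorandomly mark a fraction of the vocabulary 'green', keyed by the previous token."""
        seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = list(vocab)
        rng.shuffle(shuffled)
        return set(shuffled[: int(len(shuffled) * fraction)])

    def generate_watermarked(prompt, vocab, length=200, bias=0.9):
        """Toy generator: picks tokens uniformly, but with probability `bias` restricts
        the choice to the green list keyed by the previous token (a stand-in for a real LLM)."""
        out = list(prompt)
        rng = random.Random(0)
        for _ in range(length):
            greens = green_list(out[-1], vocab)
            pool = list(greens) if rng.random() < bias else list(vocab)
            out.append(rng.choice(pool))
        return out

    def detect(tokens, vocab, fraction=0.5):
        """Fraction of tokens that land in their green list: watermarked text scores
        well above `fraction`; ordinary text hovers near it."""
        hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab, fraction))
        return hits / max(len(tokens) - 1, 1)

    vocab = [f"tok{i}" for i in range(100)]
    text = generate_watermarked(["tok0"], vocab)
    print(detect(text, vocab))  # roughly 0.95 here, versus about 0.5 for unwatermarked text

The point of the sketch is only that the signal is statistical: no single token proves anything, but over a long enough passage the bias becomes detectable to anyone who holds the key.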

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is a risky endeavor, even more so than playing Russian roulette. Further, the authors contend that there is no way to guarantee we will be able to control a superintelligent AI, and that even if we could, the AI might not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, and that we need safety mechanisms to prevent AI from harming humanity. (See Shelley's Frankenstein as a literary example.)

Sunday, July 17, 2016

AI, Transhumanism, Merging with Superintelligence + Singularity Explained

Hosted by Michael Parker
The Antidote: TheLipTV2
Originally published on Mar 2, 2015

Artificial Intelligence, the possibility of merging consciousness with computers, and the singularity are discussed in this mind-expanding conversation with Dr. Susan Schneider. Are we prepared to face the implications of the success of our own technological innovations? Is the universe teeming with postbiological super Artificial Intelligence? Can silicon-based entities bond with carbon-based lifeforms? Explore the philosophical questions of superintelligence on The Antidote, hosted by Michael Parker.


Tuesday, May 10, 2016

Where do minds belong?

by Caleb Scharf
Aeon
Originally published March 22, 2016

As a species, we humans are awfully obsessed with the future. We love to speculate about where our evolution is taking us. We try to imagine what our technology will be like decades or centuries from now. And we fantasise about encountering intelligent aliens – generally, ones who are far more advanced than we are. Lately those strands have begun to merge. From the evolution side, a number of futurists are predicting the singularity: a time when computers will soon become powerful enough to simulate human consciousness, or absorb it entirely. In parallel, some visionaries propose that any intelligent life we encounter in the rest of the Universe is more likely to be machine-based, rather than humanoid meat-bags such as ourselves.

These ruminations offer a potential solution to the long-debated Fermi Paradox: the seeming absence of intelligent alien life swarming around us, despite the fact that such life seems possible. If machine intelligence is the inevitable end-point of both technology and biology, then perhaps the aliens are hyper-evolved machines so off-the-charts advanced, so far removed from familiar biological forms, that we wouldn’t recognise them if we saw them. Similarly, we can imagine that interstellar machine communication would be so optimised and well-encrypted as to be indistinguishable from noise. In this view, the seeming absence of intelligent life in the cosmos might be an illusion brought about by our own inadequacies.

The article is here.

Monday, March 10, 2014

Mental lives and Fodor's lot

Susan Schneider interviewed by Richard Marshall
3:AM Magazine
Originally posted February 14, 2014

Here is an excerpt:

3:AM: You make strong claims about thought experiments and find them valuable. Some philosophers like Paul Horwich disagree and find them misleading and useless. How can something imaginary lead to knowledge and enlightenment?

SS: Philosophers face a dilemma. On the one hand, philosophers often theorize about the nature of things, so it is useful to think of what might be the case, as opposed to what happens to be the case. For instance, metaphysicians who consider the nature of the self or person commonly consider cases like teleportation and brain transplants, to see if one’s theory of the self yields a viable result concerning whether one would survive such things. On the other hand, thought experiments can be misused. For instance, it strikes some as extreme to discard an otherwise plausible theory because it runs contrary to our intuitions about a thought experiment, especially if the example is far-fetched and not even compatible with our laws of nature. And there has been a movement in philosophy called “experimental philosophy” which claims that people of different ethnicities, genders, and socioeconomic backgrounds can come to different conclusions about certain thought experiments because of their different backgrounds.

I still employ thought experiments in my work, but I try to bear in mind three things: first, the presence of a thought experiment that pumps intuitions contrary to a theory should not automatically render the theory false. But a thought experiment can speak against a theory in an all-things-considered judgment; this is an approach I’ve employed in debates over laws of nature.

The entire story is here.