Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Superintelligence.

Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Time.com
Originally posted July 24, 2023

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.

(cut)

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.


Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted August 2, 2023

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.
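The piece mentions watermarking only in passing and does not explain the mechanism. One commonly discussed approach (offered here purely as an illustration, not as a description of OpenAI's actual scheme) is to bias a language model's sampling toward a keyed, pseudorandom "green list" of tokens derived from the preceding context; anyone who knows the key can later test whether a text contains an improbably high share of green tokens. A minimal Python sketch of the detection side, with all names hypothetical:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], key: str, fraction: float = 0.5) -> set[str]:
    """Derive a keyed, pseudorandom subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def green_fraction(tokens: list[str], vocab: list[str], key: str) -> float:
    """Detector: fraction of tokens that fall in the green list of their predecessor.
    Watermarked text (generated by upweighting green tokens) should score well
    above the ~0.5 baseline expected from unwatermarked text."""
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab, key)
               for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)
```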

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is an even riskier endeavor than playing Russian roulette. There is no way to guarantee that we will be able to control a superintelligent AI, and even if we could, the AI might not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, along with safety mechanisms that would prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AIs to decode human feelings and behavior and to evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scalable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.
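The abstract describes this bottom-up proposal conceptually rather than as an algorithm. As a rough illustration of what such incentives might look like in a reinforcement-learning setting, the sketch below shapes an agent's per-step reward with two extra penalty terms: one for damage to the agent's own simulated body (harm aversion) and one tied to distress signals from others (empathic concern). The signal names and weights are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class StepSignals:
    """Per-step observations an embodied agent might receive (hypothetical)."""
    task_reward: float      # ordinary task-performance reward
    self_damage: float      # 0..1, damage to the agent's own simulated body
    other_distress: float   # 0..1, estimated distress of nearby agents

def shaped_reward(signals: StepSignals,
                  harm_aversion: float = 2.0,
                  empathic_concern: float = 1.0) -> float:
    """Reward-shaping sketch: task reward minus penalties for harm to self
    and for estimated harm to others."""
    return (signals.task_reward
            - harm_aversion * signals.self_damage
            - empathic_concern * signals.other_distress)

# Example: a high task reward earned while causing distress is discounted.
print(shaped_reward(StepSignals(task_reward=1.0, self_damage=0.0, other_distress=0.6)))  # -> 0.4
```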

Sunday, May 10, 2020

Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly

Pim Haselager & Giulio Mecacci (2020)
AJOB Neuroscience, 11:2, 113-119
DOI: 10.1080/21507740.2020.1740353

Abstract

The human species is combining an increased understanding of our cognitive machinery with the development of a technology that can profoundly influence our lives and our ways of living together. Our sciences enable us to see our strengths and weaknesses, and build technology accordingly. What would future historians think of our current attempts to build increasingly smart systems, the purposes for which we employ them, the almost unstoppable goldrush toward ever more commercially relevant implementations, and the risk of superintelligence? We need a more profound reflection on what our science shows us about ourselves, what our technology allows us to do with that, and what, apparently, we aim to do with those insights and applications. As the smartest species on the planet, we don’t need more intelligence. Since we appear to possess an underdeveloped capacity to act ethically and empathically, we rather require the kind of technology that enables us to act more consistently upon ethical principles. The problem is not to formulate ethical rules, it’s to put them into practice. Cognitive neuroscience and AI provide the knowledge and the tools to develop the moral crutches we so clearly require. Why aren’t we building them? We don’t need superintelligence, we need superethics.

The article is here.

Tuesday, July 16, 2019

The possibility and risks of artificial general intelligence

Phil Torres
Bulletin of the Atomic Scientists 
Volume 75, 2019 - Issue 3: Special issue: The global competition for AI dominance

Abstract

This article offers a survey of why artificial general intelligence (AGI) could pose an unprecedented threat to human survival on Earth. If we fail to get the “control problem” right before the first AGI is created, the default outcome could be total human annihilation. It follows that since an AI arms race would almost certainly compromise safety precautions during the AGI research and development phase, an arms race could prove fatal not just to states but to the entire human species. In a phrase, an AI arms race would be profoundly foolish. It could compromise the entire future of humanity.

Here is part of the paper:

AGI arms races

An AGI arms race could be extremely dangerous, perhaps far more dangerous than any previous arms race, including the one that lasted from 1947 to 1991. The Cold War race was kept in check by the logic of mutually-assured destruction, whereby preemptive first strikes would be met with a retaliatory strike that would leave the first state as wounded as its rival. In an AGI arms race, however, if the AGI’s goal system is aligned with the interests of a particular state, the result could be a winner-take-all scenario.

The info is here.


Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by merit? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost open self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.

The article is here.

Monday, May 21, 2018

A Mathematical Framework for Superintelligent Machines

Daniel J. Buehrer
IEEE Access

Here is an excerpt:

Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world. With this definition, if the programs, neural networks, and Bayesian networks are put into read-only hardware, the machines will not be conscious since they cannot learn. We would not have to feel guilty of recycling these sims or robots (e.g. driverless cars) by melting them in incinerators or throwing them into acid baths, since they are only machines. However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.

Unsupervised hierarchical adversarially learned inference has already been shown to perform much better than human handcrafted features. The feedback mechanism tries to minimize the Jensen-Shannon information divergence between the many levels of a generative adversarial network and the corresponding inference network, which can correspond to a stack of part-of levels of a fuzzy class calculus IS-A hierarchy.
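For readers unfamiliar with the term, the Jensen-Shannon divergence is a symmetrized, bounded variant of the Kullback-Leibler divergence between probability distributions. The short Python sketch below computes it for two discrete distributions; it is purely illustrative and is not code from Buehrer's paper.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions (base 2)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i == 0 contribute nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by 1 bit."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)  # mixture distribution
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Example: two similar distributions have a small divergence.
print(js_divergence([0.4, 0.6], [0.5, 0.5]))  # ~0.007 bits
```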

From the viewpoint of humans, a sim should probably have an objective function for its reinforcement learning that allows it to become an excellent mathematician and scientist in order to “carry forth an ever-advancing civilization”. But such a conscious superintelligence “should” probably also make use of parameters to try to emulate the well-recognized “virtues” such as empathy, friendship, generosity, humility, justice, love, mercy, responsibility, respect, truthfulness, trustworthiness, etc.

The information is here.

A ‘Master Algorithm’ may emerge sooner than you think

Tristan Greene
thenextweb.com
Originally posted April 18, 2018

Here is an excerpt:

It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. The creation of a self-teaching class of calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would theoretically grow exponentially more intelligent every time any of the various learning systems it controls were updated.

Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:
Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.

Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn, “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”

The information is here.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius — almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Thursday, November 30, 2017

Why We Should Be Concerned About Artificial Superintelligence

Matthew Graves
Skeptic Magazine
Originally published November 2017

Here is an excerpt:

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

The article is here.

Sunday, August 27, 2017

Super-intelligence and eternal life

Transhumanism’s faithful follow it blindly into a future for the elite

Alexander Thomas
The Conversation
First published July 31, 2017

The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.

They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).

Nanobots could roam our bloodstream to monitor our health and enhance our emotional propensities for joy, love or other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near future.

“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.

The article is here.

Friday, August 11, 2017

What an artificial intelligence researcher fears about AI

Arend Hintze
TechXplore.com
Originally published July 14, 2017

Here is an excerpt:

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

The article is here.