Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Sentience.

Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023).
Cambridge Quarterly of Healthcare Ethics, 32(4), 482-495.
doi:10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. Danaher argues that this is best understood as an ethical-epistemic challenge, and that we need to develop meta-moral decision rules that allow us to minimize the risks of moral wrongdoing or improve the choiceworthiness of our actions.

One particular argument that Danaher adopts is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. In the context of human–AI relationships, Danaher argues that taking these potential risk asymmetries seriously can help resolve disputes about the status of such relationships, at least in practical terms, provided the resolution rests on a proper, empirically grounded assessment of the risks involved.

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.
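The risk asymmetry argument has a simple decision-theoretic skeleton, and seeing that skeleton laid out makes the "empirically grounded assessment" point concrete. The sketch below is only an illustration of that general structure, not Danaher's own model; the credence and cost figures are hypothetical placeholders.

```python
# Toy sketch of a risk asymmetry calculation under moral uncertainty.
# All numbers below are hypothetical illustrations, not values from Danaher's paper.

def expected_moral_cost(p_sentient: float, cost_if_sentient: float, cost_if_not: float) -> float:
    """Expected moral cost of a policy, given a credence that the entity is sentient."""
    return p_sentient * cost_if_sentient + (1 - p_sentient) * cost_if_not

p = 0.1  # hypothetical credence that a given AI system is sentient

# Policy A: deny the AI any moral standing and treat it purely as a tool
# (high moral cost if it turns out to be sentient, negligible cost otherwise).
cost_deny = expected_moral_cost(p, cost_if_sentient=100.0, cost_if_not=0.0)

# Policy B: extend some moral consideration "just in case"
# (no wrongdoing if it is sentient, but some practical cost if it is not).
cost_extend = expected_moral_cost(p, cost_if_sentient=0.0, cost_if_not=5.0)

print(f"Expected cost of denying standing:   {cost_deny:.1f}")    # 10.0
print(f"Expected cost of extending standing: {cost_extend:.1f}")  # 4.5
```

With these placeholder numbers the asymmetry favors extending consideration even at a low credence, but changing the cost estimates can flip the verdict. That is precisely why the practical resolution turns on an empirically grounded assessment of the risks rather than on the bare structure of the argument.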

Saturday, June 24, 2023

The Darwinian Argument for Worrying About AI

Dan Hendrycks
Time.com
Originally posted 31 May 23

Here is an excerpt:

In the biological realm, evolution is a slow process. For humans, it takes nine months to create the next generation and around 20 years of schooling and parenting to produce fully functional adults. But scientists have observed meaningful evolutionary changes in species with rapid reproduction rates, like fruit flies, in fewer than 10 generations. Unconstrained by biology, AIs could adapt—and therefore evolve—even faster than fruit flies do.

There are three reasons this should worry us. The first is that selection effects make AIs difficult to control. Whereas AI researchers once spoke of “designing” AIs, they now speak of “steering” them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a “black box” with a decision-making process largely indecipherable to humans.

Second, evolution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. AIs that successfully gain influence and provide economic value will predominate, replacing AIs that act in a more narrow and constrained manner, even if this comes at the cost of lowering guardrails and safety measures. As an example, most businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that engages in such selfish behavior will most likely outperform its more principled competitors.

Selfishness doesn’t require malice or even sentience. When an AI automates a task and leaves a human jobless, this is selfish behavior without any intent. If competitive pressures continue to drive AI development, we shouldn’t be surprised if they act selfishly too.

The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Skeptics of AI risks often ask, “Couldn’t we just turn the AI off?” There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse. But natural selection poses a more fundamental barrier: we will select against AIs that are easy to turn off, and we will come to depend on AIs that we are less likely to turn off.

These strong economic and strategic pressures to adopt the systems that are most effective mean that humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species. There are no easy, surefire solutions to our predicament.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautilus
Originally posted 9 May 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 Mar 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But thanks to tools like ChatGPT, which implements GPT-4, being publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place. 

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Friday, May 21, 2021

In search of the moral status of AI: why sentience is a strong argument

Gibert, M., Martin, D. 
AI & Soc (2021). 
https://doi.org/10.1007/s00146-021-01179-z

Abstract

Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the particular argument, which leads us to move to a different one. We leave the idea of indirect duties aside since these duties do not imply considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant a moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system—based, among other things, on the Aristotelian principle of equality: that same cases should be treated in the same way. The paper points out, however, that no AI system is sentient given the current level of technological development.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.