Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Harm Avoidance.

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are growing increasingly attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, treating them as targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems should ethically be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts: systems designed so that users can readily see they are not sentient beings. The second is to create systems that clearly do deserve moral consideration as sentient beings: systems whose design genuinely warrants the kind of moral concern we extend to humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.
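To make the first policy concrete, here is a minimal sketch, in Python, of how a non-sentience disclosure might be enforced around a chatbot's replies. It is purely illustrative: the trigger patterns, the disclosure text, and the respond() wrapper are my own assumptions, not anything specified in the article.

import re

# Illustrative (hypothetical) patterns suggesting the user is probing
# the system's sentience or moral status.
SENTIENCE_PATTERNS = re.compile(
    r"\b(are you (sentient|conscious|alive)"
    r"|do you (feel|suffer)"
    r"|do you have (feelings|emotions))\b",
    re.IGNORECASE,
)

DISCLOSURE = (
    "Note: I am a software system. I am not sentient and have no "
    "feelings, so nothing that happens here can harm or please me."
)

def respond(user_message: str, model_reply: str) -> str:
    """Append a clear non-sentience disclosure whenever the user's
    message probes the system's sentience or moral status."""
    if SENTIENCE_PATTERNS.search(user_message):
        return f"{model_reply}\n\n{DISCLOSURE}"
    return model_reply

print(respond("Do you have feelings?", "That's an interesting question."))

A real deployment would need far more than keyword matching, but the design intent is the point: the system actively resists user confusion about its moral status rather than exploiting it.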

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems that are plausibly debatable as sentient.
  • Morally confusing AI systems place their owners and users in unfortunate ethical dilemmas, because it is unclear what treatment such systems are owed.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Saturday, June 28, 2014

A Seattle doctor is suspended for sexting during surgery

By Lindsey Bever
The Washington Post
Originally published June 10, 2014

Here is an excerpt:

Medical authorities have suspended the license of a Seattle anesthesiologist for allegedly sending explicit “selfies” and exchanging sexy text messages during surgeries.

The findings against 47-year-old Arthur K. Zilberstein, released Monday by the Washington state Department of Health, detail nearly 250 text messages with sexual innuendo he exchanged during procedures — all kinds of procedures, including Cesarean deliveries, pediatric appendectomies, epidurals, tubal ligations, cardiac-probe insertions.

The entire article is here.

Monday, October 28, 2013

The Dangers of Pseudoscience

By Massimo Pigliucci and Maarten Boudry
The New York Times - Opinionator
Originally published October 10, 2013

Philosophers of science have been preoccupied for a while with what they call the “demarcation problem,” the issue of what separates good science from bad science and pseudoscience (and everything in between). The problem is relevant for at least three reasons.

The first is philosophical: Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and on the nature of truth and discovery. The second reason is civic: our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard. Should the National Institutes of Health finance research on “alternative medicine”? Should the Department of Defense fund studies on telepathy? Third, as an ethical matter, pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare, sometimes fatally so. For instance, millions of people worldwide have died of AIDS because they (or, in some cases, their governments) refused to accept basic scientific findings about the disease, entrusting their fates to folk remedies and “snake oil” therapies.

The entire article is here.

Saturday, October 12, 2013

How serotonin shapes moral judgment and behavior

By Jenifer Z. Siegel and Molly J. Crockett
Annals of the New York Academy of Sciences
Originally published September 24, 2013

DOI: 10.1111/nyas.12229

Abstract

Neuroscientists are now discovering how hormones and brain chemicals shape social behavior, opening potential avenues for pharmacological manipulation of ethical values. Here, we review recent studies showing how altering brain chemistry can alter moral judgment and behavior, focusing in particular on the neuromodulator serotonin and its role in shaping values related to harm and fairness. We synthesize previous findings and consider the potential mechanisms through which serotonin could increase the aversion to harming others. We present a process model whereby serotonin influences social behavior by shifting social preferences in the positive direction, enhancing the value people place on others’ outcomes. This model may explain previous findings relating serotonin function to prosocial behavior, and makes new predictions regarding how serotonin may influence the neural computation of value in social contexts.
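The process model in the abstract maps naturally onto a standard social-preference formalization, in which an agent values an outcome as a weighted sum of their own payoff and another person's payoff, with serotonin raising the weight on the other's outcome. Here is a minimal sketch in Python under that assumption; the function, the theta values, and the payoffs are illustrative choices of mine, not parameters from the paper.

def social_value(own_payoff: float, other_payoff: float, theta: float) -> float:
    """Weighted-utility model of social valuation:
    V = own_payoff + theta * other_payoff,
    where theta is the weight placed on the other person's outcome."""
    return own_payoff + theta * other_payoff

# Illustrative choice: earn 10 units by inflicting a -20 harm on
# another person, versus refraining (0, 0).
for label, theta in [("lower serotonin, theta=0.2", 0.2),
                     ("higher serotonin, theta=0.8", 0.8)]:
    harm = social_value(10.0, -20.0, theta)
    refrain = social_value(0.0, 0.0, theta)
    choice = "harm" if harm > refrain else "refrain"
    print(f"{label}: V(harm)={harm:+.1f}, V(refrain)={refrain:+.1f} -> {choice}")

On this toy reading, shifting theta upward is exactly what the authors mean by enhancing the value people place on others' outcomes: the same payoffs flip from favoring the harmful option to favoring restraint.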

The entire paper is here.