Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Systems.

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are growing increasingly attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, as targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts, designed so that users can readily see they are not sentient beings. The second is to create systems that clearly do deserve moral consideration as sentient beings, that is, systems whose moral status is comparable to that of humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems of plausibly debatable sentience and moral standing.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Thursday, October 19, 2017

Is There an Ideal Amount of Income Inequality?

Brian Gallagher
Nautilus
Originally published September 28, 2017

Here is an excerpt:

Is extreme inequality a serious problem?

Extreme inequality in the United States, and elsewhere, is deeply troubling on a number of fronts. First, there is the moral issue. For a country explicitly founded on the principles of liberty, equality, and the pursuit of happiness, protected by the “government of the people, by the people, for the people,” extreme inequality raises troubling questions of social justice that get at the very foundations of our society. We seem to have a “government of the 1 percent by the 1 percent for the 1 percent,” as the economics Nobel laureate Joseph Stiglitz wrote in his Vanity Fair essay. The Harvard philosopher Tim Scanlon argues that extreme inequality is bad for the following reasons: (1) economic inequality can give wealthier people an unacceptable degree of control over the lives of others; (2) economic inequality can undermine the fairness of political institutions; (3) economic inequality undermines the fairness of the economic system itself; and (4) workers, as participants in a scheme of cooperation that produces national income, have a claim to a fair share of what they have helped to produce.

You’re an engineer. How did you get interested in inequality?

I do design, control, optimization, and risk management for a living. I’m used to designing large systems, like chemical plants. I have a pretty good intuition for how systems will operate, how they can run efficiently, and how they may fail. When I started thinking about the free market and society as systems, I already had an intuitive grasp of their function. Clearly there are differences between a system of inanimate entities, like chemical plants, and human society. But they’re both systems, so there are a lot of commonalities as well. My experience as a systems engineer helped me as I was groping in the darkness to get my hands around these issues, and to ask the right questions.

The article is here.

Sunday, September 11, 2016

Morality (Book Chapter)

Jonathan Haidt and Selin Kesebir
Handbook of Social Psychology, 5th ed. (2010), Chapter 22.

Here is a portion of the conclusion:

The goal of this chapter was to offer an account of what morality really is, where it came from, how it works, and why McDougall was right to urge social psychologists to make morality one of their fundamental concerns. The chapter used a simple narrative device to make its literature review more intuitively compelling: It told the history of moral psychology as a fall followed by redemption. (This is one of several narrative forms that people spontaneously use when telling the stories of their lives [McAdams, 2006].) To create the sense of a fall, the chapter began by praising the ancients and their virtue-based ethics; it praised some early sociologists and psychologists (e.g., McDougall, Freud, and Durkheim) who had “thick” emotional and sociological conceptions of morality; and it praised Darwin for his belief that intergroup competition contributed to the evolution of morality. The chapter then suggested that moral psychology lost these perspectives in the twentieth century as many psychologists followed philosophers and other social scientists in embracing rationalism and methodological individualism. Morality came to be studied primarily as a set of beliefs and cognitive abilities, located in the heads of individuals, which helped individuals to solve quandaries about helping and hurting other individuals. In this narrative, evolutionary theory also lost something important (while gaining much else) when it focused on morality as a set of strategies, coded into the genes of individuals, that helped individuals optimize their decisions about cooperation and defection when interacting with strangers. Both of these losses or “narrowings” led many theorists to think that altruistic acts performed toward strangers are the quintessence of morality.

The book chapter is here.

This chapter is an excellent summary for students and those beginning to read about moral psychology.

Thursday, September 12, 2013

Why Evolutionary Science Is The Key To Moral Progress

By Michael E. Price
This View of Life
Originally published July 16, 2013

Here is an excerpt:

Morality is centrally important to human affairs, for two main reasons. First, cross-culturally, the well-being of individuals is strongly affected by their moral standing: an individual held in high moral regard may be praised, rewarded, or celebrated as a hero, whereas one held in low regard may be admonished, ostracized, or put to death. Second, a society’s ability to compete with other societies may depend heavily on the content of its moral system: a moral system that successfully promotes values associated with economic and political competitiveness, for example, can be hugely advantageous to the society that hosts it. Our moral beliefs, then, have a critical impact on the fates of both the individuals we judge, and the societies to which we belong.