Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. (2022).
AI and Ethics.
https://doi.org/10.1007/s43681-022-00231-y

Abstract

The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.