Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350
Abstract
AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion, it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is well known that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be, have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research, with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need to complement practical analysis with conceptual analysis.
Here are my thoughts:
Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.
There are several reasons why we do this. One is that it helps us make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.
Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things we perceive as similar to ourselves, so we may be more likely to trust and interact with AI that seems somewhat human-like.
However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may expect it to respond to us with genuine empathy or concern.
This can lead to problems when the AI does not respond in the way we expect. We may attribute the mismatch to the AI being "sad" or "angry," when in reality it is simply following its programming.
It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to grant it rights and protections it does not warrant. For example, we may come to believe that an AI should not be turned off, even when it is causing harm.
In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.