Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350

Abstract

AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.


Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may be more likely to trust it, confide in it, or form an emotional attachment to it.

This can cause problems when the AI does not respond in the way we expect. We may attribute the unexpected behavior to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more inclined to grant it rights and protections that are not warranted. For example, we may come to believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Saturday, December 19, 2020

Robots at work: People prefer—and forgive—service robots with perceived feelings

Yam, K. C., Bingman, Y. E., et al.
Journal of Applied Psychology.
Advance online publication.

Abstract

Organizations are increasingly relying on service robots to improve efficiency, but these robots often make mistakes, which can aggravate customers and negatively affect organizations. How can organizations mitigate the frontline impact of these robotic blunders? Drawing from theories of anthropomorphism and mind perception, we propose that people evaluate service robots more positively when they are anthropomorphized and seem more humanlike—capable of both agency (the ability to think) and experience (the ability to feel). We further propose that in the face of robot service failures, increased perceptions of experience should attenuate the negative effects of service failures, whereas increased perceptions of agency should amplify the negative effects of service failures on customer satisfaction. In a field study conducted in the world’s first robot-staffed hotel (Study 1), we find that anthropomorphism generally leads to higher customer satisfaction and that perceived experience, but not agency, mediates this effect. Perceived experience (but not agency) also interacts with robot service failures to predict customer satisfaction such that high levels of perceived experience attenuate the negative impacts of service failures on customer satisfaction. We replicate these results in a lab experiment with a service robot (Study 2). Theoretical and practical implications are discussed.

From Practical Contributions

Second, our findings also suggest that organizations should focus on encouraging perceptions of service robots’ experience rather than agency. For example, when assigning names to robots or programming robots’ voices, a female name and voice could potentially lead to enhanced perceptions of experience more so than a male name and voice (Gray et al., 2007). Likewise, service robots’ programmed scripts should include content that conveys the capacity of experience, such as displaying emotions. Although the emerging service robotic technologies are not perfect and failures are inevitable, encouraging anthropomorphism and, more specifically, perceptions of experience can likely offset the negative effects of robot service failures.