Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Tuesday, July 2, 2024

Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

Kneer, M. (2021). Cognitive Science, 45(10), e13032.

Abstract

The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Conclusion

In a preregistered experiment, I explored the folk concept of lying for both human agents and robots. Consistent with previous findings for human agents, the majority of participants think that it is possible to lie with a true claim, and hence in cases where there is no actual deception. What seems to matter more for lying is the intention to deceive. Contrary to what might have been expected, intentions of this sort are ascribed to robots just as readily as to humans. It thus comes as no surprise that robots are judged as lying, and as blameworthy for it, to similar degrees as human agents. Future work in this area should attempt to replicate these findings while manipulating context and methodology. Ethicists and legal scholars should explore whether, and to what degree, it might be morally appropriate and legally necessary to restrict the use of deceptive artificial agents.

Here is a summary:

This research examines whether people perceive robots as capable of lying. Through a preregistered experiment, Kneer investigates whether ordinary people ascribe deceptive intent to robots and judge robot deception as harshly as human lies. The findings suggest that people do consider robots capable of lying and hold them blameworthy for deception to a similar degree as human agents. The study argues that this necessitates further exploration of the ethical and legal implications of robot deception in our interactions with AI.