Constantinescu, M., Crisp, R.: Int J of Soc Robotics 14, 1547–1557 (2022).
Abstract
The growing use of social robots in times of isolation refocuses ethical concerns for Human–Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding the deployment of social robots relying on deep learning AI, and we ask whether they may be endowed with ethical virtue, enabling us to speak of "virtuous robotic AI systems". In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions, (2) with the right feelings, and (3) in the right way. We discuss each of these three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human–Robot Interaction, discussing various implications, such as the possibility of developing Autonomous Artificial Moral Agents within a virtue ethics framework.
Conclusion
AI systems are neither moody nor dissatisfied, and they do not seek revenge, which seems an important advantage over humans when it comes to making various decisions, including ethical ones. From a virtue ethics point of view, however, this advantage becomes a major drawback, for it also means that they cannot act out of a virtuous character. Despite their ability to mimic human virtuous actions, and even to function behaviourally in ways equivalent to human beings, robotic AI systems cannot perform virtuous actions in accordance with the virtues, that is, rightly or virtuously; nor can they act for the right reasons and motivations, nor take into account, through phronesis, the right circumstances. The consequence is that AI cannot genuinely be virtuous, at least not with the current technological advances supporting its functional development. Nonetheless, it might well be that the more we come to know about AI, the less we know about its future. We therefore leave open the possibility of AI systems being virtuous in some distant future. This might, however, require some disruptive, non-linear evolution that includes, for instance, the possibility that robotic AI systems fully deliberate over their own versus others' goals and happiness, and make their own choices and priorities accordingly. Indeed, to be a virtuous agent one needs the possibility of making mistakes, of reasoning over virtuous and vicious lines of action. But this raises a different question: are we prepared to experience interaction with vicious robotic AI systems?