Harris, J., Anthis, J.R.
Sci Eng Ethics 27, 53 (2021).
https://doi.org/10.1007/s11948-021-00331-8
Abstract
Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There has been little synthesis of the research on this topic to date. We identify 294 relevant research or discussion items in our literature review. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also in the present. The reasoning varies, encompassing concern for the effects on artificial entities themselves and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethics frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. Relevant empirical data collection is limited, consisting primarily of a few psychological studies on current human moral and social attitudes towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and on the factors that will determine how their interests are considered.
Concluding Remarks
Many scholars lament that the moral consideration of artificial entities is discussed infrequently and is not viewed as a proper object of academic inquiry. This literature review suggests that these perceptions are no longer entirely accurate. The number of publications is growing exponentially, and most scholars view artificial entities as potentially warranting moral consideration. Still, important gaps remain, suggesting promising opportunities for further research, and the field is small overall, with only 294 items identified in this review.
Discussions of legal rights, of moral consideration, of empirical research on human attitudes, and of the theoretical risks of astronomical suffering among future artificial entities have taken place largely in isolation from one another. Further contributions should seek to integrate these discussions more closely, since the analytical frameworks used in one topic may offer valuable contributions to another. For example, what do legal precedent and empirical psychological research suggest are the most likely outcomes for future artificial sentience (for an example of studying likely technological outcomes, see Reese and Mohorčich, 2019)? And what do virtue ethics and rights theories suggest is desirable in these plausible future scenarios?
Despite interest in the topic from policy-makers and the public, there is a notable lack of empirical data about attitudes towards the moral consideration of artificial entities. This leaves scope for surveys and focus groups covering a far wider range of predictors of attitudes, experiments that test the effects of various messages and content on these attitudes, and qualitative and computational text analysis of news articles, opinion pieces, and science fiction books and films that touch on these topics. There are also many theoretically interesting questions to be asked about how these attitudes relate to other facets of human society, such as in-group and out-group dynamics and human-animal interactions.