By John Basl
Northeastern University
Introduction
The purpose of this essay is to raise the prospect that engaging in artificial consciousness research, research that aims to create artifactual entities with certain kinds of conscious states, might be unethical on the grounds that it wrongs, or will very likely wrong, the subjects of such research. I say might be unethical because, in the end, this will depend on how those entities are created and how they are likely to be treated. This essay is meant as a starting point for thinking about the ethics of artificial consciousness research, not, by any means, the final word on such matters. While the ethics of creating and proliferating artificial intelligences and artificial consciousnesses has often been explored both in academic settings and in popular media and literature (see, for example, Chalmers 2010), those discussions tend to focus on the consequences for humans or, at most, on the potential rights of machines that are very much like us. However, the subjects of artificial consciousness research, at least those that end up being conscious in particular ways, are research subjects in the same way that sentient non-human animals or human subjects are research subjects, and so should be afforded appropriate protections. It is therefore important to ask not only whether artificial consciousnesses that are integrated into our society should be afforded moral and legal protections, and whether they pose a risk to our safety or existence, but also whether the predecessors of such consciousnesses are wronged in their creation or in the research involving them.