Do we have a duty to use moral neurointerventions to correct deficits in our moral psychology? On the surface, these technologies appear to pose worrisome risks to valuable dimensions of the self, and these risks could conceivably weigh against any prima facie moral duty we have to use them. Focquaert and Schermer (Neuroethics 8(2):139–151, 2015) argue that neurointerventions pose special risks to the self because they operate passively on the subject’s brain, without her active participation, unlike ‘active’ interventions. Some neurointerventions, however, appear to be relatively unproblematic, and some appear to preserve the agent’s sense of self precisely because they operate passively. In this paper, I propose three conditions that need to be met for a medical intervention to be considered low-risk, and I argue that these conditions cut across the active/passive divide. A low-risk intervention must: (i) pass pre-clinical and clinical trials, (ii) fare well in post-clinical studies, and (iii) be subject to regulations protecting informed consent. If an intervention passes these tests, its risks do not provide strong countervailing reasons against our prima facie duty to undergo the intervention.