Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, February 13, 2022

Hit by the Virtual Trolley: When is Experimental Ethics Unethical?

Rueda, J. (2022).
ResearchGate.net

Abstract

The trolley problem is one of the liveliest research frameworks in experimental ethics. In the last decade, social neuroscience and experimental moral psychology have gone beyond studies with mere text-based hypothetical moral dilemmas. In this article, I present the rationale behind testing actual behaviour in more realistic scenarios through Virtual Reality and summarize the body of evidence raised by the experiments with virtual trolley scenarios. Then, I approach the argument of Ramirez and LaBarge (2020), who claim that the virtual simulation of the Footbridge version of the trolley dilemma is an unethical research practice, and I raise some objections to it. Finally, I provide some reflections about the means and ends of trolley-like scenarios and other sacrificial dilemmas in experimental ethics.

(cut)

From Rethinking the Means and Ends of Trolleyology

The first response states that these studies have no normative relevance at all. A traditional objection to the trolley dilemma pointed to the artificiality of the scenario and its normative uselessness in translating to real contemporary problems (see, for instance, Midgley, cited in Edmonds, 2014, pp. 100-101). We have already seen that this is not true. Indeed, the existence of real dilemmas that share structural similarities with hypothetical trolley scenarios makes it practically useful to test our intuitions on them (Edmonds, 2014). Besides that, a more sophisticated objection claims that intuitive responses to the trolley problem have no ethical value because intuitions are quite unreliable. Cognitive science has frequently shown how fallible, illogical, biased, and irrational many of our intuitive preferences can be. In fact, moral intuitions in text-based trolley dilemmas are subject to morally irrelevant factors such as presentation order (Liao et al., 2012), framing (Cao et al., 2017), or mood (Pastötter et al., 2013). However, the fact that some intuitions are wrong or biased does not mean that intuitions have no epistemic or moral value. Dismissing intuitions because they are subject to implicit psychological factors, while favouring armchair ethical theorizing, is inconsistent: ethical theorizing is also subject to implicit psychological factors, which experimental research can help to make explicit. Empirical evidence should therefore play a role in normative theorizing on trolley dilemmas (Kahane, 2013).

The second option states that what should be done as public policy on sacrificial dilemmas is whatever the majority of people say or do in those situations. In other words, the descriptive results of the experiments show us how we should act at the normative level. Consider the following example from the debate on self-driving vehicles: “We thus argue that any implementation of an ethical decision-making system for a specific context should be based on human decisions made in the same context” (Sütfeld et al., 2017). So, since most people act in a utilitarian way in VR simulations of traffic dilemmas, autonomous cars should act similarly in analogous situations (Sütfeld et al., 2017).