Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Trolley Problem.

Tuesday, June 20, 2023

Ethical Accident Algorithms for Autonomous Vehicles and the Trolley Problem: Three Philosophical Disputes

Sven Nyholm
In Lillehammer, H. (ed.), The Trolley Problem.
Cambridge: Cambridge University Press, 2023

Abstract

The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.

Here is the conclusion:

Accordingly, it seems to me that just as the first methodological approach mentioned a few paragraphs above is problematic, so is the third methodological approach. In other words, we do best to take the second approach. We should neither rely too heavily (or indeed exclusively) on the comparison between the ethics of self-driving cars and the trolley problem, nor wholly ignore and pay no attention to the comparison between the ethics of self-driving cars and the trolley problem. Rather, we do best to make this one – but not the only – thing we do when we think about the ethics of self-driving cars. With what is still a relatively new issue for philosophical ethics to work with, and indeed also regarding older ethical issues that have been around much longer, using a mixed and pluralistic method that approaches the moral issues we are considering from many different angles is surely the best way to go. In this instance, that includes reflecting on – and reflecting critically on – how the ethics of crashes involving self-driving cars is both similar to and different from the philosophy of the trolley problem.

At this point, somebody might say, “what if I am somebody who really dislikes the self-driving cars/trolley problem comparison, and I would really prefer reflecting on the ethics of self-driving cars without spending any time on thinking about the similarities and differences between the ethics of self-driving cars and the trolley problem?” In other words, should everyone working on the ethics of self-driving cars spend at least some of their time reflecting on the comparison with the trolley problem? Luckily for those who are reluctant to spend any of their time reflecting on the self-driving cars/trolley problem comparison, there are others who are willing and able to devote at least some of their energies to this comparison.

In general, I think we should view the community that works on the ethics of this issue as being one in which there can be a division of labor, whereby different members of this field can partly focus on different things, and thereby together cover all of the different aspects that are relevant and important to investigate regarding the ethics of self-driving cars. As it happens, there has been a remarkable variety in the methods and approaches people have used to address the ethics of self-driving cars (see Nyholm 2018a-b). So, while it is my own view that anybody who wants to form a complete overview of the ethics of self-driving cars should, among other things, devote some of their time to studying the comparison with the trolley problem, it is ultimately no big problem if not everyone wishes to do so. There are others who have been studying, and who will most likely continue to reflect on, this comparison.

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem” — a hypothetical social scenario, formulated long before AI came into being, in which a decision has to be made whether to alter the route of an out-of-control streetcar heading towards a disaster zone. The decision that needs to be made — in a split second — is whether to switch from the original track where the streetcar may kill several people tied to the track, to an alternative track where, presumably, a single person would die.

While there are many other analogies that can be made about difficult decisions, the trolley problem is regarded to be the pinnacle exhibition of ethical and moral decision making. Can this be applied to AI systems to measure whether AI is ready for the real world, in which machines can think independently, and make the same ethical and moral decisions, that are justifiable, that humans would make?

Trolley problems in AI come in all shapes and sizes, and decisions don’t necessarily need to be so deadly — though the decisions AI renders could mean trouble for a business, individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment,” during a stay in an Airbnb-rented house in upstate New Hampshire. Despite amazing preview pictures and positive reviews, the place was poorly maintained and a dump with condemned adjacent houses. The author was going to give the place a low one-star rating and a negative review, to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, inquiring if the author and his family were comfortable and if they had everything they needed. During the conversation, the host offered to pick up some fresh fruits from a nearby farmers market. She also said that since she doesn’t have a car, she would walk a mile to a friend’s place, who would then drive her to the market. She also described her hardships over the past two years, as rentals slumped due to Covid and she was caring for someone sick full time.

Upon learning this, the author elected not to post the negative review. While the initial decision — to write a negative review — was based on facts, the decision not to post the review was purely a subjective human decision. In this case, the trolley problem was concern for the welfare of the elderly homeowner superseding consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Sunday, February 13, 2022

Hit by the Virtual Trolley: When is Experimental Ethics Unethical?

Rueda, J. (2022).
ResearchGate.net

Abstract

The trolley problem is one of the liveliest research frameworks in experimental ethics. In the last decade, social neuroscience and experimental moral psychology have gone beyond the studies with mere text-based hypothetical moral dilemmas. In this article, I present the rationale behind testing the actual behaviour in more realistic scenarios through Virtual Reality and summarize the body of evidence raised by the experiments with virtual trolley scenarios. Then, I approach the argument of Ramirez and LaBarge (2020), who claim that the virtual simulation of the Footbridge version of the trolley dilemma is an unethical research practice, and I raise some objections to it. Finally, I provide some reflections about the means and ends of trolley-like scenarios and other sacrificial dilemmas in experimental ethics.

(cut)

From Rethinking the Means and Ends of Trolleyology

The first response states that these studies have no normative relevance at all. A traditional objection to the trolley dilemma pointed to the artificiality of the scenario and its normative uselessness in translating to real contemporary problems (see, for instance, Midgley, cited in Edmonds, 2014, p. 100-101). We have already seen that this is not true. Indeed, the existence of real dilemmas that share structural similarities with hypothetical trolley scenarios makes it practically useful to test our intuitions on them (Edmonds, 2014). Besides that, a more sophisticated objection claims that intuitive responses to the trolley problem have no ethical value because intuitions are quite unreliable. Cognitive science has frequently shown how fallible, illogical, biased, and irrational many of our intuitive preferences can be. In fact, moral intuitions in text-based trolley dilemmas are subject to morally irrelevant factors such as order (Liao et al., 2012), frame (Cao et al., 2017), or mood (Pastötter et al., 2013). However, the fact that there are wrong or biased intuitions does not mean that intuitions do not have any epistemic or moral value. Dismissing intuitions because they are subject to implicit psychological factors in favour of armchair ethical theorizing is inconsistent. Empirical evidence should play a role in normative theorizing on trolley dilemmas as long as ethical theorizing is also subject to implicit psychological factors—which experimental research can help to make explicit (Kahane, 2013).

The second option states that what should be done as public policy on sacrificial dilemmas is what the majority of people say or do in those situations. In other words, the descriptive results of the experiments show us how we should act at the normative level. Consider the following example from the debate of self-driving vehicles: “We thus argue that any implementation of an ethical decision-making system for a specific context should be based on human decisions made in the same context” (Sütfeld et al., 2017). So, as most people act in a utilitarian way in VR simulations of traffic dilemmas, autonomous cars should act similarly in analogous situations (Sütfeld et al. 2017).
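Editor's note: to make the Sütfeld et al. proposal concrete, here is a minimal, purely illustrative sketch of what "base the decision-making system on human decisions made in the same context" could look like: the vehicle looks up the majority choice that human participants made in a matching simulated dilemma. The context labels, data, and function names below are hypothetical and are not taken from the cited study.

```python
from collections import Counter

# Hypothetical record of choices that human participants made in simulated
# traffic dilemmas, keyed by a coarse description of the context.
# All labels and data are invented for illustration.
human_choices = {
    ("one_person_left_lane", "five_people_right_lane"): ["left", "left", "left", "right", "left"],
    ("child_left_lane", "adult_right_lane"): ["right", "right", "left", "right", "right"],
}

def majority_policy(context):
    """Return the choice most participants made in this context, if any data exist."""
    votes = human_choices.get(context)
    if not votes:
        return None  # no matching human data; a real system would need a fallback policy
    choice, _count = Counter(votes).most_common(1)[0]
    return choice

# Example: most participants steered into the lane with one person rather than five.
print(majority_policy(("one_person_left_lane", "five_people_right_lane")))  # -> left
```

The point of the sketch is only the shape of the proposal: the normative rule is read directly off descriptive data about what most people do in the same kind of situation.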

Sunday, February 6, 2022

Trolley Dilemma in Papua. Yali horticulturalists refuse to pull the lever

Sorokowski, P., Marczak, M., Misiak, M. et al. 
Psychon Bull Rev 27, 398–403 (2020).

Abstract

Although many studies show cultural or ecological variability in moral judgments, cross-cultural responses to the trolley problem (kill one person to save five others) indicate that certain moral principles might be prevalent in human populations. We conducted a study in a traditional, indigenous, non-Western society inhabiting the remote Yalimo valley in Papua, Indonesia. We modified the original trolley dilemma to produce an ecologically valid “falling tree dilemma.” Our experiment showed that the Yali are significantly less willing than Western people to sacrifice one person to save five others in this moral dilemma. The results indicate that utilitarian moral judgments to the trolley dilemma might be less widespread than previously supposed. On the contrary, they are likely to be mediated by sociocultural factors.

Discussion

Our study showed that Yali participants were significantly less willing than Western participants to sacrifice one person to save five others in the moral dilemma. More specifically, the difference was so large that the odds of pushing the tree were approximately 73% smaller for a Papuan in comparison with Canadians.
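Editor's note on the arithmetic: "odds approximately 73% smaller" corresponds to an odds ratio of roughly 0.27 for the Papuan versus Canadian contrast, as it would typically come out of a logistic regression. The proportions below are invented solely to illustrate how an odds ratio translates into a percentage reduction in odds; they are not the study's data.

```python
# Illustrative proportions only; the study's actual cell counts are not reproduced here.
p_canadian = 0.60  # hypothetical share of Canadian participants willing to push the tree
p_yali = 0.29      # hypothetical share of Yali participants willing to push the tree

odds_canadian = p_canadian / (1 - p_canadian)  # 1.50
odds_yali = p_yali / (1 - p_yali)              # about 0.41

odds_ratio = odds_yali / odds_canadian         # about 0.27
print(f"odds ratio = {odds_ratio:.2f}, i.e. odds about {1 - odds_ratio:.0%} smaller")
```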

Our findings reflect cultural differences between the Western and Yali participants, which are illustrated by the two most common explanations provided by Papuans immediately after the experiment. First, owing to the extremely harsh consequences of causing someone’s death in Yali society, our Papuan participants did not want to expose themselves to any potential trouble and were, therefore, unwilling to take any action in the tree dilemma. The rules of conduct in Yali society mean that a person accused of contributing to someone’s death is killed. However, the whole extended family of the blamed individual, and even their village, are also in danger of death (Koch, 1974). This is because the relatives of the deceased person are obliged to compensate for the wrongdoing by killing the same or a greater number of persons.

Another common explanation was related to religion. The Yali often argued that people should not interfere with the divine decision about someone’s life and death (e.g., “I’m not God, so I can’t make the decision”). Hence, although reason may suggest an action as appropriate, religion suggests otherwise, with religious believers deciding in favor of the latter (Piazza & Landy, 2013). In turn, more traditional populations may refer to religion more than more secular, modern WEIRD populations.

Tuesday, May 25, 2021

Thought experiments and experimental ethics

Thomas Pölzler & Norbert Paulo (2021)
Inquiry, 
DOI: 10.1080/0020174X.2021.1916218

Abstract

Experimental ethicists investigate traditional ethical questions with nontraditional means, namely with the methods of the empirical sciences. Studies in this area have made heavy use of philosophical thought experiments such as the well-known trolley cases. Yet, the specific function of these thought experiments within experimental ethics has received little consideration. In this paper we attempt to fill this gap. We begin by describing the function of ethical thought experiments, and show that these thought experiments should not only be classified according to their function but also according to their scope. On this basis we highlight several ways in which the use of thought experiments in experimental ethics can be philosophically relevant. We conclude by arguing that experimental philosophy currently only focuses on a small subcategory of ethical thought experiments and suggest a broadening of its research agenda.

Conclusion

Experimental ethicists investigate traditional ethical questions with nontraditional means, namely with the methods of the empirical sciences. Studies in this area have made heavy use of philosophical thought experiments such as the well-known trolley cases. Yet, for some reason, the specific function of these thought experiments within experimental ethics has received little consideration. In this paper we attempted to fill this gap. First, we described the function of ethical thought experiments, distinguishing between an epistemic, an illustrative and a heuristic function. We also showed that ethical thought experiments should not only be classified according to their function but also according to their scope. Some ethical thought experiments (such as the veil) can be applied to a variety of moral issues. On the basis of this understanding of thought experiments we highlighted several ways in which the use of thought experiments in experimental ethics can be philosophically relevant. Such studies can in particular inform us about the content of the intuitions that people have about ethical thought experiments, these intuitions’ sensitivity to irrelevant factors, and their diversity. Finally, we suggested that experimental ethics broadens its research agenda to include investigations into illustrative and heuristic thought experiments, wide-scope thought experiments, de-biasing strategies, atypical thought experiments, and philosophers’ intuitions about thought experiments. In any case, since experimental ethics heavily relies on thought experiments, an increased theoretical engagement with their function and implications is likely to benefit the field. It is our hope that this paper contributes to promoting such an engagement.

Tuesday, June 23, 2020

The Neuroscience of Moral Judgment: Empirical and Philosophical Developments

J. May, C. I. Workman, J. Haas, & H. Han
Forthcoming in Neuroscience and Philosophy,
eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press).

Abstract

We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of moral judgment in adulthood and to some extent in childhood and adolescence. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.

From the Conclusion

6.1 Reason vs. Emotion in Ethics

The dichotomy between reason and emotion stretches back to antiquity. But an improved understanding of the brain has, arguably more than psychological science, questioned the dichotomy (Huebner 2015; Woodward 2016). Brain areas associated with prototypical emotions, such as vmPFC and amygdala, are also necessary for complex learning and inference, even if largely automatic and unconscious. Even psychopaths, often painted as the archetype of emotionless moral monsters, have serious deficits in learning and inference. Moreover, even if our various moral judgments about trolley problems, harmless taboo violations, and the like are often automatic, they are nonetheless acquired through sophisticated learning mechanisms that are responsive to morally-relevant reasons (Railton 2017; Stanley et al. 2019). Indeed, normal moral judgment often involves gut feelings being attuned to relevant experience and made consistent with our web of moral beliefs (May & Kumar 2018).

The paper can be downloaded here.

Tuesday, February 18, 2020

Is it okay to sacrifice one person to save many? How you answer depends on where you’re from.

Sigal Samuel
vox.com
Originally posted 24 Jan 20

Here is an excerpt:

It turns out that people across the board, regardless of their cultural context, give the same response when they’re asked to rank the moral acceptability of acting in each case. They say Switch is most acceptable, then Loop, then Footbridge.

That’s probably because in Switch, the death of the worker is an unfortunate side effect of the action that saves the five, whereas in Footbridge, the death of the large man is not a side effect but a means to an end — and it requires the use of personal force against him.

The info is here.

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
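Editor's note: for readers unfamiliar with process dissociation, the sketch below shows the standard two-parameter equations used in this literature (introduced for moral dilemmas by Conway and Gawronski, 2013, on which this paper builds). U captures the utilitarian tendency and D the deontological tendency, estimated from how often harm is judged unacceptable in congruent dilemmas (where harm does not serve the greater good) and incongruent dilemmas (where it does). The response proportions are hypothetical.

```python
def process_dissociation(p_unacceptable_congruent, p_unacceptable_incongruent):
    """Estimate utilitarian (U) and deontological (D) parameters from the proportion
    of 'harm is unacceptable' judgments in congruent dilemmas (harm does not maximize
    outcomes) and incongruent dilemmas (harm does).

    Standard model equations:
        P(unacceptable | congruent)   = U + (1 - U) * D
        P(unacceptable | incongruent) = (1 - U) * D
    """
    U = p_unacceptable_congruent - p_unacceptable_incongruent
    D = p_unacceptable_incongruent / (1 - U) if U < 1 else float("nan")
    return U, D

# Hypothetical participant: rejects harm in 90% of congruent and 40% of incongruent dilemmas.
U, D = process_dissociation(0.90, 0.40)
print(f"U = {U:.2f}, D = {D:.2f}")  # U = 0.50, D = 0.80
```

Because U and D are estimated separately, a trait like antisociality can turn out to predict a lower D without predicting a higher U, which is the clarification the abstract describes.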

The research is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Quillette
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.
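Editor's note: to illustrate what scoring someone on these two dimensions amounts to, here is a hypothetical sketch in which agreement ratings for items like the two quoted above are averaged into an impartial-beneficence subscore and an instrumental-harm subscore. The item labels, groupings, and ratings are invented for illustration and do not reproduce the published Oxford Utilitarianism Scale.

```python
from statistics import mean

# Hypothetical 1-7 agreement ratings; item wording abbreviated and invented.
responses = {
    "sacrifice_own_leg_to_save_a_life": 5,      # impartial-beneficence item
    "give_up_abundance_for_distant_others": 6,  # impartial-beneficence item
    "harm_one_innocent_to_help_several": 3,     # instrumental-harm item
    "sacrifice_one_to_save_greater_number": 4,  # instrumental-harm item
}

impartial_beneficence = mean([responses["sacrifice_own_leg_to_save_a_life"],
                              responses["give_up_abundance_for_distant_others"]])
instrumental_harm = mean([responses["harm_one_innocent_to_help_several"],
                          responses["sacrifice_one_to_save_greater_number"]])

print(f"Impartial beneficence: {impartial_beneficence:.1f} / 7")  # 5.5 / 7
print(f"Instrumental harm:     {instrumental_harm:.1f} / 7")      # 3.5 / 7
```

Keeping the two subscores separate is the point of the Oxfordians' argument: a person can score high on one dimension and low on the other, which a single "utilitarian" score would obscure.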

The article is here.

Saturday, January 27, 2018

Evolving Morality

Joshua Greene
Aspen Ideas Festival
2017

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

The one-hour talk on SoundCloud is here.

Wednesday, October 11, 2017

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behavior, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must either put their own passenger or a pedestrian at risk. This is a real world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Tuesday, July 18, 2017

Human decisions in moral dilemmas are largely described by Utilitarianism

Anja Faulhaber, Anke Dittmer, Felix Blind, and others

Abstract

Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in a utilitarian way, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as the driver in a virtual reality environment. Participants had to make decisions between two discrete options: driving on one of two lanes where different obstacles came into view. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of a sidewalk as a potential safe harbor and a condition implicating a self-sacrifice. Results showed that subjects, in general, decided in a utilitarian manner, sparing the highest number of avatars possible with a limited influence of the other variables. Our findings support that people’s behavior is in line with the utilitarian approach to moral decision making. This may serve as a guideline for the implementation of moral decisions in ADVs.
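Editor's note: the headline result, that participants spare the highest number of avatars possible, corresponds to a very simple decision rule. The sketch below is an illustrative reconstruction rather than the authors' implementation; the actual study also varied avatar age, a sidewalk option, and self-sacrifice, which this toy rule ignores.

```python
def utilitarian_lane_choice(avatars_left_lane, avatars_right_lane):
    """Pick the lane that endangers fewer human-like avatars (ties resolve to the left lane)."""
    return "left" if avatars_left_lane <= avatars_right_lane else "right"

# Example: one avatar in the left lane, four in the right -> steer left.
print(utilitarian_lane_choice(avatars_left_lane=1, avatars_right_lane=4))  # -> left
```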

The article is here.

Thursday, March 16, 2017

Mercedes-Benz’s Self-Driving Cars Would Choose Passenger Lives Over Bystanders

David Z. Morris
Fortune
Originally published Oct 15, 2016

In comments published last week by Car and Driver, Mercedes-Benz executive Christoph von Hugo said that the carmaker’s future autonomous cars will save the car’s driver and passengers, even if that means sacrificing the lives of pedestrians, in a situation where those are the only two options.

“If you know you can save at least one person, at least save that one,” von Hugo said at the Paris Motor Show. “Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

This doesn't mean Mercedes' robotic cars will neglect the safety of bystanders. Von Hugo, who is the carmaker’s manager of driver assistance and safety systems, is addressing the so-called “Trolley Problem”—an ethical thought experiment that applies to human drivers just as much as artificial intelligences.

The article is here.

Wednesday, October 12, 2016

Utilitarian preferences or action preferences? De-confounding action and moral code in sacrificial dilemmas

Damien L. Crone & Simon M. Laham
Personality and Individual Differences, Volume 104, January 2017, Pages 476-481

Abstract

A large literature in moral psychology investigates utilitarian versus deontological moral preferences using sacrificial dilemmas (e.g., the Trolley Problem) in which one can endorse harming one person for the greater good. The validity of sacrificial dilemma responses as indicators of one's preferred moral code is a neglected topic of study. One underexplored cause for concern is that standard sacrificial dilemmas confound the endorsement of specific moral codes with the endorsement of action such that endorsing utilitarianism always requires endorsing action. Two studies show that, after de-confounding these factors, the tendency to endorse action appears about as predictive of sacrificial dilemma responses as one's preference for a particular moral code, suggesting that, as commonly used, sacrificial dilemma responses are poor indicators of moral preferences. Interestingly however, de-confounding action and moral code may provide a more valid means of inferring one's preferred moral code.

The article is here.

Monday, October 3, 2016

Moral learning: Why learning? Why moral? And why now?

Peter Railton
Cognition

Abstract

What is distinctive about bringing a learning perspective to moral psychology? Part of the answer lies in the remarkable transformations that have taken place in learning theory over the past two decades, which have revealed how powerful experience-based learning can be in the acquisition of abstract causal and evaluative representations, including generative models capable of attuning perception, cognition, affect, and action to the physical and social environment. When conjoined with developments in neuroscience, these advances in learning theory permit a rethinking of fundamental questions about the acquisition of moral understanding and its role in the guidance of behavior. For example, recent research indicates that spatial learning and navigation involve the formation of non-perspectival as well as ego-centric models of the physical environment, and that spatial representations are combined with learned information about risk and reward to guide choice and potentiate further learning. Research on infants provides evidence that they form non-perspectival expected-value representations of agents and actions as well, which help them to navigate the human environment. Such representations can be formed by highly-general mental processes such as causal and empathic simulation, and thus afford a foundation for spontaneous moral learning and action that requires no innate moral faculty and can exhibit substantial autonomy with respect to community norms. If moral learning is indeed integral with the acquisition and updating of causal and evaluative models, this affords a new way of understanding well-known but seemingly puzzling patterns in intuitive moral judgment—including the notorious “trolley problems.”
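Editor's note: the abstract's "expected-value representations" that "guide choice and potentiate further learning" can be illustrated with the simplest prediction-error update from the learning-theory literature, a delta rule. This is a generic textbook sketch, not Railton's model, and the numbers are arbitrary.

```python
def update_expected_value(current_value, observed_outcome, learning_rate=0.1):
    """Delta-rule update: move the estimate toward the observed outcome
    in proportion to the prediction error."""
    prediction_error = observed_outcome - current_value
    return current_value + learning_rate * prediction_error

# A learner's expected value for some action starts neutral and is revised by experience.
value = 0.0
for outcome in [1.0, 1.0, 0.0, 1.0]:  # arbitrary sequence of good (1) and bad (0) outcomes
    value = update_expected_value(value, outcome)
print(f"learned expected value = {value:.2f}")  # 0.25
```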

The article is here.

Monday, August 29, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

By Jacob Brogan
Future Tense
Originally published August 11, 2016

Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandon notes, the site effectively gameifies the classic trolley problem, folding in a variety of complicated variations along the way. You’ll have to decide whether a vehicle should choose its passengers or people in an intersection. Others will present two differently composed groups of pedestrians—say, a handful of female doctors or a collection of besuited men—and ask which an empty car should slam into. Further complications—including the presence of animals and details about whether the pedestrians have the right of way—sometimes further muddle the question.

Tuesday, June 7, 2016

Student Resistance to Thought Experiments

Regina A. Rini
APA Newsletter - Teaching Philosophy
Spring 2016, Volume 15 (2)

Introduction

From Swampmen to runaway trolleys, philosophers make routine use of thought experiments. But our students are not always so enthusiastic. Most teachers of introductory philosophy will be familiar with the problem: students push back against the use of thought experiments, and not for the reasons that philosophers are likely to accept. Rather than challenge whether the thought experiments actually support particular conclusions, students instead challenge their realism or their relevance.

In this article I will look at these sorts of challenges, with two goals in mind. First, there is a practical pedagogical goal: How do we guide students to overcome their resistance to a useful method? Second, there is something I will call “pedagogical bad faith.” Many of us actually do have sincere doubts, as professional philosophers, about the value of thought experiment methodology. Some of these doubts in fact correspond to our students’ naïve resistance. But we often decide, for pedagogical reasons, to avoid mentioning our own doubts to students. Is this practice defensible?

The article is here.

Editor's Note: I agree with this article in many ways. After reading a philosophy article and listening to a podcast that used a thought experiment, I sent the author critiques of how the thought experiments were limited. My criticisms were dismissed with a somewhat ad hominem response about my lack of understanding of philosophy and of how philosophers work. I was told I should read more philosophy, especially Derek Parfit. I wish I had had this article several years ago.

Monday, October 26, 2015

Would You Pull the Trolley Switch? Does it Matter?

By Lauren Cassani Davis
The Atlantic
Originally published October 9, 2015

Here is an excerpt:

The trolley dilemmas vividly distilled the distinction between two different concepts of morality: that we should choose the action with the best overall consequences (in philosophy-speak, utilitarianism is the most well-known example of this), like only one person dying instead of five, and the idea that we should always adhere to strict duties, like “never kill a human being.” The subtle differences between the scenarios provided helped to articulate influential concepts, like the distinction between actively killing someone versus passively letting them die, that continue to inform contemporary debates in law and public policy. The trolley problem has also been, and continues to be, a compelling teaching tool within philosophy.

By the late ‘90s, trolley problems had fallen out of fashion. Many philosophers questioned the value of the conclusions reached by analyzing a situation so bizarre and specific.

The entire article is here.