Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Social Norms. Show all posts

Wednesday, June 26, 2019

The evolution of human cooperation

Coren Apicella and Joan Silk
Current Biology, Volume 29 (11), pp 447-450.

Darwin viewed cooperation as a perplexing challenge to his theory of natural selection. Natural selection generally favors the evolution of behaviors that enhance the fitness of individuals. Cooperative behavior, which increases the fitness of a recipient at the expense of the donor, contradicts this logic. William D. Hamilton helped to solve the puzzle when he showed that cooperation can evolve if cooperators direct benefits selectively to other cooperators (i.e., assortment). Kinship, group selection and the previous behavior of social partners all provide mechanisms for assortment (Figure 1), and kin selection and reciprocal altruism are the foundation of the kinds of cooperative behavior observed in many animals. Humans also bias cooperation in favor of kin and reciprocating partners, but the scope, scale, and variability of human cooperation greatly exceed those of other animals. Here, we introduce derived features of human cooperation in the context in which they originally evolved, and discuss the processes that may have shaped the evolution of our remarkable capacity for cooperation. We argue that culturally-evolved norms that specify how people should behave provide an evolutionarily novel mechanism for assortment, and play an important role in sustaining derived properties of cooperation in human groups.

Here is a portion of the Summary:

Cooperative foraging and breeding provide the evolutionary backdrop for understanding the evolution of cooperation in humans, as the returns from cooperating in these activities would have been high in our hunter-gatherer ancestors. Still, explaining how our ancestors effectively dealt with the problem of free-riders within this context remains a challenge. Derived features of human cooperation, however, give us some indication of the mechanisms that could lead to assortativity. These derived features include: first, the scope of cooperation — cooperation is observed between unrelated and often short-term interactors; second, the scale of cooperation — cooperation extends beyond pairs to include circumscribed groups that vary in size and identity; and third, variation in cooperation — human cooperation varies in both time and space in accordance with cultural and social norms. We argue that this pattern of findings is best explained by cultural evolutionary processes that generate phenotypic assortment on cooperation via a psychology adapted for cultural learning, norm sensitivity and group-mindedness.

The info is here.

Wednesday, February 6, 2019

Are scientists’ reactions to ‘CRISPR babies’ about ethics or self-governance?

Nina Frahm and Tess Doezema
STAT News
Originally published January 28, 2019

Here is an excerpt:

The research community widely agreed that He and his colleagues crossed an ethical line with the first inheritable genetic modification of human beings. Gene-editing experts as well as bioethicists described the transgression as being conducted by a “rogue” individual. But when leading voices such as NIH Director Francis Collins assert that He’s work “represents a deeply disturbing willingness by Dr. He and his team to flout international ethical norms,” what are they actually expressing concern about? Who determines what are the ethics of altering human life?

We believe that the alarm being sounded by the scientific community isn’t really about ethics. It’s about protecting a particular form of scientific self-governance, which the “ethics” discourse supports. What are currently treated as matters of research ethics are in fact political and social questions of fundamental human importance.

Key decisions about when and how it will be appropriate to make inheritable changes to human beings currently lie in the hands of scientists. Although ethics are repeatedly invoked, the most prominent condemnations of He’s actions don’t actually address whether it’s ethical to tinker with human life through gene editing. A largely ignored part of the story are the five “draft ethical principles” of He’s lab at the Southern University of Science and Technology of China. If the outcry from scientists was truly about ethics, we would be seeing a discussion of the relative merits of He’s ethical principles, engagement with their content, and perhaps an exploration of how to jointly achieve a better set of operating principles. Instead, the ethics of using CRISPR for germline gene editing have apparently been determined and settled among scientists, closing down a meaningful debate about the limits and opportunities of genetic engineering.

The info is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values, specifically fairness preferences, during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
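The abstract says the acquisition of these preferences is "governed by a reinforcement mechanism." A minimal sketch of the kind of update rule such computational learning models use is a Rescorla-Wagner style delta rule, where each trial's feedback nudges the learner's estimate of the receiver's punitiveness. The learning rate and the 0-to-1 coding of feedback below are illustrative assumptions, not the paper's actual parameterization:

```python
def rw_update(value, feedback, alpha=0.3):
    """One delta-rule step: move the current estimate toward the
    observed feedback, scaled by the learning rate alpha."""
    return value + alpha * (feedback - value)

def learn_preference(feedback_trials, value=0.0, alpha=0.3):
    """Estimate a partner's punitiveness (0 = compensatory, 1 = punitive)
    from a sequence of trial-by-trial feedback signals."""
    for feedback in feedback_trials:
        value = rw_update(value, feedback, alpha)
    return value
```

On this sketch, even a short run of consistent feedback from a single punitive receiver (e.g., ten trials coded 1.0) drives the estimate close to fully punitive, which mirrors the paper's point that exposure to one individual's preferences can shift one's own.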

The research is here.

Wednesday, July 11, 2018

Could Moral Enhancement Interventions be Medically Indicated?

Sarah Carter
Health Care Analysis
December 2017, Volume 25, Issue 4, pp 338–353

Abstract

This paper explores the position that moral enhancement interventions could be medically indicated (and so considered therapeutic) in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. In order to argue this claim, the question as to whether a deficit of empathy could be considered to be pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, psychopathy and a fictionalised mental disorder (Moral Deficiency Disorder) are explored with a view to considering moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, this paper then goes on to briefly explore some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.

The paper is here.

Thursday, July 5, 2018

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Tuesday, June 5, 2018

Norms and the Flexibility of Moral Action

Oriel FeldmanHall, Jae-Young Son, and Joseph Heffner
Preprint

ABSTRACT

A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument, reciprocity, underpins compliance with these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.

The research is here.

Wednesday, May 16, 2018

Escape the Echo Chamber

C Thi Nguyen
www.medium.com
Originally posted April 12, 2018

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let's call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members' confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don't hear people from the other side. An echo chamber is what happens when you don't trust people from the other side.

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

The information is here.

Thursday, May 10, 2018

The WEIRD Science of Culture, Values, and Behavior

Kim Armstrong
Psychological Science
Originally posted April 2018

Here is an excerpt:

While the dominant norms of a society may shape our behavior, children first experience the influence of those cultural values through the attitudes and beliefs of their parents, which can significantly impact their psychological development, said Heidi Keller, a professor of psychology at the University of Osnabrueck, Germany.

Until recently, research within the field of psychology focused mainly on WEIRD (Western, educated, industrialized, rich, and democratic) populations, Keller said, limiting the understanding of the influence of culture on childhood development.

“The WEIRD group represents maximally 5% of the world’s population, but probably more than 90% of the researchers and scientists producing the knowledge that is represented in our textbooks work with participants from that particular context,” Keller explained.

Keller and colleagues’ research on the ecocultural model of development, which accounts for the interaction of socioeconomic and cultural factors throughout a child’s upbringing, explores this gap in the research by comparing the caretaking styles of rural and urban families throughout India, Cameroon, and Germany. The experiences of these groups can differ significantly from the WEIRD context, Keller notes, with rural farmers — who make up 30% to 40% of the world’s population — tending to live in extended family households while having more children at a younger age after an average of just 7 years of education.

The information is here.

Monday, April 30, 2018

Social norm complexity and past reputations in the evolution of cooperation

Fernando P. Santos, Francisco C. Santos & Jorge M. Pacheco
Nature volume 555, pages 242–245 (08 March 2018)

Abstract

Indirect reciprocity is the most elaborate and cognitively demanding of all known cooperation mechanisms, and is the most specifically human because it involves reputation and status. By helping someone, individuals may increase their reputation, which may change the predisposition of others to help them in future. The revision of an individual’s reputation depends on the social norms that establish what characterizes a good or bad action and thus provide a basis for morality. Norms based on indirect reciprocity are often sufficiently complex that an individual’s ability to follow subjective rules becomes important, even in models that disregard the past reputations of individuals, and reduce reputations to either ‘good’ or ‘bad’ and actions to binary decisions. Here we include past reputations in such a model and identify the key pattern in the associated norms that promotes cooperation. Of the norms that comply with this pattern, the one that leads to maximal cooperation (greater than 90 per cent) with minimum complexity does not discriminate on the basis of past reputation; the relative performance of this norm is particularly evident when we consider a ‘complexity cost’ in the decision process. This combination of high cooperation and low complexity suggests that simple moral principles can elicit cooperation even in complex environments.
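The abstract describes social norms as rules that assign a 'good' or 'bad' reputation based on an individual's actions and the standing of their partner. As an illustration of this general framework (this is a standard second-order norm from the indirect reciprocity literature, used here as a sketch of the setup, not a reproduction of this paper's model, which also incorporates past reputations), a norm can be written as a lookup table over (action, recipient's reputation):

```python
# A second-order social norm maps (donor's action, recipient's current
# reputation) to the donor's new reputation. 'C' = cooperate, 'D' = defect;
# 'G' = good standing, 'B' = bad standing. "Stern judging" is one classic
# such norm: help the good and refuse the bad, or be judged bad yourself.
STERN_JUDGING = {
    ('C', 'G'): 'G',  # helping a good individual earns a good reputation
    ('C', 'B'): 'B',  # helping a bad individual is itself judged bad
    ('D', 'G'): 'B',  # refusing a good individual is judged bad
    ('D', 'B'): 'G',  # refusing a bad individual is judged good
}

def update_reputation(norm, action, recipient_rep):
    """Apply a social norm to assign the donor's new reputation."""
    return norm[(action, recipient_rep)]

def discriminator_action(recipient_rep):
    """A discriminator strategy: cooperate only with partners in
    good standing."""
    return 'C' if recipient_rep == 'G' else 'D'
```

Norms that also condition on the donor's own past reputation enlarge this table, which is where the paper's notion of norm complexity, and the cost of tracking it, comes in.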

The article is here.

Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Wednesday, January 3, 2018

The neuroscience of morality and social decision-making

Keith Yoder and Jean Decety
Psychology, Crime & Law
doi: 10.1080/1068316X.2017.1414817

Abstract
Across cultures humans care deeply about morality and create institutions, such as criminal courts, to enforce social norms. In such contexts, judges and juries engage in complex social decision-making to ascertain a defendant’s capacity, blameworthiness, and culpability. Cognitive neuroscience investigations have begun to reveal the distributed neural networks which interact to implement moral judgment and social decision-making, including systems for reward learning, valuation, mental state understanding, and salience processing. These processes are fundamental to morality, and their underlying neural mechanisms are influenced by individual differences in empathy, caring and justice sensitivity. This new knowledge has important implications in legal settings for understanding how triers of fact reason. Moreover, recent work demonstrates how disruptions within the social decision-making network facilitate immoral behavior, as in the case of psychopathy. Incorporating neuroscientific methods with psychology and clinical neuroscience has the potential to improve predictions of recidivism, future dangerousness, and responsivity to particular forms of rehabilitation.

The article is here.

From the Conclusion section:

Current neuroscience work demonstrates that social decision-making and moral reasoning rely on multiple partially overlapping neural networks which support domain-general processes, such as executive control, saliency processing, perspective-taking, reasoning, and valuation. Neuroscience investigations have contributed to a growing understanding of the role of these processes in moral cognition and judgments of blame and culpability, exactly the sorts of judgments required of judges and juries. Dysfunction of these networks can lead to dysfunctional social behavior and a propensity to immoral behavior, as in the case of psychopathy. Significant progress has been made in clarifying which aspects of social decision-making network functioning are most predictive of future recidivism. Psychopathy, in particular, constitutes a complex type of moral disorder and a challenge to the criminal justice system.

Worth reading.

Wednesday, December 27, 2017

The Phenomenon of ‘Bud Sex’ Between Straight Rural Men

Jesse Singal
thecut.com
Originally posted December 18, 2016

A lot of men have sex with other men but don’t identify as gay or bisexual. A subset of these men who have sex with men, or MSM, live lives that are, in all respects other than their occasional homosexual encounters, quite straight and traditionally masculine — they have wives and families, they embrace various masculine norms, and so on. They are able to, in effect, compartmentalize an aspect of their sex lives in a way that prevents it from blurring into or complicating their more public identities. Sociologists are quite interested in this phenomenon because it can tell us a lot about how humans interpret thorny questions of identity and sexual desire and cultural expectations.

(cut)

Specifically, Silva was trying to understand better the interplay between “normative rural masculinity” — the set of mores and norms that defines what it means to be a rural man — and these men’s sexual encounters. In doing so, he introduces a really interesting and catchy concept, “bud-sex”...

The article is here.

Tuesday, December 19, 2017

Beyond Blaming the Victim: Toward a More Progressive Understanding of Workplace Mistreatment

Lilia M. Cortina, Verónica Caridad Rabelo, & Kathryn J. Holland
Industrial and Organizational Psychology
Published online: 21 November 2017

Theories of human aggression can inform research, policy, and practice in organizations. One such theory, victim precipitation, originated in the field of criminology. According to this perspective, some victims invite abuse through their personalities, styles of speech or dress, actions, and even their inactions. That is, they are partly at fault for the wrongdoing of others. This notion is gaining purchase in industrial and organizational (I-O) psychology as an explanation for workplace mistreatment. The first half of our article provides an overview and critique of the victim precipitation hypothesis. After tracing its history, we review the flaws of victim precipitation as catalogued by scientists and practitioners over several decades. We also consider real-world implications of victim precipitation thinking, such as the exoneration of violent criminals. Confident that I-O can do better, the second half of this article highlights alternative frameworks for researching and redressing hostile work behavior. In addition, we discuss a broad analytic paradigm—perpetrator predation—as a way to understand workplace abuse without blaming the abused. We take the position that these alternative perspectives offer stronger, more practical, and more progressive explanations for workplace mistreatment. Victim precipitation, we conclude, is an archaic ideology. Criminologists have long since abandoned it, and so should we.

The article is here.

Monday, May 15, 2017

Overcoming patient reluctance to be involved in medical decision making

J.S. Blumenthal-Barby
Patient Education and Counseling
January 2017, Volume 100, Issue 1, Pages 14–17

Abstract

Objective

To review the barriers to patient engagement and techniques to increase patients’ engagement in their medical decision-making and care.

Discussion

Barriers exist to patient involvement in their decision-making and care. Individual barriers include education, language, and culture/attitudes (e.g., deference to physicians). Contextual barriers include time (lack of) and timing (e.g., lag between test results being available and patient encounter). Clinicians should gauge patients’ interest in being involved and their level of current knowledge about their condition and options. Framing information in multiple ways and modalities can enhance understanding, which can empower patients to become more engaged. Tools such as decision aids or audio recording of conversations can help patients remember important information, a requirement for meaningful engagement. Clinicians and researchers should work to create social norms and prompts around patients asking questions and expressing their values. Telehealth and electronic platforms are promising modalities for allowing patients to ask questions in a non-intimidating atmosphere.

Conclusion

Researchers and clinicians should be motivated to find ways to engage patients on the ethical imperative that many patients prefer to be more engaged in some way, shape, or form; patients have better experiences when they are engaged, and engagement improves health outcomes.

The article is here.

Friday, April 14, 2017

The moral bioenhancement of psychopaths

Elvio Baccarini and Luca Malatesti
The Journal of Medical Ethics
http://dx.doi.org/10.1136/medethics-2016-103537

Abstract

We argue that the mandatory moral bioenhancement of psychopaths is justified as a prescription of social morality. Moral bioenhancement is legitimate when it is justified on the basis of the reasons of the recipients. Psychopaths expect and prefer that the agents with whom they interact do not have certain psychopathic traits. Particularly, they have reasons to require the moral bioenhancement of psychopaths with whom they must cooperate. By adopting a public reason and a Kantian argument, we conclude that we can justify to a psychopath being the recipient of mandatory moral bioenhancement because he has a reason to require the application of this prescription to other psychopaths.

Tuesday, March 7, 2017

Chimpanzees’ Bystander Reactions to Infanticide

Claudia Rudolf von Rohr, Carel P. van Schaik, Alexandra Kissling, & Judith M. Burkart
Human Nature
June 2015, Volume 26, Issue 2, pp 143–160

Abstract

Social norms—generalized expectations about how others should behave in a given context—implicitly guide human social life. However, their existence becomes explicit when they are violated because norm violations provoke negative reactions, even from personally uninvolved bystanders. To explore the evolutionary origin of human social norms, we presented chimpanzees with videos depicting a putative norm violation: unfamiliar conspecifics engaging in infanticidal attacks on an infant chimpanzee. The chimpanzees looked far longer at infanticide scenes than at control videos showing nut cracking, hunting a colobus monkey, or displays and aggression among adult males. Furthermore, several alternative explanations for this looking pattern could be ruled out. However, infanticide scenes did not generally elicit higher arousal. We propose that chimpanzees as uninvolved bystanders may detect norm violations but may restrict emotional reactions to such situations to in-group contexts. We discuss the implications for the evolution of human morality.

The article is here.

Monday, March 6, 2017

Almost All Of You Would Cheat And Steal If The People In Charge Imply It's Okay

Charlie Sorrel
www.fastcoexist.com
Originally posted February 2, 2017

Would you cheat on a test to get money? Would you steal from an envelope of cash if you thought nobody would notice? What if the person in charge implied that it was acceptable to lie and steal? That's what Dan Ariely's Corruption Experiment set out to discover. And here's a spoiler: If you're like the rest of the population, you would cheat and steal.

Ariely is a behavioral scientist who specializes in the depressingly bad conduct of humans. In this lecture clip, he details his Corruption Experiment. In it, participants are given a die and told they can take home, in real dollars, the numbers they throw. The twist is that they can choose the number on the top or the bottom, and they only need to tell the person running the experiment after they throw. So, if the die comes up with a one on top, they can claim that they picked the six on the bottom. Not surprisingly, most of the time, people picked the higher number.
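The payoff gap this setup creates is easy to quantify. Opposite faces of a die sum to seven, so an honest player who commits to a side before throwing earns 3.5 on average, while a player who reports whichever face pays more earns the average of max(top, 7 - top), which is 5.0. A small simulation sketch of the game as described above (the specific function names and trial counts are mine, not Ariely's):

```python
import random

def play_round(rng, honest):
    """One throw of the die game. The hidden bottom face is 7 - top.
    An honest player commits to a side before the throw; an
    opportunistic player reports whichever face pays more."""
    top = rng.randint(1, 6)
    bottom = 7 - top
    if honest:
        # Pre-commit to one side at random, then report truthfully.
        return top if rng.random() < 0.5 else bottom
    return max(top, bottom)  # claim the higher face after seeing it

def mean_payoff(honest, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(play_round(rng, honest) for _ in range(trials)) / trials
```

Running both conditions shows the opportunistic reporter clearing roughly 1.5 extra dollars per throw, which is why reported numbers skewing high is such strong evidence of cheating in aggregate even though no single throw can be proven dishonest.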

The article and the video are here.

Monday, November 21, 2016

A Theory of Hypocrisy

Eric Schwitzgebel
The Splintered Mind blog
Originally posted on October

Here is an excerpt:

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than for the average person. The person who works in the IRS office sees how frequent and easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of the violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

The blog post is here.

Friday, November 18, 2016

The shame of public shaming

Russell Blackford
The Conversation
Originally published May 6, 2016

Here is an excerpt:

Shaming is on the rise. We’ve shifted – much of the time – to a mode of scrutinising each other for purity. Very often, we punish decent people for small transgressions or for no real transgressions at all. Online shaming, conducted via the blogosphere and our burgeoning array of social networking services, creates an environment of surveillance, fear and conformity.

The making of a call-out culture

I noticed the trend – and began to talk about it – around five years ago. I’d become increasingly aware of cases where people with access to large social media platforms used them to “call out” and publicly vilify individuals who’d done little or nothing wrong. Few onlookers were prepared to support the victims. Instead, many piled on with glee (perhaps to signal their own moral purity; perhaps, in part, for the sheer thrill of the hunt).

Since then, the trend to an online call-out culture has continued and even intensified, but something changed during 2015. Mainstream journalists and public intellectuals finally began to express their unease.

The article is here.

Friday, August 5, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis

By John Danaher
Philosophical Disquisitions
Originally published July 19, 2016

The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.