Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Psychology.

Saturday, November 4, 2017

Morally Reframed Arguments Can Affect Support for Political Candidates

Jan G. Voelkel and Matthew Feinberg
Social Psychological and Personality Science
First Published September 28, 2017

Abstract

Moral reframing involves crafting persuasive arguments that appeal to the targets’ moral values but argue in favor of something they would typically oppose. Applying this technique to one of the most politically polarizing events—political campaigns—we hypothesized that messages criticizing one’s preferred political candidate that also appeal to that person’s moral values can decrease support for the candidate. We tested this claim in the context of the 2016 American presidential election. In Study 1, conservatives reading a message opposing Donald Trump grounded in a more conservative value (loyalty) supported him less than conservatives reading a message grounded in more liberal concerns (fairness). In Study 2, liberals reading a message opposing Hillary Clinton appealing to fairness values were less supportive of Clinton than liberals in a loyalty-argument condition. These results highlight how moral reframing can be used to overcome the rigid stances partisans often hold and help develop political acceptance.

The research is here.

Monday, October 16, 2017

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.

Abstract

In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.

The article is here.

Monday, October 2, 2017

The Role of a “Common Is Moral” Heuristic in the Stability and Change of Moral Norms

Lindström, B., Jangard, S., Selbing, I., & Olsson, A. (2017).
Journal of Experimental Psychology: General.

Abstract

Moral norms are fundamental for virtually all social interactions, including cooperation. Moral norms develop and change, but the mechanisms underlying when, and how, such changes occur are not well-described by theories of moral psychology. We tested, and confirmed, the hypothesis that the commonness of an observed behavior consistently influences its moral status, which we refer to as the common is moral (CIM) heuristic. In 9 experiments, we used an experimental model of dynamic social interaction that manipulated the commonness of altruistic and selfish behaviors to examine the change of people’s moral judgments. We found that both altruistic and selfish behaviors were judged as more moral, and less deserving of punishment, when common than when rare, which could be explained by a classical formal model (social impact theory) of behavioral conformity. Furthermore, judgments of common versus rare behaviors were faster, indicating that they were computationally more efficient. Finally, we used agent-based computer simulations to investigate the endogenous population dynamics predicted to emerge if individuals use the CIM heuristic, and found that the CIM heuristic is sufficient for producing 2 hallmarks of real moral norms: stability and sudden changes. Our results demonstrate that commonness shapes our moral psychology through mechanisms similar to behavioral conformity, with wide implications for understanding the stability and change of moral norms.
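The population dynamics the abstract describes can be illustrated with a minimal, hypothetical agent-based sketch. This is not the authors' actual model: the conformity rule, exponent, and noise rate below are illustrative assumptions. Each update, one randomly chosen agent re-decides between an altruistic and a selfish behavior, with the probability of choosing a behavior growing nonlinearly with how common it already is; a conformity exponent above 1 amplifies the majority behavior.

```python
import random

def simulate(n_agents=100, steps=20000, a=3.0, noise=0.02, seed=1):
    """Minimal conformity model (illustrative, not the published one).

    Each step, one agent re-chooses between 'altruistic' (True) and
    'selfish' (False). With conformity exponent a > 1, the probability
    of picking a behavior grows faster than its current commonness,
    so the majority is amplified; a small noise rate lets agents
    occasionally explore at random.
    """
    rng = random.Random(seed)
    agents = [rng.random() < 0.5 for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        p = sum(agents) / n_agents              # commonness of altruism
        if rng.random() < noise:
            choice = rng.random() < 0.5         # random exploration
        else:
            w = p**a / (p**a + (1 - p)**a)      # conformity weight
            choice = rng.random() < w
        agents[rng.randrange(n_agents)] = choice
        history.append(p)
    return history
```

Under these assumed parameters, plotting the returned history over time would show the two hallmarks mentioned in the abstract: long plateaus with one behavior dominant (stability) and rare, noise-triggered transitions between them (sudden change).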

The article is here.

Tuesday, September 26, 2017

The Influence of War on Moral Judgments about Harm

Hanne M Watkins and Simon M Laham
Preprint

Abstract

How does war influence moral judgments about harm? While the general rule is “thou shalt not kill,” war appears to provide an unfortunately common exception to the moral prohibition on intentional harm. In three studies (N = 263, N = 557, N = 793), we quantify the difference in moral judgments across peace and war contexts, and explore two possible explanations for the difference. Taken together, the findings of the present studies have implications for moral psychology researchers who use war-based scenarios to study broader cognitive or affective processes. If the war context changes judgments of moral scenarios by triggering group-based reasoning or altering the perceived structure of the moral event, using such scenarios to make “decontextualized” claims about moral judgment may not be warranted.

Here is part of the discussion.

A number of researchers have begun to investigate how social contexts may influence moral judgment, whether those social contexts are grounded in groups (Carnes et al., 2015; Ellemers & van den Bos, 2009) or relationships (Fiske & Rai, 2014; Simpson, Laham, & Fiske, 2015). The war context is another specific context which influences moral judgments: in the present studies we found that the intergroup nature of war influenced people’s moral judgments about harm in war – even if they belonged to neither of the two groups actually at war – and that the usually robust difference between switch and footbridge scenarios was attenuated in the war context. One implication of these findings is that some caution may be warranted when using war-based scenarios for studying morality in general. As mentioned in the introduction, scenarios set in war are often used in the study of broad domains or general processes of judgment (e.g. Graham et al., 2009; Phillips & Young, 2011; Piazza et al., 2013). Given the interaction of war context with intergroup considerations and with the construed structure of the moral event in the present studies, researchers are well advised to avoid making generalizations to morality writ large on the basis of war-related scenarios (see also Bauman, McGraw, Bartels, & Warren, 2014; Bloom, 2011).

The preprint is here.

Wednesday, October 12, 2016

Utilitarian preferences or action preferences? De-confounding action and moral code in sacrificial dilemmas

Damien L. Crone & Simon M. Laham
Personality and Individual Differences, Volume 104, January 2017, Pages 476-481

Abstract

A large literature in moral psychology investigates utilitarian versus deontological moral preferences using sacrificial dilemmas (e.g., the Trolley Problem) in which one can endorse harming one person for the greater good. The validity of sacrificial dilemma responses as indicators of one's preferred moral code is a neglected topic of study. One underexplored cause for concern is that standard sacrificial dilemmas confound the endorsement of specific moral codes with the endorsement of action such that endorsing utilitarianism always requires endorsing action. Two studies show that, after de-confounding these factors, the tendency to endorse action appears about as predictive of sacrificial dilemma responses as one's preference for a particular moral code, suggesting that, as commonly used, sacrificial dilemma responses are poor indicators of moral preferences. Interestingly however, de-confounding action and moral code may provide a more valid means of inferring one's preferred moral code.

The article is here.

Thursday, September 22, 2016

Does Situationism Threaten Free Will and Moral Responsibility?

Michael McKenna and Brandon Warmke
Journal of Moral Psychology

Abstract

The situationist movement in social psychology has caused a considerable stir in philosophy over the last fifteen years. Much of this was prompted by the work of the philosophers Gilbert Harman (1999) and John Doris (2002). Both contended that familiar philosophical assumptions about the role of character in the explanation of human action were not supported by the situationists’ experimental results. Most of the ensuing philosophical controversy has focused upon issues related to moral psychology and ethical theory, especially virtue ethics. More recently, the influence of situationism has also given rise to further questions regarding free will and moral responsibility (e.g., Brink 2013; Ciurria 2013; Doris 2002; Mele and Shepherd 2013; Miller 2016; Nelkin 2005; Talbert 2009; and Vargas 2013b). In this paper, we focus just upon these latter issues. Moreover, we focus primarily on reasons-responsive theories. There is cause for concern that a range of situationist findings are in tension with the sort of reasons-responsiveness putatively required for free will and moral responsibility. Here, we develop and defend a response to the alleged situationist threat to free will and moral responsibility that we call pessimistic realism. We conclude on an optimistic note, however, exploring the possibility of strengthening our agency in the face of situational influences.

The article is here.

Sunday, September 11, 2016

Morality (Book Chapter)

Jonathan Haidt and Selin Kesebir
Handbook of Social Psychology. (2010) 3:III:22.

Here is a portion of the conclusion:

The goal of this chapter was to offer an account of what morality really is, where it came from, how it works, and why McDougall was right to urge social psychologists to make morality one of their fundamental concerns. The chapter used a simple narrative device to make its literature review more intuitively compelling: It told the history of moral psychology as a fall followed by redemption. (This is one of several narrative forms that people spontaneously use when telling the stories of their lives [McAdams, 2006]). To create the sense of a fall, the chapter began by praising the ancients and their virtue-based ethics; it praised some early sociologists and psychologists (e.g., McDougall, Freud, and Durkheim) who had “thick” emotional and sociological conceptions of morality; and it praised Darwin for his belief that intergroup competition contributed to the evolution of morality. The chapter then suggested that moral psychology lost these perspectives in the twentieth century as many psychologists followed philosophers and other social scientists in embracing rationalism and methodological individualism. Morality came to be studied primarily as a set of beliefs and cognitive abilities, located in the heads of individuals, which helped individuals to solve quandaries about helping and hurting other individuals. In this narrative, evolutionary theory also lost something important (while gaining much else) when it focused on morality as a set of strategies, coded into the genes of individuals, that helped individuals optimize their decisions about cooperation and defection when interacting with strangers. Both of these losses or “narrowings” led many theorists to think that altruistic acts performed toward strangers are the quintessence of morality.

The book chapter is here.

This chapter is an excellent summary for students or those beginning to read on moral psychology.

Sunday, August 28, 2016

What Is Happening to Our Country? How Psychology Can Respond to Political Polarization, Incivility and Intolerance



As political events in Europe and America got stranger and more violent over the last year, I found myself thinking of the phrase “things fall apart; the center cannot hold.” I didn’t know its origin so I looked it up, found the poem The Second Coming, by W. B. Yeats, and found a great deal of wisdom. Yeats wrote it in 1919, just after the First World War and at the beginning of the Irish War of Independence.

The entire web page is here.

Tuesday, July 12, 2016

Why Bioethics Needs a Disability Moral Psychology

Joseph A. Stramondo
Hastings Center Report
Volume 46, Issue 3, pages 22–30, May/June 2016

Abstract

The deeply entrenched, sometimes heated conflict between the disability movement and the profession of bioethics is well known and well documented. Critiques of prenatal diagnosis and selective abortion are probably the most salient and most sophisticated of disability studies scholars’ engagements with bioethics, but there are many other topics over which disability activists and scholars have encountered the field of bioethics in an adversarial way, including health care rationing, growth-attenuation interventions, assisted reproduction technology, and physician-assisted suicide.


The tension between the analyses of the disability studies scholars and mainstream bioethics is not merely a conflict between two insular political groups, however; it is, rather, also an encounter between those who have experienced disability and those who have not. This paper explores that idea. I maintain that it is a mistake to think of this conflict as arising just from a difference in ideology or political commitments because it represents a much deeper difference—one rooted in variations in how human beings perceive and reason about moral problems. These are what I will refer to as variations of moral psychology. The lived experiences of disability produce variations in moral psychology that are at the heart of the moral conflict between the disability movement and mainstream bioethics. I will illustrate this point by exploring how the disability movement and mainstream bioethics come into conflict when perceiving and analyzing the moral problem of physician-assisted suicide via the lens of the principle of respect for autonomy. To reconcile its contemporary and historical conflict with the disability movement, the field of bioethics must engage with and fully consider the two groups’ differences in moral perception and reasoning, not just the explicit moral and political arguments of the disability movement.

The article is here.

Saturday, July 2, 2016

Selfishness Is Learned

By Matthew Hutson
Nautilus
Originally posted June 9, 2016

Many people cheat on taxes—no mystery there. But many people don’t, even if they wouldn’t be caught—now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.

It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. They do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”

The article is here.

Editor's note: There is a nice review of relevant research in this article.

Wednesday, June 8, 2016

Are You Morally Modified?: The Moral Effects of Widely Used Pharmaceuticals

Neil Levy, Thomas Douglas, Guy Kahane, Sylvia Terbeck, Philip J. Cowen, Miles Hewstone, and Julian Savulescu
Philos Psychiatr Psychol. 2014 June 1; 21(2): 111–125.
doi:10.1353/ppp.2014.0023.

Abstract

A number of concerns have been raised about the possible future use of pharmaceuticals designed to enhance cognitive, affective, and motivational processes, particularly where the aim is to produce morally better decisions or behavior. In this article, we draw attention to what is arguably a more worrying possibility: that pharmaceuticals currently in widespread therapeutic use are already having unintended effects on these processes, and thus on moral decision making and morally significant behavior. We review current evidence on the moral effects of three widely used drugs or drug types: (i) propranolol, (ii) selective serotonin reuptake inhibitors, and (iii) drugs that affect oxytocin physiology. This evidence suggests that the alterations to moral decision making and behavior caused by these agents may have important and difficult-to-evaluate consequences, at least at the population level. We argue that the moral effects of these and other widely used pharmaceuticals warrant further empirical research and ethical analysis.

The paper is here.

Thursday, April 28, 2016

The Visual Guide to Morality: Vision as an Integrative Analogy for Moral Experience, Variability and Mechanism

Chelsea Schein, Neil Hester and Kurt Gray
Social and Personality Psychology Compass 10/4 (2016): 231–251

Abstract

Analogies help organize, communicate and reveal scientific phenomena. Vision may be the best analogy for understanding moral judgment. Although moral psychology has long noted similarities between seeing and judging, we systematically review the “morality is like vision” analogy through three elements: experience, variability and mechanism. Both vision and morality are experienced as automatic, durable and objective. However, despite feelings of objectivity, both vision and morality show substantial variability across biology, culture and situation. The paradox of objective experience and cultural subjectivity is best understood through constructionism, as both vision and morality involve the flexible combination of more basic ingredients. Specifically, both vision and morality involve a mechanism that demonstrates Gestalt, combination and coherence. The “morality is like vision” analogy not only provides intuitive organization and compelling communication for moral psychology but also speaks to debates in the field, such as intuition versus reason, pluralism versus universalism and modularity versus constructionism.

The article is here.

Friday, February 26, 2016

The problem with cognitive and moral psychology

Massimo Pigliucci and K.D. Irani
Plato's Footprint
Originally published February 8, 2016

Here is an excerpt:

The norm of cooperation is again presupposed as the fundamental means for deciding which of our moral intuitions we should heed. When discussing the more stringent moral principles that Peter Singer, for instance, takes to be rationally required of us concerning our duties to distant strangers, Bloom dismisses them as unrealistic in the sense that “no plausible evolutionary theory could yield such requirements for human beings.” But of course evolution is what provided us with the very limited moral instinct that Bloom himself concedes needs to be expanded through the use of reason! He seems to want to have it both ways: we ought to build on what nature gave us, so long as what we come up with is compatible with nature’s narrow demands. But why?

Let me quote once more from Shaw, who I think puts her finger precisely where the problem lies: “it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong. The underlying prescriptive moral standards are always presupposed antecedently to any psychological research … No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new ‘technologies of behavior’ [to use B.F. Skinner’s phrase] but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.”

Please note that Shaw isn’t saying that moral philosophers are the high priests to be called on, though I’m sure she would agree that those are the people that have thought longer and harder about the issues in question, and so should certainly get a place at the discussion table. She is saying that good reasoning in general, and good moral reasoning in particular, are something we all need to engage in, for the sake of our own lives and of society at large.

The entire article is here.

Saturday, February 20, 2016

Moral Nativism and Moral Psychology

By Paul Bloom
The Social Psychology of Morality 01/2012
DOI: 10.1037/13091-004

ABSTRACT

Moral psychology is both old and new. Old because moral thought has long been a central focus of theology and philosophy. Indeed, many of the theories that we explore today were proposed first by scholars such as Aristotle, Kant, and Hume. New because the scientific study of morality—and, specifically, the study of what goes on in a person's head when making a moral judgment—has been a topic of serious inquiry only over the last couple of decades. Even now, it is just barely mainstream. This chapter is itself a combination of the old and the new. I am going to consider two broad questions that would have been entirely familiar to philosophers such as Aristotle, but are also the topic of considerable contemporary research and theorizing: (1) What is our natural human moral endowment? (2) To what extent are moral judgments the products of our emotions? I will have the most to say about the first question, and will review a body of empirical work that bears on it; much of this research is still in progress. The answer to the second question will be briefer and more tentative, and will draw in part upon this empirical work.

The article is here.

Friday, January 1, 2016

Why we forgive what can’t be controlled

Martin, J.W. & Cushman, F.A.
Cognition, 147, 133-143

Abstract

Volitional control matters greatly for moral judgment: Coerced agents receive less condemnation for outcomes they cause. Less well understood is the psychological basis of this effect. Control may influence perceptions of intent for the outcome that occurs or perceptions of causal role in that outcome. Here, we show that an agent who chooses to do the right thing but accidentally causes a bad outcome receives relatively more punishment than an agent who is forced to do the “right” thing but causes a bad outcome. Thus, having good intentions ironically leads to greater condemnation. This surprising effect does not depend upon perceptions of increased intent for harm to occur, but rather upon perceptions of causal role in the obtained outcome. Further, this effect is specific to punishment: An agent who chooses to do the right thing is rated as having better moral character than a forced agent, even though they cause the same bad outcome. These results clarify how, when and why control influences moral judgment.

The article is here.

Saturday, December 19, 2015

Three Types of Moral Supervenience

By John Danaher
Philosophical Disquisitions
Originally published November 7, 2014

Here are two excerpts:

As you know, metaethics is about the ontology and epistemology of morality. Take a moral claim like “torturing innocent children for fun is wrong”. A metaethicist wants to know what, if anything, entitles us to make such a claim. On the ontological side, they want to know what it is that makes the torturing of innocent children wrong (what grounds or explains the ascription of that moral property to that event?). On the epistemological side, they wonder how it is that we come to know that the torturing of innocent children is wrong (how do we acquire moral knowledge?). Both questions are interesting — and vital to ask if you wish to develop a sensible worldview — but in discussing moral supervenience we are focused primarily on the ontological one.

(cut)

The supervenience of the moral on the non-moral is generally thought to give rise to a philosophical puzzle. JL Mackie famously argued that if the moral truly did supervene on the non-moral, then this was metaphysically “queer”. We were owed some plausible account of why this happens. He didn’t think we had such an account, which is one reason why he was a moral error theorist. Others are less pessimistic. They think there are ways in which to account for moral supervenience.

The blog post is here.

Tuesday, October 27, 2015

Intuitive and Counterintuitive Morality

Guy Kahane
Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics, Oxford University Press

Abstract

Recent work in the cognitive science of morality has been taken to show that moral judgment is largely based on immediate intuitions and emotions. However, according to Greene's influential dual process model, deliberative processing not only plays a significant role in moral judgment, but also favours a distinctive type of content: a broadly utilitarian approach to ethics. In this chapter, I argue that this proposed tie between process and content is based on conceptual errors, and on a misinterpretation of the empirical evidence. Drawing on some of our own empirical research, I will argue that so-called "utilitarian" judgments in response to trolley cases often have little to do with concern for the greater good, and may actually express antisocial tendencies. A more general lesson of my argument is that much of current empirical research in moral psychology is based on a far too narrow understanding of intuition and deliberation.

The entire book chapter is here.

Saturday, July 18, 2015

Are You Morally Modified?: The Moral Effects of Widely Used Pharmaceuticals.

Levy N, Douglas T, Kahane G, Terbeck S, Cowen PJ, Hewstone M, Savulescu J.
Philos Psychiatr Psychol. 2014 Jun 1;21(2):111-125.

Abstract

A number of concerns have been raised about the possible future use of pharmaceuticals designed to enhance cognitive, affective, and motivational processes, particularly where the aim is to produce morally better decisions or behavior. In this article, we draw attention to what is arguably a more worrying possibility: that pharmaceuticals currently in widespread therapeutic use are already having unintended effects on these processes, and thus on moral decision making and morally significant behavior. We review current evidence on the moral effects of three widely used drugs or drug types: (i) propranolol, (ii) selective serotonin reuptake inhibitors, and (iii) drugs that affect oxytocin physiology. This evidence suggests that the alterations to moral decision making and behavior caused by these agents may have important and difficult-to-evaluate consequences, at least at the population level. We argue that the moral effects of these and other widely used pharmaceuticals warrant further empirical research and ethical analysis.

The entire article is here.

Wednesday, July 15, 2015

Approach and avoidance in moral psychology: Evidence for three distinct motivational levels

James F.M. Cornwell and E. Tory Higgins
Personality and Individual Differences
Volume 86, November 2015, Pages 139–149

Abstract

During the past two decades, the science of motivation has made major advances by going beyond just the traditional division of motivation into approaching pleasure and avoiding pain. Recently, motivation has been applied to the study of human morality, distinguishing between prescriptive (approach) morality on the one hand, and proscriptive (avoidance) morality on the other, representing a significant advance in the field. There has been some tendency, however, to subsume all moral motives under those corresponding to approach and avoidance within morality, as if one could proceed with a “one size fits all” perspective. In this paper, we argue for the unique importance of each of three different moral motive distinctions, and provide empirical evidence to support their distinctiveness. The usefulness of making these distinctions for the case of moral and ethical motivation is discussed.

Highlights

• We investigate the relations among three motivational constructs.
• We find that the three constructs are statistically independent.
• We find independent relations between the constructs and moral emotions.
• We find independent relations between the constructs and personal values.

The entire article is here.