Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Psychology. Show all posts

Saturday, May 9, 2020

Naïve Normativity: The Social Foundation of Moral Cognition

Kristin Andrews
Journal of the American Philosophical Association
Volume 6, Issue 1
January 2020, pp. 36–56

Abstract

To answer tantalizing questions such as whether animals are moral or how morality evolved, I propose starting with a somewhat less fraught question: do animals have normative cognition? Recent psychological research suggests that normative thinking, or ought-thought, begins early in human development. Recent philosophical research suggests that folk psychology is grounded in normative thought. Recent primatology research finds evidence of sophisticated cultural and social learning capacities in great apes. Drawing on these three literatures, I argue that the human variety of social cognition and moral cognition encompass the same cognitive capacities and that the nonhuman great apes may also be normative beings. To make this argument, I develop an account of animal social norms that shares key properties with Cristina Bicchieri's account of social norms but which lowers the cognitive requirements for having a social norm. I propose a set of four early developing prerequisites implicated in social cognition that make up what I call naïve normativity: the ability to identify agents, sensitivity to in-group/out-group differences, the capacity for social learning of group traditions, and responsiveness to appropriateness. I review the ape cognition literature and present preliminary empirical evidence supporting the existence of social norms and naïve normativity in great apes. While there is more empirical work to be done, I hope to have offered a framework for studying normativity in other species, and I conclude that we should be open to the possibility that normative cognition is yet another ancient cognitive endowment that is not human-unique.

The info is here.

Tuesday, March 24, 2020

The effectiveness of moral messages on public health behavioral intentions during the COVID-19 pandemic

J. Everett, C. Colombatto, et al.
PsyArXiv Preprints
Originally posted March 20, 2020

Abstract

With the COVID-19 pandemic threatening millions of lives, changing our behaviors to prevent the spread of the disease is a moral imperative. Here, we investigated the effectiveness of messages inspired by three major moral traditions on public health behavioral intentions. A sample of US participants representative for age, sex and race/ethnicity (N=1032) viewed messages from either a leader or citizen containing deontological, virtue-based, utilitarian, or non-moral justifications for adopting social distancing behaviors during the COVID-19 pandemic. We measured the messages’ effects on participants’ self-reported intentions to wash hands, avoid social gatherings, self-isolate, and share health messages, as well as their beliefs about others’ intentions, impressions of the messenger’s morality and trustworthiness, and beliefs about personal control and responsibility for preventing the spread of disease. Consistent with our pre-registered predictions, deontological messages had modest effects across several measures of behavioral intentions, second-order beliefs, and impressions of the messenger, while virtue-based messages had modest effects on personal responsibility for preventing the spread. These effects were observed for messages from leaders and citizens alike. Our findings are at odds with participants’ own beliefs about moral persuasion: a majority of participants predicted the utilitarian message would be most effective. We caution that these effects are modest in size, likely due to ceiling effects on our measures of behavioral intentions and strong heterogeneity across all dependent measures along several demographic dimensions including age, self-identified gender, self-identified race, political conservatism, and religiosity. 
Although the utilitarian message was the least effective among those tested, individual differences in one key dimension of utilitarianism—impartial concern for the greater good—were strongly and positively associated with public health intentions and beliefs. Overall, our preliminary results suggest that public health messaging focused on duties and responsibilities toward family, friends and fellow citizens will be most effective in slowing the spread of COVID-19 in the US. Ongoing work is investigating whether deontological persuasion generalizes across different populations, what aspects of deontological messages drive their persuasive effects, and how such messages can be most effectively delivered across global populations.

The research is here.

Sunday, March 1, 2020

The Dark Side of Morality: Group Polarization and Moral Epistemology

Marcus Arvan
PsyArXiv
Originally published December 12, 2019

Abstract

This paper shows that philosophers and laypeople commonly conceptualize moral truths as discoverable through intuition, argument, or some other process. It then argues that three empirically-supported theories of group polarization suggest that this Discovery Model of morality likely plays a substantial role in causing polarization—a phenomenon known to produce a wide variety of disturbing social effects, including increasing prejudice, selfishness, divisiveness, mistrust, and violence. This paper then uses the same three empirical theories to argue that an alternative Negotiation Model of morality—according to which moral truths are instead created by negotiation—promises to not only mitigate polarization but perhaps even foster its opposite: a progressive willingness to “work across the aisle” to settle contentious moral issues cooperatively. Finally, I outline avenues for further empirical and philosophical research.

Conclusion

Laypeople and philosophers tend to treat moral truths as discoverable through intuition, argument, or other cognitive or affective processes. However, we have seen that there are strong theoretical reasons—based on three empirically-supported theories of group polarization—to believe this Discovery Model of morality is a likely cause of polarization: a social-psychological phenomenon known to have a wide variety of disturbing social effects. We then saw that there are complementary theoretical reasons to believe that an alternative, Negotiation Model of morality might not only mitigate polarization but actually foster its opposite: an increasing willingness to work together to arrive at compromises on moral controversies. While this paper does not prove the existence of the hypothesized relationships between the Discovery Model, Negotiation Model, and polarization, it demonstrates that there are ample theoretical reasons to believe that such relationships are likely and worthy of further empirical and philosophical research.

Wednesday, February 12, 2020

Empirical Work in Moral Psychology

Joshua May
Routledge Encyclopedia of Philosophy
Taylor and Francis
Originally published in 2017

Abstract

How do we form our moral judgments, and how do they influence behaviour? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as an additional tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.

The info is here.

Monday, February 3, 2020

Explaining moral behavior: A minimal moral model.

Osman, M., & Wiegmann, A.
Experimental Psychology (2017)
64(2), 68-81.

Abstract

In this review we make a simple theoretical argument which is that for theory development, computational modeling, and general frameworks for understanding moral psychology researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models that exist in moral psychology that tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that by using a simple value-based decision model we can capture a range of core moral behaviors. Crucially, the argument we propose is that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.

From the Implications section:

If instead moral behavior is viewed as a domain-general process, the findings can easily be accounted for based on existing literature from judgment and decision-making research such as Tversky’s (1969) work on intransitive preferences.

The same benefits of this research approach extend to the moral philosophy domain. As we described at the beginning of the paper, empirical research can inform philosophers as to which moral intuitions are likely to be biased. If moral judgments, decisions, and behavior can be captured by well-developed domain-general theories, then our theoretical and empirical resources for gaining knowledge about moral intuitions would be much greater, as compared to the resources provided by moral psychology alone.

The paper can be downloaded here.

Tuesday, September 3, 2019

Moral Obstinacy in Political Negotiations

Andrew Delton, Peter DeScioli, and Timothy Ryan

Abstract:

Research in behavioral economics finds that moral considerations bear on the offers that people make and accept in negotiations. This finding is relevant for political negotiations, wherein moral concerns are manifold. However, behavioral economics has yet to incorporate a major theme from moral psychology: people differ, sometimes immensely, in which issues they perceive to be a matter of morality. We review research about the measurement and characteristics of moral convictions. We hypothesize that moral conviction leads to uncompromising bargaining strategies and failed negotiations. We test this theory in three incentivized experiments in which participants bargain over political policies with real payoffs at stake. We find that participants’ moral convictions are linked with aggressive bargaining strategies, which helps explain why it is harder to forge bargains on some political issues than others. We also find substantial asymmetries between liberals and conservatives in the intensity of their moral convictions about different issues.

Part of the Conclusion:

Looking across our studies, we see substantial convergence in how attitude facets relate to compromise. Specifically, both attitude extremity and moral conviction independently and consistently predicted tough bargaining strategies. In contrast, personal relevance did not affect bargaining, and importance had inconsistent effects. We suggest that the effect of extremity is to be expected because extremity is a sort of omnibus index of attitude strength (Visser et al. 2006, 56). However, we think that the persistent effect of moral conviction merits further attention, since moral conviction is a less studied dimension of political attitudes. Moreover, the finding that moral conviction predicted resistance to compromise aligns with moral psychology research, which finds that people’s moral judgments are shaped by strong prohibitions and obligations that resist cost-benefit considerations (e.g., Cushman 2013; Haidt 2012; Tetlock et al. 2000).

The research is here.

Thursday, August 8, 2019

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd & Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual process theorists’ claims about the link between reflection and dilemma judgments.

A copy of the paper is here.

Wednesday, August 7, 2019

Veil-of-Ignorance Reasoning Favors the Greater Good

Karen Huang, Joshua D. Greene, & Max Bazerman
PsyArXiv
Originally posted July 2, 2019

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

The research is here.

Sunday, July 28, 2019

Community Standards of Deception

Emma Levine
Booth School of Business
Originally posted June 17, 2019
Available at SSRN: https://ssrn.com/abstract=3405538

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, I demonstrate that deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm. I identify nine implicit rules – pertaining to the targets of deception and the topic and timing of a conversation – that specify the systematic circumstances in which deception is perceived to cause unnecessary harm, and I document the causal effect of each implicit rule on the endorsement of deception. This research provides insight into when and why people value honesty, and paves the way for future research on when and why people embrace deception.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003), or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with, and hence do not see why deterministic processes in our brains would be (see Sie and Wouters 2008, 2010). That latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this 'metaphysical' discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will take center stage. This is why it is called 'pragmatist.'

A draft of the book chapter can be downloaded here.

Tuesday, February 19, 2019

How Our Attitude Influences Our Sense Of Morality

Konrad Bocian
Science Trends
Originally posted January 18, 2019

Here is an excerpt:

People think that their moral judgment is as rational and objective as scientific statements, but science does not confirm that belief. Within the last two decades, scholars interested in moral psychology discovered that people produce moral judgments based on fast and automatic intuitions rather than on rational and controlled reasoning. For example, moral cognition research showed that moral judgments arise in approximately 250 milliseconds, and even then we are not able to explain them. Developmental psychologists proved that already at the age of 3 months, babies who do not have any language skills can distinguish a good protagonist (a helping one) from a bad one (a hindering one). But this does not mean that people’s moral judgments are based solely on intuitions. We can use deliberative processes when conditions are favorable – when we are both motivated to engage in and capable of conscious responding.

When we imagine how we would morally judge other people in a specific situation, we refer to actual rules and norms. If the laws are violated, the act itself is immoral. But we forget that intuitive reasoning also plays a role in forming a moral judgment. It is easy to condemn the librarian when our interest is involved on paper, but the whole picture changes when real money is on the table. We have known that rule for a very long time, but we still forget to use it when we predict our moral judgments.

Based on previous research on the intuitive nature of moral judgment, we decided to test how far our attitudes can impact our perception of morality. In our daily life, we meet a lot of people who are to some degree familiar, and we either have a positive or negative attitude toward these people.

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification.
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Wednesday, October 3, 2018

Moral Reasoning

Richardson, Henry S.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Here are two brief excerpts:

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

(cut)

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
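The argument in the final two sentences can be made explicit with a short derivation (a sketch in standard deontic notation, where \(O\) is the "ought" operator and \(\Diamond\) is "can"; the two named principles are the assumptions under test, not established theorems):

```latex
% Sinnott-Armstrong's conditions:
%   (1) O(A)                        the agent ought to do A
%   (2) O(B)                        the agent ought to do B
%   (3) \neg\Diamond(A \wedge B)    the agent cannot do both
\begin{align*}
O(A) \wedge O(B) &\rightarrow O(A \wedge B)
  && \text{(agglomeration)}\\
O(A \wedge B) &\rightarrow \Diamond(A \wedge B)
  && \text{(``ought'' implies ``can'')}\\
\therefore\quad \Diamond(A \wedge B)
  && && \text{from (1), (2), and the two principles}
\end{align*}
```

The conclusion contradicts condition (3), so if both principles hold, the four conditions cannot all be true at once and moral dilemmas are impossible; rejecting either principle reopens the possibility.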

The entry is here.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent would be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent would be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.

Sunday, April 29, 2018

Who Am I? The Role of Moral Beliefs in Children’s and Adults’ Understanding of Identity

Larisa Heiphetz, Nina Strohminger, Susan A. Gelman, and Liane L. Young
Forthcoming: Journal of Experimental Social Psychology

Abstract

Adults report that moral characteristics—particularly widely shared moral beliefs—are central to identity. This perception appears driven by the view that changes to widely shared moral beliefs would alter friendships and that this change in social relationships would, in turn, alter an individual’s personal identity. Because reasoning about identity changes substantially during adolescence, the current work tested pre- and post-adolescents to reveal the role that such changes could play in moral cognition. Experiment 1 showed that 8- to 10-year-olds, like adults, judged that people would change more after changes to their widely shared moral beliefs (e.g., whether hitting is wrong) than after changes to controversial moral beliefs (e.g., whether telling prosocial lies is wrong). Following up on this basic effect, a second experiment examined whether participants regard all changes to widely shared moral beliefs as equally impactful. Adults, but not children, reported that individuals would change more if their good moral beliefs (e.g., it is not okay to hit) transformed into bad moral beliefs (e.g., it is okay to hit) than if the opposite change occurred. This difference in adults was mediated by perceptions of how much changes to each type of belief would alter friendships. We discuss implications for moral judgment and social cognitive development.

The research is here.

Saturday, March 31, 2018

Individual Moral Development and Moral Progress

Schinkel, A. & de Ruyter, D.J.
Ethical Theory and Moral Practice (2017) 20: 121.
https://doi.org/10.1007/s10677-016-9741-6

Abstract

At first glance, one of the most obvious places to look for moral progress is in individuals, in particular in moral development from childhood to adulthood. In fact, that moral progress is possible is a foundational assumption of moral education. Beyond the general agreement that moral progress is not only possible but even a common feature of human development, however, things become blurry. For what do we mean by ‘progress’? And what constitutes moral progress? Does the idea of individual moral progress presuppose a predetermined end or goal of moral education and development, or not? In this article we analyze the concept of moral progress to shed light on the psychology of moral development and vice versa; these analyses are found to be mutually supportive. We suggest that: moral progress should be conceived of as development that is evaluated positively on the basis of relatively stable moral criteria that are the fruit and the subject of an ongoing conversation; moral progress does not imply the idea of an end-state; individual moral progress is best conceived of as the development of various components of moral functioning and their robust integration in a person’s identity; both children and adults can progress morally - even though we would probably not speak in terms of progress in the case of children - but adults’ moral progress is both more hard-won and to a greater extent a personal project rather than a collective effort.

Download the paper here.

Monday, December 18, 2017

Is Pulling the Lever Sexy? Deontology as a Downstream Cue to Long-Term Mate Quality

Mitch Brown and Donald Sacco
Journal of Social and Personal Relationships
November 2017

Abstract

Deontological and utilitarian moral decisions have unique communicative functions within the context of group living. Deontology more strongly communicates prosocial intentions, fostering greater perceptions of trust and desirability in general affiliative contexts. This general trustworthiness may extend to perceptions of fidelity in romantic relationships, leading to perceptions of deontological persons as better long-term mates, relative to utilitarians. In two studies, participants indicated desirability of both deontologists and utilitarians in long- and short-term mating contexts. In Study 1 (n = 102), women perceived a deontological man as more interested in long-term bonds, more desirable for long-term mating, and less prone to infidelity, relative to a utilitarian man. However, utilitarian men were undesirable as short-term mates. Study 2 (n = 112) had both men and women rate opposite-sex targets’ desirability after learning of their moral decisions in a trolley problem. We replicated women’s preference for deontological men as long-term mates. Interestingly, both men and women reporting personal deontological motives were particularly sensitive to deontology communicating long-term desirability and fidelity, which could be a product of the general affiliative signal from deontology. Thus, one’s moral basis for decision-making, particularly deontologically motivated moral decisions, may communicate traits valuable in long-term mating contexts.

The research is here.

Thursday, November 16, 2017

Moral Hard-Wiring and Moral Enhancement

Introduction

In a series of papers (Persson & Savulescu 2008; 2010; 2011a; 2012a; 2013; 2014a) and a book (Persson & Savulescu 2012b), we have argued that there is an urgent need to pursue research into the possibility of moral enhancement by biomedical means – e.g. by pharmaceuticals, non-invasive brain stimulation, genetic modification or other means directly modifying biology. The present time brings existential threats which human moral psychology, with its cognitive and moral limitations and biases, is unfit to address. Exponentially increasing, widely accessible technological advance and rapid globalisation create threats of intentional misuse (e.g. biological or nuclear terrorism) and global collective action problems, such as the economic inequality between developed and developing countries and anthropogenic climate change, which human psychology is not set up to address. We have hypothesized that these limitations are the result of the evolutionary function of morality being to maximize the fitness of small cooperative groups competing for resources. Because these limitations of human moral psychology pose significant obstacles to coping with the current moral mega-problems, we argued that biomedical modification of human moral psychology may be necessary. We have not argued that biomedical moral enhancement would be a single “magic bullet” but rather that it could play a role in a comprehensive approach which also features cultural and social measures.

The paper is here.