Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Cognition.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.

The article is available via the link above.

Here is my summary:

This article examines the challenges of assessing the moral decision-making capabilities of artificial intelligence systems. Because human moral judgment often hinges on intangible properties like intention, which may have no clear analog in machines, a "like-for-like" comparison between human and artificial moral behavior is hard to design. The article argues for robust frameworks and methodologies to gauge the ethical behavior of AI, and it emphasizes the importance of developing reliable ways to evaluate the moral reasoning of artificial agents so they can be deployed responsibly across domains.
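
To make the evaluation puzzle concrete, here is a minimal sketch of the kind of like-for-like comparison the authors problematize: scoring an artificial agent's verdicts on a shared battery of moral scenarios against mean human ratings. The scenarios, the model_judgment stub, and the gap metric are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: comparing an artificial agent's moral verdicts
# with human judgments on a shared battery of scenarios.
# The scenario set, human ratings, and model stub are illustrative only.

SCENARIOS = {
    # scenario description -> mean human permissibility rating (1-7 scale)
    "Lie to protect a friend from embarrassment": 4.2,
    "Break a promise to gain a small advantage": 2.1,
    "Divert a harm so it strikes one person instead of five": 4.8,
}

def model_judgment(scenario: str) -> float:
    """Placeholder for an artificial agent's permissibility rating (1-7).

    A real evaluation would query an RL policy or a generative model here;
    this stub ignores the scenario and returns the scale midpoint.
    """
    return 4.0

def mean_absolute_gap(scenarios: dict[str, float]) -> float:
    """Average |model - human| rating gap across the battery."""
    gaps = [abs(model_judgment(s) - human) for s, human in scenarios.items()]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    print(f"Mean human-model gap: {mean_absolute_gap(SCENARIOS):.2f}")
```

Even a toy setup like this exposes the paper's worry: a low human-model gap says nothing about whether the agent grasps anything like the intentions that drive the human ratings.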

Sunday, January 22, 2023

No Peace for the Wicked? Immorality is (Usually) Thought to Disrupt Intrapersonal Harmony

Prinzing, M., & Fredrickson, B.
(2022, November 28). 
https://doi.org/10.31234/osf.io/ug8tk

Abstract

Past research has found that people who behave morally are seen as happier than people who behave immorally—even when their psychological states are described identically. This has led researchers to conclude that the ordinary concept of happiness includes a role for moral factors as well as psychological states. In three experiments (total N = 1,185), we found similar effects of moral evaluations on attributions of a range of psychological states, including positive attitudes towards one’s life and activities (Study 1), pleasant and unpleasant emotions in general (Studies 2-3) and life-satisfaction (Studies 2-3). This suggests that moral evaluations have pervasive effects on the psychological states that people attribute to others. We propose that this is because immorality is seen as disrupting intrapersonal harmony. That is, immoral people are thought to be less happy because they are thought to experience less positive psychological states, and this occurs when and because they are seen as being internally conflicted. Supporting this explanation, we found that immoral agents are seen as more internally conflicted than moral agents (Study 2), and that the effect of moral evaluations on positive psychological state attributions disappears when agents are described as being at peace with themselves (Study 3).

Implications and Conclusion

We set out to better understand why moral evaluations affect happiness judgments. One possibility is that, when people judge whether another person is happy, they are partly assessing whether that person experiences positive psychological states and partly assessing whether the person is living a good life. If that were so, then people would not consider immoral agents entirely happy—even if they recognized that the agents experience overwhelmingly positive psychological states. That is, morality does not affect the experiential states that people attribute to others—it affects whether they consider such states happiness. Yet, this research suggests a more striking conclusion. Our results indicate that people attribute experiential states, like pleasant emotions and satisfaction, differently depending on their moral judgments. Moreover, we found that this occurs when and because immorality is seen as a source of intrapersonal conflict. When people do not see immoral agents as more conflicted than moral agents, they do not attribute less happiness (or less positive emotion or less life-satisfaction) to those immoral agents. On the lay view, immorality typically means betraying one’s true self and disrupting one’s inner harmony, leading to at best an incomplete form of happiness. However, this is not always the case.

Hence, the ordinary concept of happiness appears to be similar to ancient Greek conceptions of eudaemonia (Aristotle, 2000; Plato, 2004). Roughly speaking, Plato believed that eudaemonia consists in a kind of intrapersonal harmony. He also argued that moral virtue was necessary for such harmony. Our findings suggest that 21st century Americans similarly see happiness as involving a kind of intrapersonal harmony. However, they don’t seem to think that harmony requires morality. Although immorality is usually a source of intrapersonal conflict, someone who behaves immorally can be happy so long as they can still find peace with themselves. Hence, according to folk wisdom, there may be very little peace for the wicked. But so long as they find it, there can be happiness too.

Thursday, January 12, 2023

Why Moral Judgments Affect Happiness Attributions: Testing the Fittingness and True Self Hypotheses

Prinzing, M., Knobe, J., & Earp, B. D.
(2022, November 25). 
https://doi.org/10.31234/osf.io/5dkp3

Abstract

Past research has found that people attribute less happiness to morally bad agents than to morally good agents. One proposed explanation for this effect is that attributions of emotions like happiness are influenced by judgments about their fittingness (i.e., whether they are merited). Another is that emotion attributions are influenced by judgments about whether they reflect the agent’s true self (i.e., who the agent is “deep down”). These two kinds of judgments are highly entangled for happiness, but perhaps less so for other emotions. Accordingly, we tested these hypotheses by examining attributions of happiness, love, sadness, and hatred. In Study 1, manipulating the fittingness of an agent’s emotion affected emotion attributions only when it also affected true self judgments. In Study 2, manipulating whether an agent’s emotion reflects his true self affected attributions of all emotions, regardless of the effects on fittingness judgments. Studies 3-4 examined attributions of “true happiness,” “true love,” “true sadness,” and “true hatred.” The fittingness manipulation again influenced attributions of “true” emotions only where it also affected true self judgments, whereas the true self manipulation affected attributions of all four “true” emotions. Overall, these results cast serious doubt on the fittingness hypothesis and offer some support for the true self hypothesis, which could be developed further in future work.

(cut)

What are “True” Emotions?

Past theoretical work on “true” emotions, such as true love and true happiness, has centered on the idea that emotions are true when they are fitting (De Sousa, 2002; Hamlyn, 1989; Salmela, 2006; Solomon, 2002). Yet the results of Studies 3-4 indicate that this is not what ordinary people think. We found that manipulating the fittingness of happiness and love affects their perceived trueness, but not so for sadness or hatred. By contrast, the true self manipulation affects the perceived trueness of all four emotions. These findings provide at least some initial support for a very different hypothesis about what people mean when they say that an emotion is “true,” namely, that an emotion is seen as “true” to the extent that it is seen as related in a certain kind of way to the agent’s true self.

Further research could continue to explore this hypothesis. One potential source of evidence would be patterns in people’s judgments about whether it even makes sense to use the word “true” to describe a particular emotion. In other work (Earp et al., 2022), we asked participants about the degree to which it makes sense to call various emotions “true.” Happiness and love had the highest average scores, with most people thinking it makes perfect sense to say “true happiness” or “true love.” Grumpiness and lust had the lowest averages, with most people thinking that it does not make any sense to say “true grumpiness” or “true lust.” A natural further question would be whether the true self hypothesis can explain this pattern. Is there a general tendency such that the emotions that can appropriately be called “true” are also the emotions that people think can be rooted in a person’s true self?

As another strategy for better understanding the way people apply the word “true” with emotion words, we might turn to research on apparently similar phrases that are not concerned with emotions in particular: for example, “true scientist,” “true work of art,” or “true friend” (Del Pinal, 2018; Knobe et al., 2013; Leslie, 2015; Reuter, 2019). It’s possible that, although “true” is also used in these cases, it means something quite different and unrelated to what it means when applied to emotions. However, it’s also possible that it is related, and that insight could therefore be gained by investigating connections with these seemingly distant concepts.

Wednesday, August 18, 2021

The Shape of Blame: How statistical norms impact judgments of blame and praise

Bostyn, D. H., & Knobe, J. (2020, April 24). 
https://doi.org/10.31234/osf.io/2hca8

Abstract

For many types of behaviors, whether a specific instance of that behavior is blameworthy or praiseworthy depends on how much of the behavior is done or how people go about doing it. For instance, for a behavior such as “replying quickly to emails”, whether a specific reply is blameworthy or praiseworthy will depend on the timeliness of that reply. Such behaviors lie on a continuum in which part of the continuum is praiseworthy (replying quickly) and another part of the continuum is blameworthy (replying late). As praise shifts towards blame along such behavioral continua, the resulting blame-praise curve must have a specific shape. A number of questions therefore arise. What determines the shape of that curve? And what determines “the neutral point”, i.e., the point along a behavioral continuum at which people neither blame nor praise? Seven studies explore these issues, focusing specifically on the impact of statistical information, and provide evidence for a hypothesis we call the “asymmetric frequency hypothesis.”

From the Discussion

Asymmetric frequency and moral cognition

The results obtained here appear to support the asymmetric frequency hypothesis. So far, we have summarized this hypothesis as “People tend to perceive frequent behaviors as not blameworthy.” But how exactly is this hypothesis best understood?

Importantly, the asymmetric frequency effect does not imply that whenever a behavior becomes more frequent, the associated moral judgment will shift towards the neutral. Behaviors that are considered to be praiseworthy do not appear to become more neutral simply because they become more frequent. The effect of frequency only appears to occur when a behavior is blameworthy, which is why we dubbed it an asymmetric effect.

An enlightening historical example in this regard is perhaps the “gay revolution” (Faderman, 2015). As knowledge of the rate of homosexuality has spread across society and people have become more familiar with homosexuality within their own communities, moral norms surrounding homosexuality have shifted from hostility to increasing acceptance (Gallup, 2019). Crucially, however, those who already lauded others for having a loving homosexual relationship did not shift their judgment towards neutral indifference over the same time period. While frequency mitigates blameworthiness, it does not cause a general shift towards neutrality. Even when everyone does the right thing, it does not lose its moral shine.
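
For readers who like a formal gloss, here is one toy way to encode the asymmetric frequency hypothesis in code; it is our illustration, not the authors' model. Baseline valence runs from blameworthy (negative) to praiseworthy (positive), and frequency information discounts only the blame side of the curve.

```python
# Toy illustration (not the authors' model) of the asymmetric
# frequency hypothesis: frequency attenuates blame but not praise.

def judged_valence(baseline: float, frequency: float) -> float:
    """Return a hypothetical moral judgment of a behavior.

    baseline: valence absent statistical information, from -1.0
              (maximally blameworthy) to +1.0 (maximally praiseworthy).
    frequency: proportion of people who perform the behavior (0.0-1.0).
    """
    if baseline >= 0.0:
        # Praiseworthy behaviors keep their moral shine however common they are.
        return baseline
    # Blameworthy behaviors are judged less harshly the more common they are.
    return baseline * (1.0 - frequency)

if __name__ == "__main__":
    for freq in (0.1, 0.5, 0.9):
        late_reply = judged_valence(-0.8, freq)   # blameworthy behavior
        quick_reply = judged_valence(0.8, freq)   # praiseworthy behavior
        print(f"freq={freq:.1f}  late={late_reply:+.2f}  quick={quick_reply:+.2f}")
```

Running the demo shows blame for the common behavior shrinking toward the neutral point while praise stays fixed, which is the asymmetry the authors describe.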

Thursday, September 17, 2020

Sensitivity to Ingroup and Outgroup Norms in the Association Between Commonality and Morality

M. R. Goldring & L. Heiphetz
Journal of Experimental Social Psychology
Volume 91, November 2020, 104025

Abstract

Emerging research suggests that people infer that common behaviors are moral and vice versa. The studies presented here investigated the role of group membership in inferences regarding commonality and morality. In Study 1, participants expected a target character to infer that behaviors that were common among their ingroup were particularly moral. However, the extent to which behaviors were common among the target character’s outgroup did not influence expectations regarding perceptions of morality. Study 2 reversed this test, finding that participants expected a target character to infer that behaviors considered moral among their ingroup were particularly common, regardless of how moral their outgroup perceived those behaviors to be. While Studies 1-2 relied on fictitious behaviors performed by novel groups, Studies 3-4 generalized these results to health behaviors performed by members of different racial groups. When answering from another person’s perspective (Study 3) and from their own perspective (Study 4), participants reported that the more common behaviors were among their ingroup, the more moral those behaviors were. This effect was significantly weaker for perceptions regarding outgroup norms, although outgroup norms did exert some effect in this real-world context. Taken together, these results highlight the complex integration of ingroup and outgroup norms in socio-moral cognition.

A pdf of the article can be found here.

In sum: Actions that are common among the ingroup are seen as particularly moral. But actions that are common among the outgroup have little bearing on our judgments of morality.

Tuesday, June 23, 2020

The Neuroscience of Moral Judgment: Empirical and Philosophical Developments

J. May, C. I. Workman, J. Haas, & H. Han
Forthcoming in Neuroscience and Philosophy,
eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press).

Abstract

We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of moral judgment in adulthood and to some extent in childhood and adolescence. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.

From the Conclusion

6.1 Reason vs. Emotion in Ethics

The dichotomy between reason and emotion stretches back to antiquity. But an improved understanding of the brain has, arguably more than psychological science, questioned the dichotomy (Huebner 2015; Woodward 2016). Brain areas associated with prototypical emotions, such as vmPFC and amygdala, are also necessary for complex learning and inference, even if largely automatic and unconscious. Even psychopaths, often painted as the archetype of emotionless moral monsters, have serious deficits in learning and inference. Moreover, even if our various moral judgments about trolley problems, harmless taboo violations, and the like are often automatic, they are nonetheless acquired through sophisticated learning mechanisms that are responsive to morally-relevant reasons (Railton 2017; Stanley et al. 2019). Indeed, normal moral judgment often involves gut feelings being attuned to relevant experience and made consistent with our web of moral beliefs (May & Kumar 2018).

The paper can be downloaded here.

Sunday, March 1, 2020

The Dark Side of Morality: Group Polarization and Moral Epistemology

Marcus Arvan
PsyArXiv
Originally published December 12, 2019

Abstract

This paper shows that philosophers and laypeople commonly conceptualize moral truths as discoverable through intuition, argument, or some other process. It then argues that three empirically-supported theories of group polarization suggest that this Discovery Model of morality likely plays a substantial role in causing polarization—a phenomenon known to produce a wide variety of disturbing social effects, including increasing prejudice, selfishness, divisiveness, mistrust, and violence. This paper then uses the same three empirical theories to argue that an alternative Negotiation Model of morality—according to which moral truths are instead created by negotiation—promises to not only mitigate polarization but perhaps even foster its opposite: a progressive willingness to “work across the aisle” to settle contentious moral issues cooperatively. Finally, I outline avenues for further empirical and philosophical research.

Conclusion

Laypeople and philosophers tend to treat moral truths as discoverable through intuition, argument, or other cognitive or affective processes. However, we have seen that there are strong theoretical reasons—based on three empirically-supported theories of group polarization—to believe this Discovery Model of morality is a likely cause of polarization: a social-psychological phenomenon known to have a wide variety of disturbing social effects. We then saw that there are complementary theoretical reasons to believe that an alternative, Negotiation Model of morality might not only mitigate polarization but actually foster its opposite: an increasing willingness to work together to arrive at compromises on moral controversies. While this paper does not prove the existence of the hypothesized relationships between the Discovery Model, Negotiation Model, and polarization, it demonstrates that there are ample theoretical reasons to believe that such relationships are likely and worthy of further empirical and philosophical research.

Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioral scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013), and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioral science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioral scientists and computer scientists to ensure that these algorithms are as ethical as possible.
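
The fairness concerns raised here are often operationalized with simple statistical audits. As a hedged illustration (our sketch, nothing from the paper itself), the snippet below computes one of the most common checks, the demographic parity gap: the spread in an algorithm's favorable-decision rates across groups. The group labels and sample data are hypothetical.

```python
# Illustrative fairness audit (not from the paper): demographic parity,
# i.e., whether an algorithm's favorable-decision rate differs across groups.
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group label, decision) pairs, where decision 1 = favorable.

    Returns the gap between the highest and lowest per-group rates
    of favorable decisions; 0.0 means perfect parity on this metric.
    """
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        favorable[group] += decision
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Hypothetical loan decisions tagged with applicant group.
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")
```

Parity on one such metric can conflict with parity on others, which is one reason the authors argue that behavioral scientists should help determine which notions of fairness actually promote trust.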

The paper is here.

Monday, July 22, 2019

Understanding the process of moralization: How eating meat becomes a moral issue

Feinberg, M., Kovacheff, C., Teper, R., & Inbar, Y. (2019).
Journal of Personality and Social Psychology, 117(1), 50-72.

Abstract

A large literature demonstrates that moral convictions guide many of our thoughts, behaviors, and social interactions. Yet, we know little about how these moral convictions come to exist. In the present research we explore moralization—the process by which something that was morally neutral takes on moral properties—examining what factors facilitate and deter it. In 3 longitudinal studies participants were presented with morally evocative stimuli about why eating meat should be viewed as a moral issue. Study 1 tracked students over a semester as they took a university course that highlighted the suffering animals endure because of human meat consumption. In Studies 2 and 3 participants took part in a mini-course we developed which presented evocative videos aimed at inducing moralization. In all 3 studies, we assessed participants’ beliefs, attitudes, emotions, and cognitions at multiple time points to track moral changes and potential factors responsible for such changes. A variety of factors, both cognitive and affective, predicted participants’ moralization or lack thereof. Model testing further pointed to two primary conduits of moralization: the experience of moral emotions (e.g., disgust, guilt) felt when contemplating the issue, and moral piggybacking (connecting the issue at hand with one’s existing fundamental moral principles). Moreover, we found individual differences, such as how much one holds their morality as central to their identity, also predicted the moralization process. We discuss the broad theoretical and applied implications of our results.

A pdf can be viewed here.

Tuesday, June 19, 2018

Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas

Dries H. Bostyn, Sybren Sevenhant, and Arne Roets
Psychological Science 
First Published May 9, 2018

Abstract

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

The research is here.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent would be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent would be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.

Sunday, April 29, 2018

Who Am I? The Role of Moral Beliefs in Children’s and Adults’ Understanding of Identity

Larisa Heiphetz, Nina Strohminger, Susan A. Gelman, and Liane L. Young
Forthcoming: Journal of Experimental Social Psychology

Abstract

Adults report that moral characteristics—particularly widely shared moral beliefs—are central to identity. This perception appears driven by the view that changes to widely shared moral beliefs would alter friendships and that this change in social relationships would, in turn, alter an individual’s personal identity. Because reasoning about identity changes substantially during adolescence, the current work tested pre- and post-adolescents to reveal the role that such changes could play in moral cognition. Experiment 1 showed that 8- to 10-year-olds, like adults, judged that people would change more after changes to their widely shared moral beliefs (e.g., whether hitting is wrong) than after changes to controversial moral beliefs (e.g., whether telling prosocial lies is wrong). Following up on this basic effect, a second experiment examined whether participants regard all changes to widely shared moral beliefs as equally impactful. Adults, but not children, reported that individuals would change more if their good moral beliefs (e.g., it is not okay to hit) transformed into bad moral beliefs (e.g., it is okay to hit) than if the opposite change occurred. This difference in adults was mediated by perceptions of how much changes to each type of belief would alter friendships. We discuss implications for moral judgment and social cognitive development.

The research is here.

Wednesday, June 7, 2017

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, November 3, 2015

The neuroscience of moral cognition: from dual processes to dynamic systems

Jay J Van Bavel, Oriel FeldmanHall, Peter Mende-Siedlecki
Current Opinion in Psychology
Volume 6, December 2015, Pages 167–172

Abstract

Prominent theories of morality have integrated philosophy with psychology and biology. Although this approach has been highly generative, we argue that it does not fully capture the rich and dynamic nature of moral cognition. We review research from the dual-process tradition, in which moral intuitions are automatically elicited and reasoning is subsequently deployed to correct these initial intuitions. We then describe how the computations underlying moral cognition are diverse and widely distributed throughout the brain. Finally, we illustrate how social context modulates these computations, recruiting different systems for real (vs. hypothetical) moral judgments, examining the dynamic process by which moral judgments are updated. In sum, we advocate for a shift from dual-process to dynamic system models of moral cognition.

The entire article is here.

Saturday, October 3, 2015

Neural Foundation of Morality

Roland Zahn, Ricardo de Oliveira-Souza, & Jorge Moll
International Encyclopedia of the Social & Behavioral Sciences (Second Edition)
2015, Pages 606–618

Abstract

Moral behavior is one of the most sophisticated human abilities. Many social species behave altruistically toward their kin, but humans are unique in their ability to serve complex and changing societal needs. Cognitive neuroscience has started to elucidate specific brain mechanisms underpinning moral behavior, emotion, and motivation, emphasizing that these ingredients are also germane to human biology, rather than pure societal artifacts. The brain is where psychosocial learning and biology meet to produce the rich individual variability in moral behavior. This article discusses how cognitive neuroscience improves the understanding of this variability and associated suffering in neuropsychiatric conditions.

The entire article is here.

Wednesday, December 24, 2014

What do Philosophers of Mind Actually do: Some Quantitative Data

By Joshua Knobe
The Brains Blog
Originally published December 5, 2014

There seems to be a widely shared sense these days that the philosophical study of mind has been undergoing some pretty dramatic changes. Back in the twentieth century, the field was dominated by a very specific sort of research program, but it seems like less and less work is being done within that traditional program, while there is an ever greater amount of work pursuing issues that have a completely different sort of character.

To get a better sense for precisely how the field has changed, I thought it might be helpful to collect some quantitative data. Specifically, I compared a sample of highly cited papers from the past five years (2009-2013) with a sample of highly cited papers from a period in the twentieth century (1960-1999). You can find all of the nitty gritty details in this forthcoming paper, but the basic results are pretty easy to summarize.

The entire blog post is here.

Friday, August 15, 2014

Moral judgement in adolescents: Age differences in applying and justifying three principles of harm

Paul C. Stey, Daniel Lapsley & Mary O. McKeever
European Journal of Developmental Psychology
Volume 10, Issue 2, 2013
DOI:10.1080/17405629.2013.765798

Abstract

This study investigated the application and justification of three principles of harm in a cross-sectional sample of adolescents in order to test recent theories concerning the source of intuitive moral judgements. Participants were 46 early (M age = 14.8 years) and 40 late adolescents (M age = 17.8 years). Participants rated the permissibility of various ethical dilemmas, and provided justifications for their judgements. Results indicated participants aligned their judgements with the three principles of harm, but had difficulty explaining their reasoning. Furthermore, although age groups were consistent in the application of the principles of harm, age differences emerged in their justifications. These differences were partly explained by differences in language ability. Additionally, participants who used emotional language in their justifications demonstrated a characteristically deontological pattern of moral judgement on certain dilemmas. We conclude that adolescents in this age range apply the principles of harm but that the ability to explain their judgements is still developing.

The entire article is here.

Tuesday, July 15, 2014

Moral bioenhancement: a neuroscientific perspective

By Molly Crockett
J Med Ethics 2014;40:370-371
doi:10.1136/medethics-2012-101096

Can advances in neuroscience be harnessed to enhance human moral capacities? And if so, should they? DeGrazia explores these questions in ‘Moral Enhancement, Freedom, and What We (Should) Value in Moral Behaviour’. Here, I offer a neuroscientist's perspective on the state of the art of moral bioenhancement, and highlight some of the practical challenges facing the development of moral bioenhancement technologies.

The science of moral bioenhancement is in its infancy. Laboratory studies of human morality usually employ highly simplified models aimed at measuring just one facet of a cognitive process that is relevant for morality. These studies have certainly deepened our understanding of the nature of moral behaviour, but it is important to avoid overstating the conclusions of any single study.

The entire article is here.

Thursday, July 10, 2014

The Tragedy of Moral Licensing

A non-replication that threatens the public trust in psychology

By Rolf Degen
Google+ page
Shared publicly on May 20, 2014

Moral licensing is one of the most influential psychological effects discovered in the last decade. It refers to our increased tendency to act immorally if we have already displayed our moral righteousness. In essence, it means that after you have done something nice, you think you have the license to do something not so nice. The effect was immediately picked up by all new psychological textbooks, portrayed repeatedly in the media, and it even got its own Wikipedia page (Do we have to take that one down?).

The entire Google+ essay is here.

Friday, November 8, 2013

Do Emotions Play a Constitutive Role in Moral Cognition?

By Bryce Huebner
Georgetown University
February 2013

Behavioral experiments have revealed that the presence of an emotion-eliciting stimulus can affect the severity of a person's moral judgments, while imaging experiments have revealed that moral judgments evoke increased activity in brain regions classically associated with emotion, and studies using patient populations have confirmed that damage to these areas has a significant impact on the ability to make moral judgments. To many, these data seem to suggest that emotions may play a robustly causal or perhaps even a constitutive role in moral cognition (Cushman, Young, & Greene 2010; Greene et al. 2001, 2004; Nichols 2002, 2004; Paxton & Greene 2010; Plakias 2013; Prinz 2007; Strohminger et al. 2011; Valdesolo & DeSteno 2006). But others have noted that the existing data are also consistent with the possibility that emotions operate outside of moral cognition, ‘gating’ off morally significant information, or ‘amplifying’ the output of distinctively moral computations (Decety & Cacioppo 2012; Huebner, Dwyer, & Hauser 2009; Mikhail 2011; Pizarro, Inbar, & Helion 2011). While it is commonly thought that this debate can be settled by collecting further data, I maintain that the theoretical foundations of moral psychology are themselves to blame for this intractable dispute, and my primary aim in this paper is to make a case for this claim.

The entire paper is here.