Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Development. Show all posts

Wednesday, February 22, 2023

How and Why People Want to Be More Moral

Sun, J., Wilt, J. A., et al. (2022, October 13).


What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions. Across two large, preregistered studies (N = 1,818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change. In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.

From the General Discussion section

Self-Interest Is a Key Motivation for Moral Improvement

What motivates people to be more moral? From the perspective that the function of morality is to suppress selfishness for the benefit of others (Haidt & Kesebir, 2010; Wolf, 1982), we might expect people to believe that moral improvements would primarily benefit others (rather than themselves). By a similar logic, people should also primarily want to be more moral for the sake of others (rather than for their own sake).

Surprisingly, however, this was not overwhelmingly the case. Instead, across both studies, participants were approximately equally split between those who believed that others would benefit the most and those who believed that they themselves would benefit the most (with the exception of compassion; see Figure S2). The finding that people perceive some personal benefits to becoming more moral has been demonstrated in recent research (Sun & Berman, in prep). In light of evidence that moral people tend to be happier (Sun et al., in prep) and that the presence of moral struggles predicts symptoms of depression and anxiety (Exline et al., 2014), such beliefs might also be somewhat accurate. However, it is unclear why people believe that becoming more moral would benefit themselves more than it would others. Speculatively, one possibility is that people can more vividly imagine the impacts of their own actions on their own well-being, whereas they are much more uncertain about how their actions would affect others—especially when the impacts might be spread across many beneficiaries.

However, it is also possible that this finding only applies to self-selected moral improvements, rather than the universe of all possible moral improvements. That is, when asked what they could do to become more moral, people might more readily think of improvements that would improve their own well-being to a greater extent than the well-being of others. But, if we were to ask people to predict who would benefit the most from various moral improvements that were selected by researchers, people may be less likely to believe that it would be themselves. Future research should systematically study people’s evaluations of how various moral improvements would impact their own and others’ well-being.

Similarly, when explicitly asked for whose sake they were most motivated to make their moral improvement, almost half of the participants admitted that they were most motivated to change for their own sake (rather than for the sake of others). However, when predicting motivation from both the expected well-being consequences for the self and the well-being consequences for others, we found that people’s perceptions of personal well-being consequences were a significantly stronger predictor in both studies. In other words, if anything, people are relatively more motivated to make moral improvements for their own sake than for the sake of others. This is consistent with the findings of another study, which examined people’s interest in changing a variety of moral and nonmoral traits and showed that people are particularly interested in improving the traits that they believed would make them relatively happier (Sun & Berman, in prep). Here, it is striking that personal fulfilment remains the most important motivator of personal improvement even exclusively in the moral domain.

Thursday, November 11, 2021

Revisiting the Social Origins of Human Morality: A Constructivist Perspective on the Nature of Moral Sense-Making

Segovia-Cuéllar, A.
Topoi (2021). 


A recent turn in the cognitive sciences has deepened the attention on embodied and situated dynamics for explaining different cognitive processes such as perception, emotion, and social cognition. This has fostered an extensive interest in the social and ‘intersubjective’ nature of moral behavior, especially from the perspective of enactivism. In this paper, I argue that embodied and situated perspectives, enactivism in particular, nonetheless require further improvements with regard to their analysis of the social nature of human morality. In brief, enactivist proposals still do not define what features of the social-relational context, or which kinds of processes within social interactions, make an evaluation or action morally relevant or distinctive from other types of social normativity. As an alternative to this proclivity, and seeking to complement the enactive perspective, I present a definition of the process of moral sense-making and offer an empirically-based ethical distinction between different domains of social knowledge in moral development. To do so, I take insights from the constructivist tradition in moral psychology. My objective is not to radically oppose embodied and enactive alternatives but to expand the horizon of their conceptual and empirical contributions to morality research.

From the Conclusions

To sum up, for humans to think morally in social environments it is necessary to develop a capacity to recognize morally relevant scenarios, to identify moral transgressions, to feel concerned about morally divergent issues, and to make judgments and decisions with morally relevant consequences. Our moral life involves the flexible application of moral principles since concerns about welfare, justice, and rights are sensitive and contingent on social and contextual factors. Moral motivation and reasoning are situated and embedded phenomena, and the result of a very complex developmental process.

In this paper, I have argued that embodied perspectives, enactivism included, face important challenges that result from their analysis of the social origins of human morality. My main objective has been to expand the horizon of conceptual, empirical, and descriptive implications that they need to address in the construction of a coherent ethical perspective. I have done so by presenting a constructivist approach to the social origins of human morality, taking insights from the cognitive-evolutionary tradition in moral psychology. This alternative radically eschews dichotomies to explain human moral behavior. Moreover, based on the constructivist definition of the moral domain of social knowledge, I have offered a basic notion of moral sense-making and I have called attention to the relevance of distinguishing what makes the development of moral norms different from the development of other domains of social normativity.

Sunday, November 22, 2020

The logic of universalization guides moral judgment

Levine, S., et al.
PNAS October 20, 2020 
117 (42) 26158-26169; 
first published October 2, 2020; 


To explain why an action is wrong, we sometimes say, “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that, alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalization holds an important place in our moral minds.


Humans have several different ways to decide whether an action is wrong: We might ask whether it causes harm or whether it breaks a rule. Moral psychology attempts to understand the mechanisms that underlie moral judgments. Inspired by theories of “universalization” in moral philosophy, we describe a mechanism that is complementary to existing approaches, demonstrate it in both adults and children, and formalize a precise account of its cognitive mechanisms. Specifically, we show that, when making judgments in novel circumstances, people adopt moral rules that would lead to better consequences if (hypothetically) universalized. Universalization may play a key role in allowing people to construct new moral rules when confronting social dilemmas such as voting and environmental stewardship.
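The universalization logic described above can be sketched in a few lines of code. This is a hypothetical illustration only, not the authors' actual computational model; the utility function, names, and threshold are invented. The idea: an action that is harmless when one person performs it is judged wrong if the outcome would be unacceptable were everyone interested in performing it to do so.

```python
def outcome_utility(num_actors: int, capacity: int) -> float:
    """Toy utility: a shared resource supports up to `capacity` users;
    beyond that, the outcome collapses for everyone."""
    return 1.0 if num_actors <= capacity else -1.0

def is_wrong_when_universalized(num_interested: int, capacity: int,
                                threshold: float = 0.0) -> bool:
    """Judge an action wrong if hypothetically universalizing it
    (all interested parties act) yields an unacceptable outcome."""
    return outcome_utility(num_interested, capacity) < threshold

# One person drawing on a shared resource is harmless; if all ten
# interested parties did so and it supports only three, the action
# is judged wrong once universalized.
print(is_wrong_when_universalized(num_interested=1, capacity=3))   # False
print(is_wrong_when_universalized(num_interested=10, capacity=3))  # True
```

Note how the judgment depends on the number of *interested* parties, not the number who actually act; that dependence is the model's distinguishing quantitative prediction.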

Tuesday, October 27, 2020

(Peer) group influence on children's prosocial and antisocial behavior

A. Misch & Y. Dunham


This study investigates the influence of moral in- vs. outgroup behavior on 5-6 and 8-9-year-olds' own moral behavior (N=296). After minimal group assignment, children in Experiment 1 observed adult ingroup or outgroup members engaging in prosocial sharing or antisocial stealing, before they themselves had the opportunity to privately donate stickers or take away stickers from others. Older children shared more than younger children, and prosocial models elicited higher sharing. Surprisingly, group membership had no effect. Experiment 2 investigated the same question using peer models. Children in the younger age group were significantly influenced by ingroup behavior, while older children were not affected by group membership. Additional measures reveal interesting insights into how moral in- and outgroup behavior affects intergroup attitudes, evaluations and choices.

From the Discussion

Thus, while results of our main measure generally support the hypothesis that children are susceptible to social influence, we found that children are not blindly conformist; rather, in contrast to previous research (Wilks et al., 2019) we found that conformity to antisocial behavior was low in general and restricted to younger children watching peer models.  Vulnerability to peer group influence in younger children has also been reported in previous studies on conformity (Haun & Tomasello, 2011; Engelmann et al., 2016) as well as research demonstrating a primacy of group interests over moral concerns (Misch et al., 2018). Thus, our study highlights the younger age group as a time in children’s development in which they seem to be particularly sensitive to peer influences, for better or worse, perhaps indicating a sort of “sensitive period” in which children are working to extract the norms embedded in peer behavior. 

Tuesday, June 23, 2020

The Neuroscience of Moral Judgment: Empirical and Philosophical Developments

J. May, C. I. Workman, J. Haas, & H. Han
Forthcoming in Neuroscience and Philosophy,
eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press).


We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of moral judgment in adulthood and to some extent in childhood and adolescence. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.

From the Conclusion

6.1 Reason vs. Emotion in Ethics

The dichotomy between reason and emotion stretches back to antiquity. But an improved understanding of the brain has, arguably more than psychological science, questioned the dichotomy (Huebner 2015; Woodward 2016). Brain areas associated with prototypical emotions, such as vmPFC and amygdala, are also necessary for complex learning and inference, even if largely automatic and unconscious. Even psychopaths, often painted as the archetype of emotionless moral monsters, have serious deficits in learning and inference. Moreover, even if our various moral judgments about trolley problems, harmless taboo violations, and the like are often automatic, they are nonetheless acquired through sophisticated learning mechanisms that are responsive to morally-relevant reasons (Railton 2017; Stanley et al. 2019). Indeed, normal moral judgment often involves gut feelings being attuned to relevant experience and made consistent with our web of moral beliefs (May & Kumar 2018).

The paper can be downloaded here.

Thursday, November 15, 2018

The Impact of Leader Moral Humility on Follower Moral Self-Efficacy and Behavior

Owens, B. P., Yam, K. C., Bednar, J. S., Mao, J., & Hart, D. W.
Journal of Applied Psychology. (2018)


This study utilizes social–cognitive theory, humble leadership theory, and the behavioral ethics literature to theoretically develop the concept of leader moral humility and its effects on followers. Specifically, we propose a theoretical model wherein leader moral humility and follower implicit theories about morality interact to predict follower moral efficacy, which in turn increases follower prosocial behavior and decreases follower unethical behavior. We furthermore suggest that these effects are strongest when followers hold an incremental implicit theory of morality (i.e., believing that one’s morality is malleable). We test and find support for our theoretical model using two multiwave studies with Eastern (Study 1) and Western (Study 2) samples. Furthermore, we demonstrate that leader moral humility predicts follower moral efficacy and moral behaviors above and beyond the effects of ethical leadership and leader general humility.

Here is the conclusion:

We introduced the construct of leader moral humility and theorized its effects on followers. Two studies with samples from both Eastern and Western cultures provided empirical support that leader moral humility enhances followers’ moral self-efficacy, which in turn leads to increased prosocial behavior and decreased unethical behavior. We further demonstrated that these effects depend on followers’ implicit theories of the malleability of morality. More important, we found that these effects were above and beyond the influences of general humility, ethical leadership, LMX, and ethical norms of conduct, providing support for the theoretical and practical importance of this new leadership construct. Our model is based on the general proposal that we need followers who believe in and leaders who model ongoing moral development. We hope that the current research inspires further exploration regarding how leaders and followers interact to shape and facilitate a more ethical workplace.

The article is here.

Tuesday, May 1, 2018

If we want moral AI, we need to teach it right from wrong

Emma Kendrew
Management Today
Originally posted April 3, 2018

Here is an excerpt:

Ethical constructs need to come before, not after, developing other skills. We teach children morality before maths. When they can be part of a social environment, we teach them language skills and reasoning. All of this happens before they enter a formal classroom.

Four out of five executives see AI working next to humans in their organisations as a co-worker within the next two years. It’s imperative that we learn to nurture AI to address many of the same challenges faced in human education: fostering an understanding of right and wrong, and what it means to behave responsibly.

AI Needs to Be Raised to Benefit Business and Society

AI is becoming smarter and more capable than ever before. With neural networks giving AI the ability to learn, the technology is evolving into an independent problem solver.

Consequently, we need to create learning-based AI that fosters ethics and behaves responsibly – imparting knowledge without bias, so that AI will be able to operate more effectively in the context of its situation. It will also be able to adapt to new requirements based on feedback from both its artificial and human peers. This feedback loop is an essential and fundamental part of human learning.

The information is here.

Saturday, March 31, 2018

Individual Moral Development and Moral Progress

Schinkel, A. & de Ruyter, D.J.
Ethical Theory and Moral Practice (2017) 20: 121.


At first glance, one of the most obvious places to look for moral progress is in individuals, in particular in moral development from childhood to adulthood. In fact, that moral progress is possible is a foundational assumption of moral education. Beyond the general agreement that moral progress is not only possible but even a common feature of human development things become blurry, however. For what do we mean by ‘progress’? And what constitutes moral progress? Does the idea of individual moral progress presuppose a predetermined end or goal of moral education and development, or not? In this article we analyze the concept of moral progress to shed light on the psychology of moral development and vice versa; these analyses are found to be mutually supportive. We suggest that: moral progress should be conceived of as development that is evaluated positively on the basis of relatively stable moral criteria that are the fruit and the subject of an ongoing conversation; moral progress does not imply the idea of an end-state; individual moral progress is best conceived of as the development of various components of moral functioning and their robust integration in a person’s identity; both children and adults can progress morally - even though we would probably not speak in terms of progress in the case of children - but adults’ moral progress is both more hard-won and to a greater extent a personal project rather than a collective effort.

Download the paper here.

Monday, January 15, 2018

The media needs to do more to elevate a national conversation about ethics

Arthur Caplan
Originally posted December 21, 2017

Here is an excerpt:

Obviously unethical conduct has been around forever and will be into the foreseeable future. That said, it is important that the leaders of this nation and, more importantly, those leading our key institutions and professions reaffirm their commitment to the view that there are higher values worth pursuing in a just society. The fact that so many fail to live up to basic values does not mean that the values are meaningless, wrong or misplaced. They aren’t. It is rather that the organizations and professions where the epidemic of moral failure is burgeoning have put other values, often power and profits, ahead of morality.

There is no simple fix for hypocrisy. Egoism, the gross abuse of power and self-indulgence, is a very tough moral opponent in an individualistic society like America. Short-term reward is deceptively more attractive than slogging out the virtues in the name of the long haul. If we are to prepare our children to succeed, then attending to their moral development is as important as anything we can do. If our leaders are to truly lead then we have to reward those who do, not those who don’t, won’t or can’t. Are we?

The article is here.

Friday, October 21, 2016

When the Spirit Is Willing, but the Flesh Is Weak: Developmental Differences in Judgments About Inner Moral Conflict

Christina Starmans & Paul Bloom
Psychological Science 
September 27, 2016


Sometimes it is easy to do the right thing. But often, people act morally only after overcoming competing immoral desires. How does learning about someone’s inner moral conflict influence children’s and adults’ moral judgments about that person? Across four studies, we discovered a striking developmental difference: When the outcome is held constant, 3- to 8-year-old children judge someone who does the right thing without experiencing immoral desires to be morally superior to someone who does the right thing through overcoming conflicting desires—but adults have the opposite intuition. This developmental difference also occurs for judgments of immoral actors: Three- to 5-year-olds again prefer the person who is not conflicted, whereas older children and adults judge that someone who struggles with the decision is morally superior. Our findings suggest that children may begin with the view that inner moral conflict is inherently negative, but, with development, come to value the exercise of willpower and self-control.

The article is here.

Thursday, October 13, 2016

The influence of intention, outcome and question-wording on children’s and adults’ moral judgments

Gavin Nobes, Georgia Panagiotaki, Kimberley J. Bartholomew
Volume 157, December 2016, Pages 190–204


The influence of intention and outcome information on moral judgments was investigated by telling children aged 4–8 years and adults (N = 169) stories involving accidental harms (positive intention, negative outcome) or attempted harms (negative intention, positive outcome) from two studies (Helwig, Zelazo, & Wilson, 2001; Zelazo, Helwig, & Lau, 1996). When the original acceptability (wrongness) question was asked, the original findings were closely replicated: children’s and adults’ acceptability judgments were based almost exclusively on outcome, and children’s punishment judgments were also primarily outcome-based. However, when this question was rephrased, 4–5-year-olds’ judgments were approximately equally influenced by intention and outcome, and from 5–6 years they were based considerably more on intention than outcome. These findings indicate that, for methodological reasons, children’s (and adults’) ability to make intention-based judgments has often been substantially underestimated.

The article is here.

Monday, October 3, 2016

Moral learning: Why learning? Why moral? And why now?

Peter Railton


What is distinctive about bringing a learning perspective to moral psychology? Part of the answer lies in the remarkable transformations that have taken place in learning theory over the past two decades, which have revealed how powerful experience-based learning can be in the acquisition of abstract causal and evaluative representations, including generative models capable of attuning perception, cognition, affect, and action to the physical and social environment. When conjoined with developments in neuroscience, these advances in learning theory permit a rethinking of fundamental questions about the acquisition of moral understanding and its role in the guidance of behavior. For example, recent research indicates that spatial learning and navigation involve the formation of non-perspectival as well as ego-centric models of the physical environment, and that spatial representations are combined with learned information about risk and reward to guide choice and potentiate further learning. Research on infants provides evidence that they form non-perspectival expected-value representations of agents and actions as well, which help them to navigate the human environment. Such representations can be formed by highly-general mental processes such as causal and empathic simulation, and thus afford a foundation for spontaneous moral learning and action that requires no innate moral faculty and can exhibit substantial autonomy with respect to community norms. If moral learning is indeed integral with the acquisition and updating of causal and evaluative models, this affords a new way of understanding well-known but seemingly puzzling patterns in intuitive moral judgment—including the notorious “trolley problems.”

The article is here.

Thursday, September 29, 2016

Priming Children’s Use of Intentions in Moral Judgement with Metacognitive Training

Gvozdic, K., et al.
Frontiers in Psychology  
18 March 2016


Typically, adults give a primary role to the agent's intention to harm when performing a moral judgment of accidental harm. By contrast, children often focus on outcomes, underestimating the actor's mental states when judging someone for his action, and rely on what we suppose to be intuitive and emotional processes. The present study explored the processes involved in the development of the capacity to integrate agents' intentions into their moral judgment of accidental harm in 5- to 8-year-old children. This was done by the use of different metacognitive trainings reinforcing different abilities involved in moral judgments (mentalising abilities, executive abilities, or no reinforcement), similar to a paradigm previously used in the field of deductive logic. Children's moral judgments were gathered before and after the training with non-verbal cartoons depicting agents whose actions differed only based on their causal role or their intention to harm. We demonstrated that a metacognitive training could induce an important shift in children's moral abilities, showing that only children who were explicitly instructed to "not focus too much" on the consequences of accidental harm preferentially weighted the agents' intentions in their moral judgments. Our findings confirm that children between the ages of 5 and 8 are sensitive to the intention of agents; however, at that age, this ability is insufficient in order to give a "mature" moral judgment. Our experiment is the first that suggests the critical role of inhibitory resources in processing accidental harm.

The article is here.

Thursday, June 2, 2016

Age-Related Differences in Moral Identity Across Adulthood

T. Krettenauer, L. A. Murua, & F. Jia
Developmental Psychology, Apr 28, 2016


In this study, age-related differences in adults’ moral identity were investigated. Moral identity was conceptualized as a context-dependent self-structure that becomes differentiated and (re)integrated in the course of development and that involves a broad range of value-orientations. Based on a cross-sectional sample of 252 participants aged 14 to 65 years (148 women, M = 33.5 years, SD = 16.9) and a modification of the Good Self-Assessment, it was demonstrated that mean-level of moral identity (averaged across the contexts of family, school/work, and community) significantly increased in the adult years, whereas cross-context differentiation showed a nonlinear trend peaking at the age of 25 years. Value-orientations that define individuals’ moral identity shifted so that self-direction and rule-conformity became more important with age. Age-related differences in moral identity were associated with, but not fully attributable to, changes in personality traits. Overall, findings suggest that moral identity development is a lifelong process that starts in adolescence but expands well into middle age.

Here is an excerpt from the Discussion section:

The finding suggests that during adolescence and emerging adulthood individuals become more aware of changing moral priorities under varying circumstances. This process of differentiation is followed by the tendency to (re)integrate value priorities so that moral identities are not only defined by the self-importance of particular values, but by their consistent importance across different areas of life. This consistency may bolster individuals' sense of agency, as moral actions may be experienced as emanating from the self rather than from demand characteristics of external circumstances. Thus, the decline in cross-context differentiation in moral identities in adulthood may indicate that agentic desires become better integrated with morality, which has been described as an important goal of moral identity development by Frimer and Walker (2009).

The article is here.

Friday, January 8, 2016

Moral Reasoning and Personal Behavior: A Meta-Analytical Review

By Villegas de Posada, Cristina; Vargas-Trujillo, Elvia
Review of General Psychology, Vol 19(4), Dec 2015, 408-424.


The meta-analysis examined the effect of moral development on 4 domains of action (real life, honesty, altruism, and resistance to conformity), and on action in general. The database, comprising 151 studies across 71 years, stemmed from a previous narrative synthesis conducted by Blasi (1980), updated with studies published up to 2013. Results showed that (a) moral development was significantly related to action in general and to each domain, (b) the effect sizes were similar for altruism, real life, and resistance to conformity, with coefficients higher than r = .20, (c) the effect size for honesty was lower than for the other 3 types of behaviors, and (d) demographic or methodological variables did not affect the association between moral development and action. Discussion centers on similarities among domains of action, perfect and imperfect duties, and the need for other constructs to account for moral action.
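As a rough illustration of how per-study correlations such as those reported above (e.g., r > .20) can be pooled in a meta-analysis, the sketch below applies a standard fixed-effect aggregation on Fisher's r-to-z scale. The study values are invented for illustration, and the actual meta-analysis may have used a different weighting scheme.

```python
import math

def pooled_r(studies):
    """studies: list of (r, n) pairs.
    Returns the inverse-variance-weighted mean correlation,
    pooling on Fisher's z scale (weight = n - 3)."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher r-to-z transform
        w = n - 3                # inverse variance of z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Three hypothetical studies: (correlation, sample size)
example = [(0.25, 120), (0.18, 80), (0.30, 60)]
print(round(pooled_r(example), 2))  # 0.24
```

Pooling on the z scale rather than averaging raw correlations is standard practice because the sampling distribution of z is approximately normal with variance 1/(n - 3), whereas that of r is skewed.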

Here is an excerpt:

Morality is essential to social life, and moral decisions and actions are the expression of this morality. They are linked to our rational ability to judge and make decisions. Although this link may seem obvious to many psychologists, it has been denied by influential scholars in psychology and philosophy, who come from different streams of a noncognitive tradition. Moral reasoning has a consistent effect on action, across domains, age, sex, and methodological approaches, an effect that cannot be minimized. This effect, on the range of medium rather than low, indicates that the strategy of promoting moral reasoning to enhance morality is a sound strategy and a way to overcome immorality and moral indifference.

The article is here.

Wednesday, October 15, 2014

Friends or foes: Is empathy necessary for moral behavior?

Jean Decety and Jason M. Cowell
Perspectives on Psychological Science
2014, Vol. 9(5), 525–537


In the past decade, a flurry of empirical and theoretical research on morality and empathy has taken place, and interest and usage in the media and the public arena have increased. At times, in both popular culture and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that although there is a relationship between morality and empathy, it is not as straightforward as apparent at first glance. Moreover, it is critical to distinguish among the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology as well as developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise on the concepts being used and, perhaps, abandoning the muddy concept of empathy.


Is Empathy a Necessary Concept?

To wrap up on a provocative note, it may be advantageous in the future for scholars interested in the science of morality to refrain from using the catch-all term of empathy, which applies to a myriad of processes and phenomena and, as a result, yields confusion in both understanding and predictive ability. In both academic and applied domains—such as medicine, ethics, law, and policy—empathy has become an enticing, but muddy, notion, potentially leading to misinterpretation. If ancient Greek philosophy has taught us anything, it is that when a concept is attributed with so many meanings, it is at risk of losing function.

The entire article is here.

Wednesday, August 6, 2014

What a Plagiarizing 12-Year-Old Has in Common With a U.S. Senator

Parents beware: Children who don't take ownership for their mistakes may grow up to be adults who create public scandals.

By Jessica Lahey
The Atlantic
Originally posted July 24, 2014

Here is an excerpt:

When Lauren told NPR that she was the first to suggest that scientists look in rivers for evidence of lionfish, she was not being honest. Worst-case scenario, she knowingly told a lie, but even if she simply misspoke, she made a mistake. That’s what children do, and when they do, the adults in their lives are tasked with turning those mistakes into learning experiences. One can only hope that in a private conversation after that NPR interview, Lauren’s father had pointed out that, actually, the original idea for her “finding” had come from another scientist, one he’d known professionally, and that maybe they should mention Jud’s work in her next interview. However, as Lauren went on to perpetuate falsehoods in subsequent interviews, the adults in Lauren’s life seem to have fallen down on their job as teachers and role models.

The entire article is here.

Thursday, July 24, 2014

‘She’s not a slag because she only had sex once’: Sexual ethics in a London secondary school

By Sarah Winkler Reid
Journal of Moral Education
Volume 43, Issue 2, 2014
Special Issue: ‘The good child’: Anthropological perspectives on morality and childhood


The premature sexualisation of young people is a source of intense public anxiety, often framed as an unprecedented crisis. Concurrently, a critical scholarship highlights problematic assumptions underpinning this discourse, including a positioning of young people as morally compromised passive subjects, and a disconnect between the reductionist framework and the complexity of young people's lived experiences. Drawing from ethnographic research in a London school, in this article I argue that by attending to the everyday lives of pupils, a more nuanced picture of moral and sexual change and continuity emerges. Using the framework of 'ordinary ethics', which identifies ethics as pervasive in speech and action, I demonstrate the multiple ways by which young people define and act according to what they consider sexually good and right. In this way the analytical focus is shifted from passivity to activity, and we can appreciate how young people today are evincing a sexual ethics of force and efficacy.

The entire article is here.

Monday, July 7, 2014

Can Classic Moral Stories Promote Honesty in Children?

Lee, K., Talwar, V., McCarthy, A., Ross, I., Evans, A., & Arruda, C. (2014). Can classic moral stories promote honesty in children? Psychological Science. DOI: 10.1177/0956797614536401


Classic moral stories have been used extensively to teach children about the consequences of lying and the virtue of honesty. Despite their widespread use, there is no evidence as to whether these stories actually promote honesty in children. This study compared the effectiveness of four classic moral stories in promoting honesty in 3- to 7-year-olds. Surprisingly, the stories of "Pinocchio" and "The Boy Who Cried Wolf" failed to reduce lying in children. In contrast, the apocryphal story of "George Washington and the Cherry Tree" significantly increased truth telling. Further results suggest that the difference in honesty-promoting effectiveness between the "George Washington" story and the other stories arose because the former emphasizes the positive consequences of honesty, whereas the latter focus on the negative consequences of dishonesty. When the "George Washington" story was altered to focus on the negative consequences of dishonesty, it too failed to promote honesty in children.

The entire article is here.

A review of the article from ScienceDaily is here.

Friday, May 30, 2014

Now The Military Is Going To Build Robots That Have Morals

By Patrick Tucker
Defense One
Originally posted May 13, 2014

Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

The entire article is here.