Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6).


Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study focuses on how intrinsically motivated employees (those who enjoy their work) might act differently towards other employees depending on their own level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to attribute higher morality to others who also have high intrinsic motivation. This leads them to offer more help and support to those similar colleagues, while judging colleagues with lower intrinsic motivation more harshly and offering them less help.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting its potential to fuel judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.
Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Thursday, February 15, 2024

The motivating effect of monetary over psychological incentives is stronger in WEIRD cultures

Medvedev, D., Davenport, D., et al.
Nat Hum Behav (2024).


Motivating effortful behaviour is a problem employers, governments and nonprofits face globally. However, most studies on motivation are done in Western, educated, industrialized, rich and democratic (WEIRD) cultures. We compared how hard people in six countries worked in response to monetary incentives versus psychological motivators, such as competing with or helping others. The advantage money had over psychological interventions was larger in the United States and the United Kingdom than in China, India, Mexico and South Africa (N = 8,133). In our last study, we randomly assigned cultural frames through language in bilingual Facebook users in India (N = 2,065). Money increased effort over a psychological treatment by 27% in Hindi and 52% in English. These findings contradict the standard economic intuition that people from poorer countries should be more driven by money. Instead, they suggest that the market mentality of exchanging time and effort for material benefits is most prominent in WEIRD cultures.

The article challenges the assumption that money universally motivates people more than other incentives. It finds that:
  • Monetary incentives were more effective than psychological interventions in WEIRD cultures (Western, Educated, Industrialized, Rich, and Democratic), like the US and UK. People in these cultures exerted more effort for money compared to social pressure or helping others.
  • In contrast, non-WEIRD cultures like China, India, Mexico, and South Africa showed a smaller advantage for money. In some cases, even social interventions like promoting cooperation were more effective than financial rewards.
  • Language can also influence the perceived value of money. In a study with bilingual Indians, those interacting in English (associated with WEIRD cultures) showed a stronger preference for money than those using Hindi.
  • These findings suggest that cultural differences play a significant role in how people respond to various motivational tools. Treating money as the universal motivator, an assumption often based on studies conducted in WEIRD cultures, may be inaccurate and less effective in diverse settings.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethical Theory and Moral Practice, 26, 361–375 (2023).


Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Wednesday, August 30, 2023

Not all skepticism is “healthy” skepticism: Theorizing accuracy- and identity-motivated skepticism toward social media misinformation

Li, J. (2023). 
New Media & Society, 0(0). 


Fostering skepticism has been seen as key to addressing misinformation on social media. This article reveals that not all skepticism is “healthy” skepticism by theorizing, measuring, and testing the effects of two types of skepticism toward social media misinformation: accuracy- and identity-motivated skepticism. A two-wave panel survey experiment shows that when people’s skepticism toward social media misinformation is driven by accuracy motivations, they are less likely to believe in congruent misinformation later encountered. They also consume more mainstream media, which in turn reinforces accuracy-motivated skepticism. In contrast, when skepticism toward social media misinformation is driven by identity motivations, people not only fall for congruent misinformation later encountered, but also disregard platform interventions that flag a post as false. Moreover, they are more likely to see social media misinformation as favoring opponents and intentionally avoid news on social media, both of which form a vicious cycle of fueling more identity-motivated skepticism.


I have made the case that it is important to distinguish between accuracy-motivated skepticism and identity-motivated skepticism. They are empirically distinguishable constructs that cast opposing effects on outcomes important for a well-functioning democracy. Across the board, accuracy-motivated skepticism produces normatively desirable outcomes. Holding a higher level of accuracy-motivated skepticism makes people less likely to believe in congruent misinformation they encounter later, offering hope that partisan motivated reasoning can be attenuated. Accuracy-motivated skepticism toward social media misinformation also has a mutually reinforcing relationship with consuming news from mainstream media, which can serve to verify information on social media and produce potential learning effects.

In contrast, not all skepticism is “healthy” skepticism. Holding a higher level of identity-motivated skepticism not only increases people’s susceptibility to congruent misinformation they encounter later, but also renders content flagging by social media platforms less effective. This is worrisome as calls for skepticism and platform content moderation have been a crucial part of recently proposed solutions to misinformation. Further, identity-motivated skepticism reinforces perceived bias of misinformation and intentional avoidance of news on social media. These can form a vicious cycle of close-mindedness and politicization of misinformation.

This article advances previous understanding of skepticism by showing that beyond the amount of questioning (the tipping point between skepticism and cynicism), the type of underlying motivation matters for whether skepticism helps people become more informed. By bringing motivated reasoning and media skepticism into the same theoretical space, this article helps us make sense of the contradictory evidence on the utility of media skepticism. Skepticism in general should not be assumed to be “healthy” for democracy. When driven by identity motivations, skepticism toward social media misinformation is counterproductive for political learning; only when skepticism toward social media is driven by the accuracy motivations does it inoculate people against favorable falsehoods and encourage consumption of credible alternatives.

Here are some additional thoughts on the research:
  • The distinction between accuracy-motivated skepticism and identity-motivated skepticism is a useful one. It helps to explain why some people are more likely to believe in misinformation than others.
  • The findings of the studies suggest that interventions that promote accuracy-motivated skepticism could be effective in reducing the spread of misinformation on social media.
  • It is important to note that the research was conducted in the United States. It is possible that the findings would be different in other countries.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).


One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.


There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.

This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., & Young, L. (2022, May 27).


Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we experimentally show that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism for the way that the observability of actions impacts virtue discounting.

We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.

Tuesday, June 28, 2022

You Think Failure Is Hard? So Is Learning From It

Eskreis-Winkler, L., & Fishbach, A. (2022).
Perspectives on Psychological Science. 


Society celebrates failure as a teachable moment. But do people actually learn from failure? Although lay wisdom suggests people should, a review of the research suggests that this is hard. We present a unifying framework that points to emotional and cognitive barriers that make learning from failure difficult. Emotions undermine learning because people find failure ego-threatening. People tend to look away from failure and not pay attention to it to protect their egos. Cognitively, people also struggle because the information in failure is less direct than the information in success and thus harder to extract. Beyond identifying barriers, this framework suggests inroads by which barriers might be addressed. Finally, we explore implications. We outline what, exactly, people miss out on when they overlook the information in failure. We find that the information in failure is often high-quality information that can be used to predict success.


From a young age, we are told that there is information in failure, and we ought to learn from it. Yet, people struggle to see the information in failure. As a result, they struggle to learn.  We present a unifying framework that identifies the emotional and cognitive barriers that make it difficult for people to learn from failure.

Understanding these barriers is especially important when one considers the information in failure. The information in failure is both rich and unique—indeed it is often richer, more informative, and more useful than the information in success.

What to do in a world where the information in failure is rich, yet people struggle to see it? One recommendation is to explore the solutions that we propose here. Remove the ego from failure, shore up the ego so it can tolerate failure, and ease the cognitive burdens of learning from failure to promote it in practice and through culture. We believe such techniques are well worth understanding and investing in, since there is so much to learn from the information in failure when we see it.

Sunday, October 31, 2021

Silenced by Fear: The Nature, Sources, and Consequences of Fear at Work

Kish-Gephart, J. J. et al. (2009)
Research in Organizational Behavior, 29, 163-193. 


In every organization, individual members have the potential to speak up about important issues, but a growing body of research suggests that they often remain silent instead, out of fear of negative personal and professional consequences. In this chapter, we draw on research from disciplines ranging from evolutionary psychology to neuroscience, sociology, and anthropology to unpack fear as a discrete emotion and to elucidate its effects on workplace silence. In doing so, we move beyond prior descriptions and categorizations of what employees fear to present a deeper understanding of the nature of fear experiences, where such fears originate, and the different types of employee silence they motivate. Our aim is to introduce new directions for future research on silence as well as to encourage further attention to the powerful and pervasive role of fear across numerous areas of theory and research on organizational behavior.


Fear, a powerful and pervasive emotion, influences human perception, cognition, and behavior in ways and to an extent that we find underappreciated in much of the organizational literature. This chapter draws from a broad range of literatures, including evolutionary psychology, neuroscience, sociology, and anthropology, to provide a fuller understanding of how fear influences silence in organizations. Our intention is to provide a foundation to inform future theorizing and research on fear’s effects in the workplace, and to elucidate why people at work fear challenging authority and thus how fear inhibits speaking up with even routine problems or suggestions for improvement.

Our review of the literature on fear generated insights with the potential to extend theory on silence in several ways.  First, we proposed that silence should be differentiated based on the intensity of fear experienced and the time available for choosing a response. Both non-deliberative, low-road silence and conscious but schema-driven silence differ from descriptions in extant literature of defensive silence as intentional, reasoned and involving an expectancy-like mental calculus. Thus, our proposed typology (in Fig. 2) suggests the need for content-specific future theory and research. For example, the description of silence as the result of extended, conscious deliberation may fit choices about whistleblowing and major issue selling well, while not explaining how individuals decide to speak up or remain silent in more routine high fear intensity or high immediacy situations. We also theorized that as a natural outcome of humans’ innate tendency to avoid the unpleasant characteristics of fear, employees may develop a type of habituated silence behavior that is largely unrecognized by them.

We expanded understanding of the antecedents of workplace silence by explaining in detail how prior (individual and societal) experiences affect the perceptions, appraisals, and outcomes of fear-based silence. Noting that the fear of challenging authority has roots in the biological mechanisms developed to aid survival in early humans, we argued that this prepared fear is continually developed and reinforced through a lifetime of experiences across most social institutions (e.g., family, school, religion) that implicitly and explicitly convey messages about authority relationships. Over time, these direct and indirect learning experiences, coupled with the characteristics of an evolutionary-based fear module, become the memories and beliefs against which current stimuli in moments of possible voice are compared.

Finally, we proposed two factors to help explain why and how certain individuals speak up to authority despite experiencing some fear of doing so. Though the deck is clearly stacked in favor of fear and silence, anger as a biologically-based emotion and voice efficacy as a learned belief in one’s ability to successfully speak up in difficult voice situations may help employees prevail over fear – in part, through their influence on the control appraisals that are central to emotional experience.

Saturday, May 8, 2021

When does empathy feel good?

Ferguson, A. M., Cameron, D., & Inzlicht, M. 
(2021, March 12). 


Empathy has many benefits. When we are willing to empathize, we are more likely to act prosocially (and receive help from others in the future), to have satisfying relationships, and to be viewed as moral actors. Moreover, empathizing in certain contexts can actually feel good, regardless of the content of the emotion itself—for example, we might feel a sense of connection after empathizing with and supporting a grieving friend. Does this feeling come from empathy itself, or from its real and implied consequences? We suggest that the rewards that flow from empathy confound our experience of it, and that the pleasant feelings associated with engaging empathy are extrinsically tied to the results of some action, not to the experience of empathy itself. When we observe people’s decisions related to empathy in the absence of these acquired rewards, as we can in experimental settings, empathy appears decidedly less pleasant.

Tuesday, January 5, 2021

Psychological selfishness

Carlson, R. W., et al. (2020, October 29).


Selfishness is central to many theories of human morality, yet its psychological nature remains largely overlooked. Psychologists often rely on classical conceptions of selfishness from economics (i.e., rational self-interest) and philosophy (i.e. psychological egoism), but such characterizations offer limited insight into the richer, motivated nature of selfishness. To address this gap, we propose a novel framework in which selfishness is recast as a psychological construction. From this view, selfishness is perceived in ourselves and others when we detect a situation-specific desire to benefit oneself that disregards others’ desires and prevailing social expectations for the situation. We argue that detecting and deterring such psychological selfishness in both oneself and others is crucial in social life—facilitating the maintenance of social cohesion and close relationships. In addition, we show how utilizing this psychological framework offers a richer understanding of the nature of human social behavior. Delineating a psychological construct of selfishness can promote coherence in interdisciplinary research on selfishness, and provide insights for interventions to prevent or remediate negative effects of selfishness.


Selfishness is a widely invoked, yet poorly defined construct in psychology. Many empirical “observations” of selfishness consist of isolated behaviors or de-contextualized motives. Here, we argued that these behaviors and motives often do not capture a psychologically meaningful form of selfishness, and we addressed this gap in the literature by offering a concrete definition and framework for studying selfishness.

Selfishness is a mentalistic concept. As such, adopting a psychological framework can deepen our understanding of its nature. In the proposed model, selfishness unfolds within rich social situations that elicit specific desires, expectations, and considerations of others. Moreover, detecting selfishness serves the overarching function of coordinating and encouraging cooperative social behavior. To detect selfishness is to perceive a desire to act in violation of salient social expectations, and an array of emotions and corrective actions tend to follow. 

Selfishness is also a morally-laden concept. In fact, it is one of the least likable qualities a person can possess (N. H. Anderson, 1968). As such, selfishness is a construct in need of proper criteria for being manipulated, measured, and applied to peoples’ actions and motives. Scientific views have long been thought to shape human norms and beliefs (Gergen, 1973; Miller, 1999).

Sunday, December 20, 2020

Choice blindness: Do you know yourself as well as you think?

David Edmonds
Originally published 3 Oct 20

Here is an excerpt:

Clearly we lack self-knowledge about our motives and choices. But so what? What are the implications of this research?

Well, perhaps one general point is that we should learn to be more tolerant of people who change their minds. We tend to have very sensitive antennae for inconsistency - be this inconsistency in a partner, who's changed their mind on whether they fancy an Italian or an Indian meal, or a politician who's backed one policy in the past and now supports an opposing position. But as we often don't have a clear insight into why we choose what we choose, we should surely be given some latitude to switch our choices.

There may also be more specific implications for how we navigate through our current era - a period in which there is growing cultural and political polarisation. It would be natural to believe that those who support a left-wing or right-wing party do so because they're committed to that party's ideology: they believe in free markets or, the opposite, in a larger role for the state. But Petter Johansson's work suggests that our deeper commitment is not to particular policies, since, using his switching technique, we can be persuaded to endorse all sorts of policies. Rather, "we support a label or a team".

That is to say, we're liable to overestimate the extent to which a Trump supporter - or a Biden supporter - backs his or her candidate because of the policies the politician promotes. Instead, someone will be Team Trump, or Team Biden. A striking example of this was in the last US election. Republicans have traditionally been pro-free trade - but when Trump began to advocate protectionist policies, most Republicans carried on backing him, without even seeming to notice the shift.

Wednesday, December 9, 2020

An evolutionary explanation for ineffective altruism

Burum, B., Nowak, M.A. & Hoffman, M. 
Nat Hum Behav (2020). 


We donate billions to charities each year, yet much of our giving is ineffective. Why are we motivated to give but not to give effectively? Building on evolutionary game theory, we argue that donors evolved (genetically or culturally) to be insensitive to efficacy because people tend not to reward efficacy, as social rewards tend to depend on well-defined and highly observable behaviours. We present five experiments testing key predictions of this account that are difficult to reconcile with alternative accounts based on cognitive or emotional limitations. Namely, we show that donors are more sensitive to efficacy when helping themselves or their families. Moreover, social rewarders don’t condition on efficacy or other difficult-to-observe behaviours, such as the amount donated.

From the Conclusion

This paper has argued that altruism in a behavioural sense is an act that benefits another person, while it is altruistically motivated when the ultimate goal of such an act is the welfare of that other. In an evolutionary sense, altruism means the sacrifice of fitness for the benefit of other organisms.

According to the evolutionary theories of altruism, behaviour which promotes the reproductive success of the receiver at the cost of the altruist is favoured by natural selection, because it is either beneficial for the altruist in the long run, or for his genes, or for the group he belongs to. Thus, in line with Trivers, it can be argued that “models that attempt to explain altruistic behaviour in terms of natural selection are models designed to take the altruism out of altruism” (Trivers 1971: 35).

Tuesday, December 8, 2020

Strategic Regulation of Empathy.

Weisz, E., & Cikara, M. (2020, October 9).


Empathy is an integral part of socio-emotional well-being, yet recent research has highlighted some of its downsides. Here we examine literature that establishes when, how much, and what aspects of empathy promote specific outcomes. After reviewing a theoretical framework which characterizes empathy as a suite of separable components, we examine evidence showing how dissociations of these components affect important socio-emotional outcomes and describe emerging evidence suggesting that these components can be independently and deliberately modulated. Finally, we advocate for a new approach to a multi-component view of empathy which accounts for the interrelations among components. This perspective advances scientific conceptualization of empathy and offers suggestions for tailoring empathy to help people realize their social, emotional, and occupational goals.

From the Conclusion

The goal of this review has been to evaluate the burgeoning literature on how components of empathy—in isolation or in concert—differentially affect key outcomes including prosocial behavior, relationship quality, occupational burnout, and negotiation. As such, an important takeaway from this review is that components of empathy can be leveraged to facilitate attainment of important goals. A second takeaway is that in order to effectively intervene on empathy in service of promoting specific outcomes, it is important to understand how these components track together (or not) in people’s everyday experiences. Relatedly, the field of empathy would benefit from thoroughly characterizing the structural and temporal relationships among these components to better understand how they work together (or in isolation) to drive key outcomes. 

Thus it seems that the time is right for the field of empathy research to enter a new wave, which explicitly examines the spontaneous separation or co-occurrence of dissociable empathy-related components, especially in behavioral—both laboratory and field—experiments. Several social neuroscience studies have indicated that this is an important aspect of empathy-related inquiry; as such, it is a promising next step for empathy-related research in more naturalistic contexts. The next wave of empathy research is in position to make incredibly important discoveries about when and for whom specific empathic components reliably predict behavioral outcomes, and to understand how empathy can be regulated to help people realize critical social, emotional and occupational goals.

Saturday, July 11, 2020

Why Do People Avoid Facts That Could Help Them?

Francesca Gino
Scientific American
Originally posted 16 June 20

In our information age, an unprecedented amount of data are right at our fingertips. We run genetic tests on our unborn children to prepare for the worst. We get regular cancer screenings and monitor our health on our wrist and our phone. And we can learn about our ancestral ties and genetic predispositions with a simple swab of saliva.

Yet there’s some information that many of us do not want to know. A study of more than 2,000 people in Germany and Spain by Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin and Rocio Garcia-Retamero of the University of Granada in Spain found that 90 percent of them would not want to find out, if they could, when their partner would die or what the cause would be. And 87 percent also reported not wanting to be aware of the date of their own death. When asked if they’d want to know if, and when, they’d get divorced, more than 86 percent said no.

Related research points to a similar conclusion: We often prefer to avoid learning information that could cause us pain. Investors are less likely to log on to their stock portfolios on days when the market is down. And one laboratory experiment found that subjects who were informed that they were rated less attractive than other participants were willing to pay money not to find out their exact rank.

More consequentially, people avoid learning certain information related to their health even if having such knowledge would allow them to identify therapies to manage their symptoms or treatment. As one study found, only 7 percent of people at high risk for Huntington’s disease elect to find out whether they have the condition, despite the availability of a genetic test that is generally paid for by health insurance plans and the clear usefulness of the information for alleviating the chronic disease’s symptoms. Similarly, participants in a laboratory experiment chose to forgo part of their earnings to avoid learning the outcome of a test for a treatable sexually transmitted disease. Such avoidance was even greater when the disease symptoms were more severe.

The info is here.

Saturday, May 2, 2020

Decision-Making Competence: More Than Intelligence?

Bruine de Bruin, W., Parker, A. M., & Fischhoff, B.
(2020). Current Directions in Psychological Science.


Decision-making competence refers to the ability to make better decisions, as defined by decision-making principles posited by models of rational choice. Historically, psychological research on decision-making has examined how well people follow these principles under carefully manipulated experimental conditions. When individual differences received attention, researchers often assumed that individuals with higher fluid intelligence would perform better. Here, we describe the development and validation of individual-differences measures of decision-making competence. Emerging findings suggest that decision-making competence may tap not only into fluid intelligence but also into motivation, emotion regulation, and experience (or crystallized intelligence). Although fluid intelligence tends to decline with age, older adults may be able to maintain decision-making competence by leveraging age-related improvements in these other skills. We discuss implications for interventions and future research.


Implications for Interventions

Better understanding of how fluid intelligence and other skills support decision-making competence should facilitate the design of interventions. Below, we briefly consider directions for future research into potential cognitive, motivational, emotional, and experiential interventions for promoting decision-making competence.

In one intervention that aimed to provide cognitive support, Zwilling and colleagues (2019) found that training in core cognitive abilities improved decision-making competence compared to an active control group (in which participants practiced processing visual information faster). Effects of cognitive training can be enhanced by high-intensity cardioresistance fitness training, which improves connectivity in the brain (Zwilling et al., 2019). Rosi, Vecchi, & Cavallini (2019) found that prompting older people to ask ‘metacognitive’ questions (e.g., what is the main information?) was more effective than general memory training for improving performance on Applying Decision Rules. This finding is in line with suggestions that older adults perform better when they are asked to explain their choices (Kim, Goldstein, Hasher, & Zacks, 2005). Additional intervention approaches have aimed to reduce the need to rely on fluid intelligence. Using simple instead of complex decision rules may decrease cognitive demands and cause fewer errors (Payne et al., 1993). Reducing the number of options also reduces cognitive demands, and may especially help older adults improve their choices (Tanius, Wood, Hanoch, & Rice, 2009).

Tuesday, March 3, 2020

It Pays to Be Yourself

Francesca Gino
Originally posted 13 Feb 20

Whether it’s trying to land a new job or a new deal or client, we often focus on making a good initial impression on people, especially when they don’t know us well or the stakes are high. One strategy people often use is to cater to the interests, preferences, and expectations of the person they want to impress. Most people, it seems, believe this is a more promising strategy than being themselves and use it in high-stakes interpersonal first meetings. But research I conducted with Ovul Sezer of the University of North Carolina at Chapel Hill and Laura Huang of Harvard Business School found those beliefs are wrong.

Our research confirmed that catering to others’ interests and expectations is quite common. When we asked over 450 employed adults to imagine they were about to have an important professional interaction — such as interviewing for their dream job, conducting a valuable negotiation for their company, pitching an entrepreneurial idea to potential investors, or making a presentation to a client — 66% of them indicated they would use catering techniques, rather than simply being themselves; 71% reported believing that catering would be the most effective approach in the situation.

But another study we conducted found that catering was much less effective than being yourself. We asked 166 entrepreneurs to participate in a “fast-pitch” competition held at a private university in the northeastern United States. Each entrepreneur presented his or her venture idea to a panel of three judges: experienced, active members of angel investment groups. The ideas pitched were all in the early stages; none had received any external financing. At the end of the event, the judges collectively deliberated to choose 10 semifinalists who would be invited to participate in the final round. After entrepreneurs made their pitches, we had them answer a few questions about their presentations. We found that when they were genuine in their pitches, they were more than three times as likely to be chosen as semifinalists as when they tried to cater to the judges.

The info is here.

Monday, January 27, 2020

The Character of Causation: Investigating the Impact of Character, Knowledge, and Desire on Causal Attributions

Justin Sytsma
(2019) Preprint


There is a growing consensus that norms matter for ordinary causal attributions. This has important implications for philosophical debates over actual causation. Many hold that theories of actual causation should coincide with ordinary causal attributions, yet those attributions often diverge from the theories when norms are involved. There remains substantive debate about why norms matter for causal attributions, however. In this paper, I consider two competing explanations—Alicke’s bias view, which holds that the impact of norms reflects systematic error (suggesting that ordinary causal attributions should be ignored in the philosophical debates), and our responsibility view, which holds that the impact of norms reflects the appropriate application of the ordinary concept of causation (suggesting that philosophical accounts are not analyzing the ordinary concept). I investigate one key difference between these views: the bias view, but not the responsibility view, predicts that “peripheral features” of the agents in causal scenarios—features that are irrelevant to appropriately assessing responsibility for an outcome, such as general character—will also impact ordinary causal attributions. These competing predictions are tested for two different types of scenarios. I find that information about an agent’s character does not impact causal attributions on its own. Rather, when character shows an effect it works through inferences to relevant features of the agent. In one scenario this involves inferences to the agent’s knowledge of the likely result of her action and her desire to bring about that result, with information about knowledge and desire each showing an independent effect on causal attributions.

From the Conclusion:

Alicke’s bias view holds that not only do features of the agent’s mental states matter, such as her knowledge and desires concerning the norm and the outcome, but also peripheral features of the agent whose impact could only reasonably be explained in terms of bias. In contrast, our responsibility view holds that the impact of norms does not reflect bias, but rather that ordinary causal attributions issue from the appropriate application of a concept with a normative component. As such, we predict that while judgments about the agent’s mental states that are relevant to adjudicating responsibility will matter, peripheral features of the agent will only matter insofar as they warrant an inference to other features of the agent that are relevant.

In line with the responsibility view and against the bias view, the results of the studies presented in this paper suggest that information relevant to assessing an agent’s character matters but only when it warrants an inference to a non-peripheral feature, such as the agent’s negligence in the situation or her knowledge and desire with regard to the outcome. Further, the results indicate that information about an agent’s knowledge and desire both impact ordinary causal attributions in the scenario tested. This raises an important methodological issue for empirical work on ordinary causal attributions: researchers need to carefully consider and control for the inferences that participants might draw concerning the agents’ mental states and motivations.

The research is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and
D. M. Søndergaard (Eds.)
Designing Robots.

This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. In particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e. do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Saturday, November 16, 2019

Moral grandstanding in public discourse: Status-seeking motives as a potential explanatory mechanism in predicting conflict

Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK
(2019) PLoS ONE 14(10): e0223749.


Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296), and a large, one-week YouGov sample matched to U.S. demographic norms (Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.


Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links various domains of psychology with moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Within the present work, we focused on the motivation to engage in MG. Specifically, MG Motivation is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors seem to be consistent with the construct of status-seeking more broadly, seeming to represent prestige and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization. These results were consistently replicated in samples of U.S. undergraduates, nationally representative cross-sectional samples of U.S. residents, and longitudinal studies of adults in the U.S. Collectively, these results suggest that MG Motivation is a useful psychological phenomenon that has potential to aid our understanding of the intraindividual mechanisms driving caustic public discourse.

Friday, July 26, 2019

Dark Pathways to Achievement in Science: Researchers’ Achievement Goals Predict Engagement in Questionable Research Practices

Janke, S., Daumiller, M., & Rudert, S. C. (2019).
Social Psychological and Personality Science, 10(6), 783–791.


Questionable research practices (QRPs) are a strongly debated topic in the scientific community. Hypotheses about the relationship between individual differences and QRPs are plentiful but have rarely been empirically tested. Here, we investigate whether researchers’ personal motivation (expressed by achievement goals) is associated with self-reported engagement in QRPs within a sample of 217 psychology researchers. Appearance approach goals (striving for skill demonstration) positively predicted engagement in QRPs, while learning approach goals (striving for skill development) were a negative predictor. These effects remained stable when also considering Machiavellianism, narcissism, and psychopathy in a latent multiple regression model. Additional moderation analyses revealed that the more researchers favored publishing over scientific rigor, the stronger the association between appearance approach goals and engagement in QRPs. The findings deliver first insights into the nature of the relationship between personal motivation and scientific malpractice.

The research can be found here.