Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, October 31, 2021

Silenced by Fear: The Nature, Sources, and Consequences of Fear at Work

Kish-Gephart, J. J. et al. (2009)
Research in Organizational Behavior, 29, 163-193. 


In every organization, individual members have the potential to speak up about important issues, but a growing body of research suggests that they often remain silent instead, out of fear of negative personal and professional consequences. In this chapter, we draw on research from disciplines ranging from evolutionary psychology to neuroscience, sociology, and anthropology to unpack fear as a discrete emotion and to elucidate its effects on workplace silence. In doing so, we move beyond prior descriptions and categorizations of what employees fear to present a deeper understanding of the nature of fear experiences, where such fears originate, and the different types of employee silence they motivate. Our aim is to introduce new directions for future research on silence as well as to encourage further attention to the powerful and pervasive role of fear across numerous areas of theory and research on organizational behavior.


Fear, a powerful and pervasive emotion, influences human perception, cognition, and behavior in ways and to an extent that we find underappreciated in much of the organizational literature. This chapter draws from a broad range of literatures, including evolutionary psychology, neuroscience, sociology, and anthropology, to provide a fuller understanding of how fear influences silence in organizations. Our intention is to provide a foundation to inform future theorizing and research on fear’s effects in the workplace, and to elucidate why people at work fear challenging authority and thus how fear inhibits speaking up with even routine problems or suggestions for improvement.

Our review of the literature on fear generated insights with the potential to extend theory on silence in several ways. First, we proposed that silence should be differentiated based on the intensity of fear experienced and the time available for choosing a response. Both non-deliberative, low-road silence and conscious but schema-driven silence differ from descriptions in the extant literature of defensive silence as intentional, reasoned, and involving an expectancy-like mental calculus. Thus, our proposed typology (in Fig. 2) suggests the need for context-specific future theory and research. For example, the description of silence as the result of extended, conscious deliberation may fit choices about whistleblowing and major issue selling well, while not explaining how individuals decide to speak up or remain silent in more routine, high-fear-intensity, or high-immediacy situations. We also theorized that, as a natural outcome of humans’ innate tendency to avoid the unpleasant characteristics of fear, employees may develop a type of habituated silence behavior that is largely unrecognized by them.

We expanded understanding of the antecedents of workplace silence by explaining in detail how prior (individual and societal) experiences affect the perceptions, appraisals, and outcomes of fear-based silence. Noting that the fear of challenging authority has roots in the biological mechanisms developed to aid survival in early humans, we argued that this prepared fear is continually developed and reinforced through a lifetime of experiences across most social institutions (e.g., family, school, religion) that implicitly and explicitly convey messages about authority relationships. Over time, these direct and indirect learning experiences, coupled with the characteristics of an evolutionary-based fear module, become the memories and beliefs against which current stimuli in moments of possible voice are compared.

Finally, we proposed two factors to help explain why and how certain individuals speak up to authority despite experiencing some fear of doing so. Though the deck is clearly stacked in favor of fear and silence, anger as a biologically-based emotion and voice efficacy as a learned belief in one’s ability to successfully speak up in difficult voice situations may help employees prevail over fear – in part, through their influence on the control appraisals that are central to emotional experience.

Saturday, October 30, 2021

Psychological barriers to effective altruism: An evolutionary perspective

Jaeger, B. & van Vugt, M.
Current Opinion in Psychology
Available online 17 September 2021


People usually engage in (or at least profess to engage in) altruistic acts to benefit others. Yet, they routinely fail to maximize how much good is achieved with their donated money and time. An accumulating body of research has uncovered various psychological factors that can explain why people’s altruism tends to be ineffective. These prior studies have mostly focused on proximate explanations (e.g., emotions, preferences, lay beliefs). Here, we adopt an evolutionary perspective and highlight how three fundamental motives—parochialism, status, and conformity—can explain many seemingly disparate failures to do good effectively. Our approach outlines ultimate explanations for ineffective altruism and we illustrate how fundamental motives can be leveraged to promote more effective giving.

Summary and Implications

Even though donors and charities often highlight their desire to make a difference in the lives of others, an accumulating body of research demonstrates that altruistic acts are surprisingly ineffective in maximizing others’ welfare. To explain ineffective altruism, previous investigations have largely focused on the role of emotions, beliefs, preferences, and other proximate causes. Here, we adopted an evolutionary perspective to understand why these proximate mechanisms evolved in the first place. We outlined how three fundamental motives that likely evolved because they helped solve key challenges in humans’ ancestral past—parochialism, status, and conformity—can create psychological barriers to effective giving. Our framework not only provides a parsimonious explanation for many proximate causes of ineffective giving, it also provides an ultimate explanation for why these mechanisms exist.

Although parochialism, status concerns, and conformity can explain many forms of ineffective giving, there are additional causes that we did not address here. For example, many people focus too much on overhead costs when deciding where to donate. Everyday altruism is multi-faceted: People donate to charity, volunteer, give in church, and engage in various random acts of kindness. These diverse acts of altruism likely require diverse explanations, and more research is needed to understand the relative importance of different psychological factors for explaining different forms of altruism. Moreover, of the three fundamental motives reviewed here, conformity to social norms has probably received the least attention when it comes to explaining ineffective altruism. While there is ample evidence showing that social norms affect the decision of whether and how much to donate, more research is needed to understand how social norms influence the decision of where to donate and how they can lead to ineffective giving.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021


This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.


In this essay, I explored several potential economic, political and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, it can harm competition, consumer privacy, and consumer choice; it may excessively automate work, fuel inequality, inefficiently push down wages, and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies, and the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Thursday, October 28, 2021

Vicarious Justifications for Prejudice in the Application of Democratic Values

White MH, Crandall CS, & Davis NT.
September 2021. 


Democratic values are widely-endorsed principles including commitments to protect individual freedoms. Paradoxically, the widespread normativity of these ideas can be used to justify prejudice. With two nationally representative U.S. samples, we find that prejudiced respondents defend another’s prejudiced speech, using democratic values as justification. This vicarious defense occurs primarily among those who share the prejudice, and only when the relevant prejudice is expressed. Several different democratic values (e.g., due process, double jeopardy) can serve as justifications: the issue is more about when something can be used as a justification for prejudice and less about what can be used as one. Endorsing democratic values can be a common rhetorical device to expand what is acceptable, and protect what is otherwise unacceptable to express in public.


Theories of values conceptualize them as principles that guide behavior (Maio, 2010; Schwartz, 2012). But they can be bent to the will of strongly-held attitudes like prejudice. Values are not always guiding principles that transcend situations; they can be normatively-loaded rhetorical devices used to contest what is and is not acceptable to express. Although democratic values are held in high regard in the abstract, citizens may accept norm violations when political outcomes suit them (Graham & Svolik, 2020). We find that individuals strategically apply these values to situations when doing so justifies shared prejudices.

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021


Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people’s judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 


I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits have a particularly high proclivity to unexpected disturbances, as exemplified by the co-operation case, amplified by their self-reinforcing and irreversible nature, and as their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular decreasing the risk of human extinction, it may well be the case that we should still attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Monday, October 25, 2021

Federal Reserve tightens ethics rules to ban active trading by senior officials

Brian Cheung
Yahoo Business News
Originally posted 21 Oct 21

The Federal Reserve on Thursday said it will tighten its ethics rules concerning personal finances among its most senior officials, the latest development in a trading scandal that has led to the resignation of two policymakers.

The central bank said it has introduced a “broad set of new rules” that restricts any active trading and prohibits the purchase of any individual securities (i.e. stocks, bonds, or derivatives). The new restrictions effectively only allow purchases of diversified investment vehicles like mutual funds.

If policymakers want to make any purchases or sales, they will be required to provide 45 days of advance notice and obtain prior approval for any purchases and sales. Those officials will also be required to hold onto those investments for at least one year, with no purchases or sales allowed during periods of “heightened financial market stress.”

Fed officials are still working on the details of what would define that level of stress, but said the market conditions of spring 2020 would have qualified.

The new rules will also increase the frequency of public disclosures from the reserve bank presidents, requiring monthly filings instead of the status quo of annual filings. Those at the Federal Reserve Board in Washington already were required to make monthly disclosures.

The restrictions apply to policymakers and senior staff at the Fed’s headquarters in Washington, as well as its 12 Federal Reserve Bank regional outposts. The new rules will be implemented “over the coming months.”

Fed officials said changes will likely require divestments from any existing holdings that do not meet the updated standards.

Sunday, October 24, 2021

Evaluating Tradeoffs between Autonomy and Wellbeing in Supported Decision Making

Veit, W., Earp, B.D., Browning, H., Savulescu, J.
American Journal of Bioethics 

A core challenge for contemporary bioethics is how to address the tension between respecting an individual’s autonomy and promoting their wellbeing when these ideals seem to come into conflict (Notini et al. 2020). This tension is often reflected in discussions of the ethical status of guardianship and other surrogate decision-making regimes for individuals with different kinds or degrees of cognitive ability and (hence) decision-making capacity (Earp and Grunt-Mejer 2021), specifically when these capacities are regarded as diminished or impaired along certain dimensions (or with respect to certain domains). The notion or practice of guardianship, wherein a guardian is legally appointed to make decisions on behalf of someone with different/diminished capacities, has been particularly controversial. For example, many people see guardianship as unjust, taking too much decisional authority away from the person under the guardian’s care (often due to prejudiced attitudes, as when people with certain disabilities are wrongly assumed to lack decision-making capacity); and as too rigid, for example, in making a blanket judgment about someone’s (lack of) capacity, thereby preventing them from making decisions even in areas where they have the requisite abilities (Glen 2015).

It is against this backdrop that Peterson, Karlawish, and Largent (2021) offer a useful philosophical framework for the notion of ‘supported decision-making’ as a compelling alternative for individuals with ‘dynamic impairments’ (i.e., non-static or domain-variant perceived impairments in decision-making capacity). In a similar spirit, we have previously argued that bioethics would benefit from a more case-sensitive rather than a ‘one-size-fits-all’ approach when it comes to issues of cognitive diversity (Veit et al. 2020; Chapman and Veit 2020). We therefore agree with most of the authors’ defence of supported decision-making, as this approach allows for case- and context-sensitivity. We also agree with the authors that the categorical condemnation of guardianships or similar arrangements is not justified, as this precludes such sensitivity. For instance, as the authors note, if a patient is in a permanent unaware/unresponsive state – i.e., with no current or foreseeable decision-making capacity or ability to exercise autonomy – then a guardianship-like regime may be the most appropriate means of promoting this person’s interests. A similar point can be made in relation to debates about intended human enhancement of embryos and children. Although some critics claim that such interventions violate the autonomy of the enhanced person, proponents may argue that respect for autonomy and consent do not apply in certain cases, for example, when dealing with embryos (see Veit 2018); alternatively, they may argue that interventions to enhance the (future) autonomy of a currently pre-autonomous (or partially autonomous) being can be justified on an enhancement framework without falling prey to such objections (see Earp 2019, Maslen et al. 2014).

Saturday, October 23, 2021

Decision fatigue: Why it’s so hard to make up your mind these days, and how to make it easier

Stacy Colino
The Washington Post
Originally posted 22 Sept 21

Here is an excerpt:

Decision fatigue is more than just a feeling; it stems in part from changes in brain function. Research using functional magnetic resonance imaging has shown that there’s a sweet spot for brain function when it comes to making choices: When people were asked to choose from sets of six, 12 or 24 items, activity was highest in the striatum and the anterior cingulate cortex — both of which coordinate various aspects of cognition, including decision-making and impulse control — when the people faced 12 choices, which was perceived as “the right amount.”

Decision fatigue may make it harder to exercise self-control when it comes to eating, drinking, exercising or shopping. “Depleted people become more passive, which becomes bad for their decision-making,” says Roy Baumeister, a professor of psychology at the University of Queensland in Australia and author of  “Willpower: Rediscovering the Greatest Human Strength.” “They can be more impulsive. They may feel emotions more strongly. And they’re more susceptible to bias and more likely to postpone decision-making.”

In laboratory studies, researchers asked people to choose from an array of consumer goods or college course options or to simply think about the same options without making choices. They found that the choice-makers later experienced reduced self-control, including less physical stamina, greater procrastination and lower performance on tasks involving math calculations; the choice-contemplators didn’t experience these depletions.

Having insufficient information about the choices at hand may influence people’s susceptibility to decision fatigue. Experiencing high levels of stress and general fatigue can, too, Bufka says. And if you believe that the choices you make say something about who you are as a person, that can ratchet up the pressure, increasing your chances of being vulnerable to decision fatigue.

The suggestions include:

1. Sleep well
2. Make some choices automatic
3. Enlist a choice advisor
4. Give expectations a reality check
5. Pace yourself
6. Pay attention to feelings

Friday, October 22, 2021

A Meta-Analytic Investigation of the Antecedents, Theoretical Correlates, and Consequences of Moral Disengagement at Work

Ogunfowora, B. T., et al. (2021)
The Journal of Applied Psychology
Advance online publication. 


Moral disengagement refers to a set of cognitive tactics people employ to sidestep moral self-regulatory processes that normally prevent wrongdoing. In this study, we present a comprehensive meta-analytic review of the nomological network of moral disengagement at work. First, we test its dispositional and contextual antecedents, theoretical correlates, and consequences, including ethics (workplace misconduct and organizational citizenship behaviors [OCBs]) and non-ethics outcomes (turnover intentions and task performance). Second, we examine Bandura's postulation that moral disengagement fosters misconduct by diminishing moral cognitions (moral awareness and moral judgment) and anticipatory moral self-condemning emotions (guilt). We also test a contrarian view that moral disengagement is limited in its capacity to effectively curtail moral emotions after wrongdoing. The results show that Honesty-Humility, guilt proneness, moral identity, trait empathy, conscientiousness, idealism, and relativism are key individual antecedents. Further, abusive supervision and perceived organizational politics are strong contextual enablers of moral disengagement, while ethical leadership and organizational justice are relatively weak deterrents. We also found that narcissism, Machiavellianism, psychopathy, and psychological entitlement are key theoretical correlates, although moral disengagement shows incremental validity over these "dark" traits. Next, moral disengagement was positively associated with workplace misconduct and turnover intentions, and negatively related to OCBs and task performance. Its positive impact on misconduct was mediated by lower moral awareness, moral judgment, and anticipated guilt. Interestingly, however, moral disengagement was positively related to guilt and shame post-misconduct. In sum, we find strong cumulative evidence for the pertinence of moral disengagement in the workplace.

From the Discussion

Our moderator analyses reveal several noteworthy findings. First, the relationship between moral disengagement and misconduct did not significantly differ depending on whether it is operationalized as a trait or state. This suggests that the impact of moral disengagement – at least with respect to workplace misconduct – is equally devastating when it is triggered in specific situations or when it is captured as a stable propensity. This provides initial support for conceptualizing moral disengagement along a continuum – from “one off” instances in specific contexts (i.e., state moral disengagement) to a “dynamic disposition” (Bandura, 1999b) that is relatively stable, but which may also shift in response to different situations (Moore et al., 2019).  

Second, there may be utility in exploring specific disengagement tactics. For instance, euphemistic labeling exerted stronger effects on misconduct compared to moral justification and diffusion of responsibility. Relative weight analyses further showed that some tactics contribute more to understanding misconduct and OCBs. Scholars have proposed that exploring moral disengagement tactics that match the specific context may offer new insights (Kish-Gephart et al., 2014; Moore et al., 2019). It is possible that moral justification might be critical in situations where participants must conjure up rationales to justify their misdeeds (Duffy et al., 2005), while diffusion of responsibility might matter more in team settings where morally disengaging employees can easily assign blame to the collective (Alnuaimi et al., 2010). These possibilities suggest that specific disengagement tactics may offer novel theoretical insights that may be overlooked when scholars focus on overall moral disengagement. However, we acknowledge that this conclusion is preliminary given the small number of studies available for these analyses. 

Thursday, October 21, 2021

How Disgust Affects Social Judgments

Inbar, Y., & Pizarro, D.
(2021, September 7). 


The emotion of disgust has been claimed to affect a diverse array of social judgments, including moral condemnation, inter-group prejudice, political ideology, and much more. We attempt to make sense of this large and varied literature by reviewing the theory and research on how and why disgust influences these judgments. We first describe two very different perspectives adopted by researchers on why disgust should affect social judgment. The first is the pathogen-avoidance account, which sees the relationship between disgust and judgment as resulting from disgust’s evolved function as a pathogen-avoidance mechanism. The second is the extended disgust account, which posits that disgust functions much more broadly to address a range of other threats and challenges. We then review the empirical evidence to assess how well it supports each of these perspectives, arguing that there is more support for the pathogen-avoidance account than the extended account. We conclude with some testable empirical predictions that can better distinguish between these two perspectives.


We have described two very different perspectives on disgust that posit very different explanations for its role in social judgments. In our view, the evidence currently supports the pathogen-avoidance account over the extended-disgust alternative, but the question is best settled by future research explicitly designed to differentiate the two perspectives.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Tuesday, October 19, 2021

Why Empathy Is Not a Reliable Source of Information in Moral Decision Making

Decety, J. (2021).
Current Directions in Psychological Science. 


Although empathy drives prosocial behaviors, it is not always a reliable source of information in moral decision making. In this essay, I integrate evolutionary theory, behavioral economics, psychology, and social neuroscience to demonstrate why and how empathy is unconsciously and rapidly modulated by various social signals and situational factors. This theoretical framework explains why decision making that relies solely on empathy is not ideal and can, at times, erode ethical values. This perspective has social and societal implications and can be used to reduce cognitive biases and guide moral decisions.

From the Conclusion

Empathy can encourage overvaluing some people and ignoring others, and privileging one over many. Reasoning is therefore essential to filter and evaluate emotional responses that guide moral decisions. Understanding the ultimate causes and proximate mechanisms of empathy allows characterization of the kinds of signals that are prioritized and identification of situational factors that exacerbate empathic failure. Together, this knowledge is useful at a theoretical level, and additionally provides practical information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with current societal aspirations. This conceptual framework advances current understanding of the role of empathy in moral decision making and may inform efforts to correct personal biases. Becoming aware of one’s biases is not the most effective way to manage and mitigate them, but empathy is not something that can be ignored. It has an adaptive biological function, after all.

Monday, October 18, 2021

Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots

M. Asada
Philosophies 2019, 4(3), 38


In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed.


To tackle the issue of consciousness, this study attempted to represent it as a phenomenon of the developmental process of artificial empathy for pain and of moral behavior generation. A conceptual model is proposed for the former, while the latter remains, for now, a story of fantasy. If a robot is regarded as a moral being capable of exhibiting moral behavior toward others, does it deserve to receive moral behavior from them? If so, can we agree that such robots have conscious minds? This is an issue of ethics toward robots, and it is also related to the legal system. Can we ask such robots to accept a degree of responsibility for any accident they cause? If so, how? These issues arise when we introduce robots that qualify as moral beings with conscious minds into our society.

Before these issues can be considered, many technical problems must be addressed. Among them, the following deserve intensive attention:
  1. Associate the sensory discrimination of pain with the affective and motivational responses to pain (the construction of the pain matrix and memory dynamics).
  2. Recall the experience when a painful situation of others is observed.
  3. Generate appropriate behavior to reduce the pain.

Sunday, October 17, 2021

The Cognitive Science of Technology

D. Stout
Trends in Cognitive Sciences
Available online 4 August 2021


Technology is central to human life but hard to define and study. This review synthesizes advances in fields from anthropology to evolutionary biology and neuroscience to propose an interdisciplinary cognitive science of technology. The foundation of this effort is an evolutionarily motivated definition of technology that highlights three key features: material production, social collaboration, and cultural reproduction. This broad scope respects the complexity of the subject but poses a challenge for theoretical unification. Addressing this challenge requires a comparative approach to reduce the diversity of real-world technological cognition to a smaller number of recurring processes and relationships. To this end, a synthetic perceptual-motor hypothesis (PMH) for the evolutionary–developmental–cultural construction of technological cognition is advanced as an initial target for investigation.

  • Evolutionary theory and paleoanthropological/archaeological evidence motivate a theoretical definition of technology as socially reproduced and elaborated behavior involving the manipulation and modification of objects to enact changes in the physical environment.
  • This definition helps to resolve or obviate ongoing controversies in the anthropological, neuroscientific, and psychological literature relevant to technology.
  • A review of evidence from across these disciplines reveals that real-world technologies are diverse in detail but unified by the underlying demands and dynamics of material production. This creates opportunities for meaningful synthesis using a comparative method.
  • A ‘perceptual‐motor hypothesis’ proposes that technological cognition is constructed on biocultural evolutionary and developmental time scales from ancient primate systems for sensorimotor prediction and control.

Saturday, October 16, 2021

Social identity shapes antecedents and functional outcomes of moral emotion expression in online networks

Brady, W. J., & Van Bavel, J. J. 
(2021, April 2). 


As social interactions increasingly occur through social media platforms, intergroup affective phenomena such as “outrage firestorms” and “cancel culture” have emerged with notable consequences for society. In this research, we examine how social identity shapes the antecedents and functional outcomes of moral emotion expression online. Across four pre-registered experiments (N = 1,712), we find robust evidence that the inclusion of moral-emotional expressions in political messages has a causal influence on intentions to share the messages on social media. We find that individual differences in the strength of partisan identification are a consistent predictor of sharing messages with moral-emotional expressions, but little evidence that brief manipulations of identity salience increased sharing. Negative moral emotion expression in social media messages also causes the message author to be perceived as more strongly identified among their partisan ingroup, but less open-minded and less worthy of conversation to outgroup members. These experiments highlight the role of social identity in affective phenomena in the digital age, and showcase how moral emotion expressions in online networks can serve ingroup reputation functions while at the same time hindering discourse between political groups.


In the context of contentious political conversations online, moral-emotional language causes political partisans to share a message more often, and this effect is strongest among strong group identifiers. Expressing negative moral-emotional language in social media messages makes the message author appear more strongly identified with their group, but also makes outgroup members regard the author as less open-minded and less worthy of conversation. This work sheds light on the antecedents and functional outcomes of moral-emotion expression in the digital age, which is becoming increasingly important to study as intergroup affective phenomena such as viral outrage and affective polarization reach historic levels.

Friday, October 15, 2021

The Ethics of Sex Robots

Sterri, A. B., & Earp, B. D. (in press).
In C. Véliz (ed.), The Oxford Handbook of 
Digital Ethics. Oxford:  Oxford University Press.


What, if anything, is wrong with having sex with a robot? For the sake of this chapter, the authors will assume that sexbots are ‘mere’ machines that are reliably identifiable as such, despite their human-like appearance and behaviour. Under these stipulations, sexbots themselves can no more be harmed, morally speaking, than your dishwasher. However, there may still be something wrong about the production, distribution, and use of such sexbots. In this chapter, the authors examine whether sex with robots is intrinsically or instrumentally wrong and critically assess different regulatory responses. They defend a harm reduction approach to sexbot regulation, analogous to the approach that has been considered in other areas concerning, for example, drugs and sex work.


Even if sexbots never become sentient, we have good reasons to be concerned with their production, distribution, and use. Our seemingly private activities have social meanings that we do not necessarily intend, but which can be harmful to others. Sex can both be beautiful and valuable—and ugly or profoundly harmful. We therefore need strong ethical norms to guide human sexual behaviour, regardless of the existence of sexbots. Interaction with new technologies could plausibly improve our sexual relationships, or make things worse (see Nyholm et al. forthcoming, for a theoretical overview). In this chapter, we have explored some ways in which a harm reduction framework may have the potential to bring about the alleged benefits of sexbots with a minimum of associated harms. But whatever approach is taken, the goal should be to ensure that our relationships with robots conduce to, rather than detract from, the equitable flourishing of our fellow human beings.

Thursday, October 14, 2021

A Minimal Turing Test

McCoy, J. P., and Ullman, T.D.
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 1-8


We introduce the Minimal Turing Test, an experimental paradigm for studying perceptions and meta-perceptions of different social groups or kinds of agents, in which participants must use a single word to convince a judge of their identity. We illustrate the paradigm by having participants act as contestants or judges in a Minimal Turing Test in which contestants must convince a judge they are a human, rather than an artificial intelligence. We embed the production data from such a large-scale Minimal Turing Test in a semantic vector space, and construct an ordering over pairwise evaluations from judges. This allows us to identify the semantic structure in the words that people give, and to obtain quantitative measures of the importance that people place on different attributes. Ratings from independent coders of the production data provide additional evidence for the agency and experience dimensions discovered in previous work on mind perception. We use the theory of Rational Speech Acts as a framework for interpreting the behavior of contestants and judges in the Minimal Turing Test.
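The abstract's phrase "construct an ordering over pairwise evaluations from judges" describes a standard ranking problem. As an illustration only (this is not the authors' code, and the words and win counts below are hypothetical), pairwise judge preferences can be turned into an ordering with a simple Bradley–Terry model:

```python
# Minimal sketch: order candidate words by pairwise judge preferences
# using iterative Bradley-Terry (MM) updates. Words and counts are toy data.

def bradley_terry(items, wins, iters=200):
    """wins[(a, b)] = number of times judges preferred word a over word b."""
    strength = {w: 1.0 for w in items}
    for _ in range(iters):
        new = {}
        for i in items:
            num, den = 0.0, 0.0
            for j in items:
                if i == j:
                    continue
                w_ij = wins.get((i, j), 0)
                w_ji = wins.get((j, i), 0)
                n = w_ij + w_ji
                if n == 0:
                    continue
                num += w_ij                         # wins for i
                den += n / (strength[i] + strength[j])
            new[i] = num / den if den > 0 else strength[i]
        strength = new
    return sorted(items, key=strength.get, reverse=True)

words = ["love", "banana", "robot"]
wins = {("love", "robot"): 9, ("robot", "love"): 1,
        ("love", "banana"): 6, ("banana", "love"): 4,
        ("banana", "robot"): 8, ("robot", "banana"): 2}
print(bradley_terry(words, wins))  # "love" ranked first, "robot" last
```

The fitted strengths give each word a latent "convincingness," so sparse pairwise judgments yield a full ordering without every word being compared to every other.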

Wednesday, October 13, 2021

Supernatural Explanations Across the Globe Are More Common for Natural Than Social Phenomena

Jackson, J. C., et al.
(2021, September 2).


Supernatural beliefs are common in every human society, and people frequently invoke the supernatural to explain natural (e.g., storms, disease outbreaks) and social (e.g., murder, warfare) events. However, evolutionary and psychological theories of religion raise competing hypotheses about whether supernatural explanations should more commonly focus on natural or social phenomena. Here we test these hypotheses with a global analysis of supernatural explanations in 109 geographically and culturally diverse societies. We find that supernatural explanations are more prevalent for natural phenomena than for social phenomena, an effect that generalizes across regions and subsistence styles and cannot be reduced to the frequency of natural vs. social phenomena or common cultural ancestry. We also find that supernatural explanations of social phenomena only occur in societies that also have supernatural explanations of natural phenomena. This evidence is consistent with theories that ground the origin of supernatural belief in a human tendency to perceive intent and agency in nature.


Religious beliefs are prevalent in virtually every human society, and may even predate anatomically modern humans. The widespread prevalence of supernatural explanations suggests that explanation is a core property of religious beliefs, and humans may have long used religious beliefs to explain aspects of their natural and social worlds. However, there has never been a worldwide survey of supernatural explanations, which has been a barrier to understanding the most frequent ways that people use religious belief as a tool for explanation. 

We use a global analysis of societies in the ethnographic record to show that humans are more likely to use supernatural explanations to explain natural phenomena versus social phenomena. Across all world regions and subsistence styles, societies were more likely to attribute natural events like famine and disease to supernatural causes compared to social events such as warfare and murder. This prevalence gap could not be explained by the frequency of phenomena in our analysis (i.e., that disease outbreaks occurred more frequently than warfare).

Tuesday, October 12, 2021

Demand five precepts to aid social-media watchdogs

Ethan Zuckerman
Nature 597, 9 (2021)
Originally published 31 Aug 21

Here is an excerpt:

I propose the following. First, give researchers access to the same targeting tools that platforms offer to advertisers and commercial partners. Second, for publicly viewable content, allow researchers to combine and share data sets by supplying keys to application programming interfaces. Third, explicitly allow users to donate data about their online behaviour for research, and make code used for such studies publicly reviewable for security flaws. Fourth, create safe-haven protections that recognize the public interest. Fifth, mandate regular audits of algorithms that moderate content and serve ads.

In the United States, the FTC could demand this access on behalf of consumers: it has broad powers to compel the release of data. In Europe, making such demands should be even more straightforward. The European Data Governance Act, proposed in November 2020, advances the concept of “data altruism” that allows users to donate their data, and the broader Digital Services Act includes a potential framework to implement protections for research in the public interest.

Technology companies argue that they must restrict data access because of the potential for harm, which also conveniently insulates them from criticism and scrutiny. They cite misuse of data, such as in the Cambridge Analytica scandal (which came to light in 2018 and prompted the FTC orders), in which an academic researcher took data from tens of millions of Facebook users collected through online ‘personality tests’ and gave it to a UK political consultancy that worked on behalf of Donald Trump and the Brexit campaign. Another example of abuse of data is the case of Clearview AI, which used scraping to produce a huge photographic database to allow federal and state law-enforcement agencies to identify individuals.

These incidents have led tech companies to design systems to prevent misuse — but such systems also prevent research necessary for oversight and scrutiny. To ensure that platforms act fairly and benefit society, there must be ways to protect user data and allow independent oversight.

Monday, October 11, 2021

Good deeds and hard knocks: The effect of past suffering on praise for moral behavior

P. Robbins, F. Alvera, & P. Litton
Journal of Experimental Social Psychology
Volume 97, November 2021


Are judgments of praise for moral behavior modulated by knowledge of an agent's past suffering at the hands of others, and if so, in what direction? Drawing on multiple lines of research in experimental social psychology, we identify three hypotheses about the psychology of praise — typecasting, handicapping, and non-historicism — each of which supports a different answer to the question above. Typecasting predicts that information about past suffering will augment perceived patiency and thereby diminish perceived agency, making altruistic actions seem less praiseworthy; handicapping predicts that this information will make altruistic actions seem more effortful, and hence more praiseworthy; and non-historicism predicts that judgments of praise will be insensitive to information about an agent's experiential history. We report the results of two studies suggesting that altruistic behavior tends to attract more praise when the experiential history of the agent involves coping with adversity in childhood rather than enjoying prosperity (Study 1, N = 348, p = .03, d = 0.45; Study 2, N = 400, p = .02, d = 0.39), as well as the results of a third study suggesting that altruistic behavior tends to be evaluated more favorably when the experiential history of the agent includes coping with adversity than in the absence of information about the agent's past experience (N = 226, p = .002). This pattern of results, we argue, is more consistent with handicapping than typecasting or non-historicism.
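The abstract reports standardized effect sizes (e.g., d = 0.45 in Study 1). For readers unfamiliar with that statistic, here is how a between-groups Cohen's d is computed from two samples; the praise ratings below are toy numbers for illustration, not the study's data:

```python
# Illustrative only: computing a between-groups Cohen's d
# (standardized mean difference using the pooled standard deviation).
import statistics

def cohens_d(group_a, group_b):
    """(mean_a - mean_b) / pooled SD, with Bessel-corrected variances."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.fmean(group_a), statistics.fmean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

# Hypothetical praise ratings for agents with adverse vs. prosperous histories
adversity = [5.1, 4.8, 5.6, 5.0, 4.9, 5.4]
prosperity = [4.6, 4.9, 4.5, 5.0, 4.4, 4.7]
print(cohens_d(adversity, prosperity))  # positive: adversity group praised more
```

A positive d means the adversity-history condition drew more praise on average; values near 0.4 are conventionally read as small-to-medium effects.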

From the Discussion

One possibility is that a history of suffering is perceived as depleting the psychological resources required for acting morally, making it difficult for someone to shift attention from their own needs to the needs of others. This is suggested by the stereotype of people who have suffered hardships in early life, especially at the hands of caregivers, which includes a tendency to be socially anxious, insecure, and withdrawn — a stereotype which may have some basis in fact (Elliott, Cunningham, Linder, Colangelo, & Gross, 2005). A history of suffering, that is, might seem like an obstacle to developing the kind of social mindedness exemplified by acts of altruism and other forms of prosocial behavior, which are typically motivated by feelings of compassion or empathic concern. This is an open empirical question, worthy of investigation not just in connection with handicapping and typecasting (and historicist accounts of praise more generally) but in its own right.

This research may have implications for psychotherapy.

Sunday, October 10, 2021

Oppressive Double Binds

S. Hirji
Ethics, Vol 131, 4.
July 2021


I give an account of the structure of “oppressive double binds,” the double binds that exist in virtue of oppression. I explain how these double binds both are a product of and serve to reinforce oppressive structures. The central feature of double binds, I argue, is that an agent’s own prudential good is bound up with their ability to resist oppression; double binds are choice situations where no matter what an agent does, they become a mechanism in their own oppression. A consequence is that double binds constrain an individual’s agency while leaving various dimensions of their autonomy fully intact.

In the concluding remarks

To sum up: I have had three overarching goals in this article. The first has been to vindicate Frye’s point that once we properly understand the structure of double binds, we see how they differ from ordinary restrictions on an individual’s options and how they serve to immobilize and reduce members of certain groups. As Frye insists, understanding this difference between mechanisms of oppression and ordinary restrictions on our options is a crucial part of identifying and challenging oppressive structures. The second goal has been to develop and refine the concept of a double bind so that it can be useful in theorizing about oppression. I have argued that double binds are choice situations in which a member of an oppressed group is forced to choose between cooperating with and resisting some oppressive norm, and because of the way their own prudential good is bound up with their ability to resist oppression, they end up to some degree reinforcing their own oppression no matter what they do. The third goal has been to better understand what I call “imperfect choices”—choices where, no matter what an agent does, they undermine the very interest at stake in their choice. I have argued that “imperfect choices” constrain an individual’s agency while leaving various dimensions of their autonomy fully intact.

Saturday, October 9, 2021

Nudgeability: Mapping Conditions of Susceptibility to Nudge Influence

de Ridder, D., Kroese, F., & van Gestel, L. (2021). 
Perspectives on Psychological Science
Advance online publication. 


Nudges are behavioral interventions to subtly steer citizens' choices toward "desirable" options. An important topic of debate concerns the legitimacy of nudging as a policy instrument, and there is a focus on issues relating to nudge transparency, the role of preexisting preferences people may have, and the premise that nudges primarily affect people when they are in "irrational" modes of thinking. Empirical insights into how these factors affect the extent to which people are susceptible to nudge influence (i.e., "nudgeable") are lacking in the debate. This article introduces the new concept of nudgeability and makes a first attempt to synthesize the evidence on when people are responsive to nudges. We find that nudge effects do not hinge on transparency or modes of thinking but that personal preferences moderate effects such that people cannot be nudged into something they do not want. We conclude that, in view of these findings, concerns about nudging legitimacy should be softened and that future research should attend to these and other conditions of nudgeability.

From the General Discussion

Finally, returning to the debates on nudging legitimacy that we addressed at the beginning of this article, it seems that concerns can be softened insofar as nudges do not impose choice without respecting basic ethical requirements for good public policy. More than a decade ago, philosopher Luc Bovens (2009) formulated the following four principles for nudging to be legitimate: A nudge should allow people to act in line with their overall preferences; a nudge should not induce a change in preferences that would not hold under nonnudge conditions; a nudge should not lead to “infantilization,” such that people are no longer capable of making autonomous decisions; and a nudge should be transparent so that people have control over being in a nudge situation. With the findings from our review in mind, it seems that these legitimacy requirements are fulfilled. Nudges do allow people to act in line with their overall preferences, nudges allow for making autonomous decisions insofar as nudge effects do not depend on being in a System 1 mode of thinking, and making the nudge transparent does not compromise nudge effects.

Friday, October 8, 2021

Can induced reflection affect moral decision-making?

Daniel Spears, et al. (2021) 
Philosophical Psychology, 34:1, 28-46, 
DOI: 10.1080/09515089.2020.1861234


Evidence about whether reflective thinking may be induced and whether it affects utilitarian choices is inconclusive. Research suggests that answering items correctly in the Cognitive Reflection Test (CRT) before responding to dilemmas may lead to more utilitarian decisions. However, it is unclear to what extent this effect is driven by the inhibition of intuitive wrong responses (reflection) versus the requirement to engage in deliberative processing. To clarify this issue, participants completed either the CRT or the Berlin Numeracy Test (BNT) – which does not require reflection – before responding to moral dilemmas. To distinguish between the potential effect of participants’ previous reflective traits and that of performing a task that can increase reflectivity, we manipulated whether participants received feedback for incorrect items. Findings revealed that both CRT and BNT scores predicted utilitarian decisions when feedback was not provided. Additionally, feedback enhanced performance for both tasks, although it only increased utilitarian decisions when it was linked to the BNT. Taken together, these results suggest that performance in a numeric task that requires deliberative thinking may predict utilitarian responses to moral dilemmas. The finding that feedback increased utilitarian decisions only in the case of BNT casts doubt upon the reflective-utilitarian link.

From the General Discussion

Our data, however, did not fully support these predictions. Although feedback resulted in more utilitarian responses to moral dilemmas, this effect was mostly attributable to feedback on the BNT. The effect was not attributable to differences in baseline task performance. Additionally, both CRT and BNT scores predicted utilitarian responses when feedback was not provided. That performance in the CRT predicts utilitarian decisions is in agreement with a previous study linking cognitive reflection to utilitarian choice (Paxton et al., 2012; but see Sirota, Kostovicova, Juanchich, & Dewberry, pre-print, for the absence of an effect when using a verbal CRT without a numeric component).

Thursday, October 7, 2021

Axiological futurism: The systematic study of the future of values

J. Danaher
Volume 132, September 2021


Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades.


• Outlines a new field of inquiry: axiological futurism.

• Defends the role of axiological futurism in understanding technology in society.

• Develops a set of methods for undertaking this inquiry into the axiological future.

• Presents a model for understanding the impact of AI, robotics and ICTs on human values.

From the Conclusion

In conclusion, axiological futurism is the systematic and explicit inquiry into the axiological possibility space for future human (and post-human) civilisations. Axiological futurism is necessary because, given the history of axiological change and variation, it is very unlikely that our current axiological systems will remain static and unchanging in the future. Axiological futurism is also important because it is complementary to other futurological inquiries. While it might initially seem that axiological futurism cannot be a systematic inquiry, this is not the case. Axiological futurism is an exercise in informed speculation.

Wednesday, October 6, 2021

Immoral actors’ meta-perceptions are accurate but overly positive

Lees, J. M., Young, L., & Waytz, A.
(2021, August 16).


We examine how actors think others perceive their immoral behavior (moral meta-perception) across a diverse set of real-world moral violations. Utilizing a novel methodology, we solicit written instances of actors’ immoral behavior (N_total=135), measure motives and meta-perceptions, then provide these accounts to separate samples of third-party observers (N_total=933), using US convenience and representative samples (N_actor-observer pairs=4,615). We find that immoral actors can accurately predict how they are perceived, how they are uniquely perceived relative to the average immoral actor, and how they are misperceived. Actors who are better at judging the motives of other immoral actors also have more accurate meta-perceptions. Yet accuracy is accompanied by two distinct biases: overestimating the positive perceptions others hold, and believing one’s motives are more clearly perceived than they are. These results contribute to a detailed account of the multiple components underlying both accuracy and bias in moral meta-perception.

From the General Discussion

These results collectively suggest that individuals who have engaged in immoral behavior can accurately forecast how others will react to their moral violations.  

Studies 1-4 also found similar evidence for accuracy in observers’ judgments of the unique motives of immoral actors, suggesting that individuals are able to successfully perspective-take with those who have committed moral violations. Observers higher in cognitive ability (Studies 2-3) and empathic concern (Studies 2-4) were consistently more accurate in these judgments, while observers higher in Machiavellianism (Studies 2-4) and the propensity to engage in unethical workplace behaviors (Studies 3-4) were consistently less accurate. This latter result suggests that more frequently engaging in immoral behavior does not grant one insight into the moral minds of others, and in fact is associated with less ability to understand the motives behind others’ immoral behavior.

Despite strong evidence for meta-accuracy (and observer accuracy) across studies, actors’ accuracy in judging how they would be perceived was accompanied by two judgment biases.  Studies 1-4 found evidence for a transparency bias among immoral actors (Gilovich et al., 1998), meaning that actors overestimated how accurately observers would perceive their self-reported moral motives. Similarly, in Study 4 an examination of actors’ meta-perception point estimates found evidence for a positivity bias. Actors systematically overestimate the positive attributions, and underestimate the negative attributions, made of them and their motives. In fact, the single meta-perception found to be the most inaccurate in its average point estimate was the meta-perception of harm caused, which was significantly underestimated.

Tuesday, October 5, 2021

Social Networking and Ethics

Vallor, Shannon
The Stanford Encyclopedia of Philosophy 
(Fall 2021 Edition), Edward N. Zalta (ed.)

Here is an excerpt:

Contemporary Ethical Concerns about Social Networking Services

While early SNS scholarship in the social and natural sciences tended to focus on SNS impact on users’ psychosocial markers of happiness, well-being, psychosocial adjustment, social capital, or feelings of life satisfaction, philosophical concerns about social networking and ethics have generally centered on topics less amenable to empirical measurement (e.g., privacy, identity, friendship, the good life and democratic freedom). More so than ‘social capital’ or feelings of ‘life satisfaction,’ these topics are closely tied to traditional concerns of ethical theory (e.g., virtues, rights, duties, motivations and consequences). These topics are also tightly linked to the novel features and distinctive functionalities of SNS, more so than some other issues of interest in computer and information ethics that relate to more general Internet functionalities (for example, issues of copyright and intellectual property).

Despite the methodological challenges of applying philosophical theory to rapidly shifting empirical patterns of SNS influence, philosophical explorations of the ethics of SNS have continued in recent years to move away from Borgmann and Dreyfus’ transcendental-existential concerns about the Internet, to the empirically-driven space of applied technology ethics. Research in this space explores three interlinked and loosely overlapping kinds of ethical phenomena:
  • direct ethical impacts of social networking activity itself (just or unjust, harmful or beneficial) on participants as well as third parties and institutions;
  • indirect ethical impacts on society of social networking activity, caused by the aggregate behavior of users, platform providers and/or their agents in complex interactions between these and other social actors and forces;
  • structural impacts of SNS on the ethical shape of society, especially those driven by the dominant surveillant and extractivist value orientations that sustain social networking platforms and culture.
Most research in the field, however, remains topic- and domain-driven—exploring a given potential harm or domain-specific ethical dilemma that arises from direct, indirect, or structural effects of SNS, or more often, in combination. Sections 3.1–3.5 outline the most widely discussed of contemporary SNS’ ethical challenges.

Monday, October 4, 2021

Reactance, morality, and disgust: the relationship between affective dispositions and compliance with official health recommendations during the COVID-19 pandemic

Díaz, R., & Cova, F. (2021). 
Cognition & emotion, 1–17. 


Emergency situations require individuals to make important changes in their behavior. In the case of the COVID-19 pandemic, official recommendations to avoid the spread of the virus include costly behaviors such as self-quarantining or drastically diminishing social contacts. Compliance (or lack thereof) with these recommendations is a controversial and divisive topic, and lay hypotheses abound regarding what underlies this divide. This paper investigates which psychological traits separate people who comply with official recommendations from those who don't. In four pre-registered studies on both U.S. and French samples, we found that individuals' self-reported compliance with official recommendations during the COVID-19 pandemic was partly driven by individual differences in moral values, disgust sensitivity, and psychological reactance. We discuss the limitations of our studies and suggest possible applications in the context of health communication.

From the General Discussion

However, results for semi-partial correlations paint a different picture. First, perspective-taking is no longer a significant predictor of past compliance, but only of future compliance. Moreover, the correlation coefficients for care values and perspective-taking were no longer the highest: correlations were of the same order of magnitude for care values as for pathogen disgust and psychological reactance, and quite low (<.10) for perspective-taking. This suggests that, compared to the effects of pathogen disgust and psychological reactance, the effects of care values and perspective-taking were in large part explainable by other variables. In contrast, the overall effect of pathogen disgust seemed mostly unaffected by the introduction of other variables, suggesting that its effect is not explained by them.

The effect of perspective-taking on past and future compliance was particularly low in Study 2a compared to Studies 1a and 1b. What could explain this difference? A first possible explanation is the nature of our samples: two US samples in Studies 1a and 1b, and a French sample in Study 2a. However, it is not clear why this should make a difference to the relationship between perspective-taking and compliance. A second explanation might be that Study 2a included fewer predictors than Studies 1a and 1b. However, this seems unlikely, because the zero-order correlations for perspective-taking were also smaller in Study 2a. A third explanation might be timing: as mentioned earlier, Studies 1a and 1b were conducted in the middle of the first wave, while Study 2a was conducted between the first and second French waves, at a time when victims of COVID-19 were far fewer and less salient in the media. In the absence of actual persons whose perspective could be taken, perspective-taking might have been less likely to motivate compliance.

Sunday, October 3, 2021

Prosocial behavior and altruism: A review of concepts and definitions

Pfattheicher, S., Nielsen, Y. A., & Thielmann, I. 
Current Opinion in Psychology
Available online 23 August 2021


The field of prosociality is flourishing, yet researchers disagree about how to define prosocial behavior and often neglect defining it altogether. In this review, we provide an overview of the breadth of definitions of prosocial behavior and the related concept of altruism. Common to almost all definitions is an emphasis on the promotion of welfare in agents other than the actor. However, definitions of the two concepts differ in terms of whether they emphasize intentions and motives, costs and benefits, and the societal context. To reduce the conceptual ambiguity surrounding the study of prosociality, we urge researchers to provide definitions, to use operationalizations that match their definitions, and to acknowledge the diversity of prosocial behavior.

Concluding remarks

Together with many other researchers, we share the excitement about the study of prosocial behavior. To more strongly connect (abstract) theory and (concrete) behavior we need to carefully define and operationalize our constructs. More conceptual work is needed to clearly distinguish prosocial behavior from altruism and other types of prosocial behavior (such as cooperation and helping), and we should take care to avoid using the terms interchangeably. We hope that the present paper will encourage scholars targeting prosocial behavior or altruism in their research to use definitions more often and mindfully—to further develop the exciting field of prosocial behavior.