Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, August 31, 2022

Narrative Capacity

Toomey, James (August 31, 2021).
100 N.C. L. Rev. 1073
Available at SSRN: https://ssrn.com/abstract=3914839


The doctrine of capacity is a fundamental threshold to the protections of private law. The law only recognizes private decision-making—from exercising the right to transfer or bequeath property and entering into a contract to getting married or divorced—made with the level of cognitive functioning that the capacity doctrine demands. When the doctrine goes wrong, it denies individuals, particularly older adults, access to basic private-law rights on the one hand and ratifies decision-making that may tear apart families and tarnish legacies on the other.

The capacity doctrine in private law is built on a fundamental philosophical mismatch. It is grounded in a cognitive theory of personhood, and determines whether to recognize private decisions based on the cognitive abilities thought by philosophers to entitle persons in general to unique moral status. But to align with the purposes of the substantive doctrines of property and contract, private-law capacity should instead be grounded in a narrative theory of personal identity. Rather than asking whether a decision-maker is a person by measuring their cognitive abilities, the doctrine should ask whether they are the same person by looking to the story of their life.

This Article argues for a new doctrine of capacity under which the law would recognize personal decision-making if and only if it is linked by a coherent narrative structure to the story of the decision-maker’s life. Moreover, the Article offers a test for determining which decisions meet this criterion and explains how the doctrine would work in practice.


Scholars and courts have long recognized that the threshold doctrine of capacity in private law requires reform to meet the needs of our aging society.  What they have not clearly seen is the doctrine’s fundamental error—a philosophical misalignment between the legal test, based on the construct of personhood, and its purposes, which are concerned with personal identity. This Article has excavated this distinction. And it has articulated and evaluated an alternative.

We think of ourselves as stories and we make meaning of our lives through our stories. That is what is at stake in the doctrine of capacity—whether an individual may continue to write their story by making decisions and choices.  Concern for the stories of our lives should be a paramount guiding principle of the capacity doctrine. In short, courts should only intervene in our decision-making where the story we would tell with our choices ceases to be our story at all.

Tuesday, August 30, 2022

Free Ethics CE

I will support Beth Rom-Rymer's voting kickoff for APA President-Elect with a continuing education program!!

I will be presenting a free CE program on September 15, 2022, the first day of voting for APA President-Elect.  I will be encouraging participants to vote in the APA election and to put Beth in the #1 spot.

This will be the first in a series of three free workshops to promote Beth's candidacy.


Violence against women at work

Adams-Prassl, A., Huttunen, K., Nix, E., 
& Zhang, N. (2022). University of Oxford.
Working Paper


Between-colleague conflicts are common. We link every police report in Finland to administrative data to identify assaults between colleagues, and economic outcomes for victims, perpetrators, and firms. We document large, persistent labor market impacts of between-colleague violence on victims and perpetrators. Male perpetrators experience substantially weaker consequences after attacking women compared to men. Perpetrators’ economic power in male-female violence partly explains this asymmetry. Male-female violence causes a decline in women at the firm. There is no change in within-network hiring, ruling out supply-side explanations via "whisper networks". Only male-managed firms lose women. Female managers do one important thing differently: fire perpetrators.


Our results have a number of implications. First, female victims of workplace violence have few economic incentives to report violence at work. Even in the relatively severe cases reported to the police in our data, the male perpetrator experiences relatively small labor market costs for his actions. This is consistent with the vast under-reporting of workplace harassment and abuse suggested by survey data. A major, known problem in preventing harassment at work is that victims rarely report the problem to their employer (Magley, 2002). Women under-reporting harassment and violence at the hands of a colleague (and in particular one’s manager) is easily reconciled with the comparative lack of career consequences for perpetrators of male-female violence we have documented.

Second, given that under-reporting is common, we are likely only observing a small fraction of all cases of workplace violence. As described in Section 2, just 10% of physical assaults are reported to the police in Finland, with lower reporting rates for crimes considered less serious by the victim (EU Agency for Fundamental Rights, 2015; European Institute for Crime Prevention & Control, 2009). Conservatively, this implies that the incidence of workplace violence is at least 10 times larger than can be documented by police reports. At the same time, under-reporting and selective reporting are relevant for the external validity of our results. While we provide the first evidence of the causal impacts of workplace violence on perpetrators, victims, and the broader firm, we can only do so for the (likely) more severe cases reported to police. We might not expect to see quite as large impacts on victims, perpetrators, and the firm from less severe abuse by colleagues.

Monday, August 29, 2022

Debiasing System 1: Training favours logical over stereotypical intuiting

Boissin, E., Caparos, S., Voudouri, A., & De Neys, W.
Judgment and Decision Making, Vol. 17, No. 4, 
July 2022, pp. 646–690


Whereas people’s reasoning is often biased by intuitive stereotypical associations, recent debiasing studies suggest that performance can be boosted by short training interventions that stress the underlying problem logic. The nature of this training effect remains unclear. Does training help participants correct erroneous stereotypical intuitions through deliberation? Or does it help them develop correct intuitions? We addressed this issue in four studies with base-rate neglect and conjunction fallacy problems. We used a two-response paradigm in which participants first gave an initial intuitive response, under time pressure and cognitive load, and then gave a final response after deliberation. Studies 1A and 2A showed that training boosted performance and did so as early as the intuitive stage. After training, most participants solved the problems correctly from the outset and no longer needed to correct an initial incorrect answer through deliberation. Studies 1B and 2B indicated that this sound intuiting persisted over at least two months. The findings confirm that a short training can debias reasoning at an intuitive “System 1” stage and get reasoners to favour logical over stereotypical intuitions.

From the General Discussion

Traditionally, it is assumed in the literature that debiasing interventions work by boosting deliberation and get people to better correct erroneous intuitions (Lilienfeld et al., 2009; Milkman et al., 2009). However, in many daily life situations reasoners will simply not have the time (or resources) to engage in costly deliberation. Hence, if our interventions only taught participants to deliberate more, they would be less than optimal (Boissin et al., 2021). As in most educational settings, we ultimately do not only want people to correct erroneous intuitions but to avoid biased intuitions altogether (Evans, 2019; Milkman et al., 2009; Reyna et al., 2015; Stanovich, 2018). The present study indicates that debiasing interventions in which the problem logic is briefly explained have such potential.

To avoid misinterpretation, it is important to highlight that our training did not lead to transfer effects. The training should thus not be conceived as a panacea that magically tunes the whole System 1 in one single step. The training results generalized to base-rate and conjunction tasks, with overall similar effects across the two types of tasks, showing that participants can be trained to intuit correctly with different types of reasoning problems. However, training base-rates did not help to solve the conjunction fallacy or other unrelated problems, and vice versa. The training effects were task specific. Reasoners did not learn to intuit (or deliberate) better in general. They got better at the very specific problem they were trained on. This fits with the finding that existing debiasing or cognitive training programs are often task or domain specific (Lilienfeld et al., 2009; Sala & Gobet, 2019; but also see Morewedge et al., 2015; Trouche et al., 2014). Our key finding is that this task specific training can operate at the intuitive level and is persistent. When we talk about “System 1 debiasing” it should be conceived at this task specific level.

Sunday, August 28, 2022

Dr. Oz Shouldn’t Be a Senator—or a Doctor

Timothy Caulfield
Scientific American
Originally posted 15 DEC 21

While holding a medical license, Mehmet Oz, widely known as Dr. Oz, has long pushed misleading, science-free and unproven alternative therapies such as homeopathy, as well as fad diets, detoxes and cleanses. Some of these things have been potentially harmful, including hydroxychloroquine, which he once touted as beneficial for the treatment or prevention of COVID. This assertion has been thoroughly debunked.

He’s built a tremendous following around his lucrative but evidence-free advice. So, are we surprised that Oz is running as a Republican for the U.S. Senate in Pennsylvania? No, we are not. Misinformation-spouting celebrities seem to be a GOP favorite. This move is very on brand for both Oz and the Republican Party.

His candidacy is a reminder that tolerating and/or enabling celebrity pseudoscience (I’m thinking of you, Oprah Winfrey!) can have serious and enduring consequences. Much of Oz’s advice was bunk before the pandemic, it is bunk now, and there is no reason to assume it won’t be bunk after—even if he becomes Senator Oz. Indeed, as Senator Oz, it’s all but guaranteed he would bring pseudoscience to the table when crafting and voting on legislation that affects the health and welfare of Americans.

To someone who researches the spread of health misinformation, Oz’s candidacy remains deeply grating in that “of course he is” kind of way. But it is also an opportunity to highlight several realities about pseudoscience, celebrity physicians and the current regulatory environment that allows people like him to continue to call themselves doctor.

Before the pandemic I often heard people argue that the wellness woo coming from celebrities like Gwyneth Paltrow, Tom Brady and Oz was mostly harmless noise. If people want to waste their money on ridiculous vagina eggs, bogus diets or unproven alternative remedies, why should we care? Buyer beware, a fool and their money, a sucker is born every minute, etc., etc.

But we know, now more than ever, that pop culture can—for better or worse—have a significant impact on health beliefs and behaviors. Indeed, one need only consider the degree to which Jenny McCarthy gave life to the vile claim that autism is linked to vaccination. Celebrity figures like podcast host Joe Rogan and football player Aaron Rodgers have greatly added to the chaotic information regarding COVID-19 by magnifying unsupported claims.

Saturday, August 27, 2022

Counterfactuals and the logic of causal selection

Quillien, T., & Lucas, C. G. (2022, June 13)


Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g. the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather. . . )? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions.

From the General Discussion

Judgments of causation are closely related to assignments of blame, praise, and moral responsibility.  For instance, when two cars crash at an intersection, we say that the accident was caused by the driver who went through a red light (not by the driver who went through a green light; Knobe and Fraser, 2008; Icard et al., 2017; Hitchcock and Knobe, 2009; Roxborough and Cumby, 2009; Alicke, 1992; Willemsen and Kirfel, 2019); and we also blame that driver for the accident. According to some theorists, the fact that we judge the norm-violator to be blameworthy or morally responsible explains why we judge that he was the cause of the accident. This might be because our motivation to blame distorts our causal judgment (Alicke et al., 2011), because our intuitive concept of causation is inherently normative (Sytsma, 2021), or because of pragmatics confounds in the experimental tasks that probe the effect of moral violations on causal judgment (Samland & Waldmann, 2016).

Under these accounts, the explanation for why moral considerations affect causal judgment should be completely different from the explanation for why other factors (e.g., prior probabilities, what happened in the actual world, the causal structure of the situation) affect causal judgment. We favor a more parsimonious account: the counterfactual approach to causal judgment (of which our theory is one instantiation) provides a unifying explanation for the influence of both moral and non-moral considerations on causal judgment (Hitchcock & Knobe, 2009).

Finally, many formal theories of causal reasoning aim to model how people make causal inferences (e.g. Cheng, 1997; Griffiths & Tenenbaum, 2005; Lucas & Griffiths, 2010; Bramley et al., 2017; Jenkins & Ward, 1965). These theories are not concerned with the problem of causal selection, the focus of the present paper. It is in principle possible that people use the same algorithms they use for causal inference when they engage in causal selection, but in practice models of causal inference have not been able to predict how people select causes (see Quillien and Barlev, 2022; Morris et al., 2019).
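The selection criterion the authors describe — sample counterfactuals that are both a priori likely and similar to what actually happened, then ask which candidate cause covaries with the effect across those samples — can be illustrated with a toy simulation of the forest-fire example. This sketch is ours, not the authors' model; the `stability` parameter and the prior probabilities are illustrative assumptions.

```python
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists of 0/1 values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def sample(actual, prior, stability=0.5):
    # Counterfactual value of one variable: keep the actual value with
    # probability `stability` (similarity to what happened), otherwise
    # resample from the variable's prior (a priori likelihood).
    return actual if random.random() < stability else int(random.random() < prior)

random.seed(0)
N = 10_000
lightning, oxygen, fire = [], [], []
for _ in range(N):
    l = sample(1, prior=0.10)   # lightning strikes are a priori unlikely
    o = sample(1, prior=0.99)   # oxygen in the air is a priori near-certain
    lightning.append(l)
    oxygen.append(o)
    fire.append(l and o)        # the fire requires both

# Lightning varies across counterfactuals and tracks the fire closely;
# oxygen barely varies, so it barely correlates with the effect.
print(correlation(lightning, fire) > correlation(oxygen, fire))  # True
```

Both factors are necessary for the fire, yet only the a priori unlikely one ends up highly correlated with the effect across the sampled counterfactuals — which is exactly the asymmetry in people's causal selections that the theory is meant to capture.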

Friday, August 26, 2022

The Selective Laziness of Reasoning

Trouche, E., Johansson, P., Hall, L., & Mercier, H. 
(2016). Cognitive science, 40(8), 2122–2136.


Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

From the Discussion

These experiments provide a very clear demonstration of the selective laziness of reasoning. When reasoning produces arguments, it mostly produces post-hoc justifications for intuitive answers, and it is not particularly critical of one’s arguments for invalid answers. By contrast, when reasoning evaluates the very same arguments as if they were someone else’s, it proves both critical and discriminating.

The present results are analogous to those observed in the belief bias literature (e.g., Evans et al., 1983). When participants evaluate an argument whose conclusion they agree with, they tend to be neither critical (they accept most arguments) nor discriminating (they are not much more likely to reject invalid than valid arguments). By contrast, when they evaluate arguments whose conclusion they disagree with, they tend to be more critical (they reject more arguments) and more discriminating (they are much more likely to reject invalid than valid arguments). The similarity is easily explained by the fact that when reasoning produces arguments for one’s position, it is automatically in a situation in which it agrees with the argument’s conclusion.

Selective laziness can be interpreted in light of the argumentative theory of reasoning (Mercier & Sperber, 2011). This theory hypothesizes that reasoning is best employed in a dialogical context. In such contexts, opening a discussion with a relatively weak argument is often sensible: It saves the trouble of computing the best way to convince a specific audience, and if the argument proves unconvincing, its flaws can be addressed in the back and forth of argumentation. Indeed, the interlocutor typically provides counter-arguments that help the speaker refine her arguments in appropriate ways (for an extended argument, see Mercier, Bonnier, & Trouche, unpublished data). As a result, the laziness of argument production might not be a flaw but an adaptive feature of reasoning. By contrast, people should properly evaluate other people’s arguments, so as not to accept misleading information—hence the selectivity of reasoning’s laziness.

In short: We make better judges for others, and better defense attorneys for ourselves (paraphrasing an old saying).

Thursday, August 25, 2022

South Dakota Governor Kristi Noem may have "engaged in misconduct," ethics board says

CBS News
Originally posted 23 AUG 22

A South Dakota ethics board on Monday said it found sufficient information that Gov. Kristi Noem may have "engaged in misconduct" when she intervened in her daughter's application for a real estate appraiser license, and it referred a separate complaint over her state airplane use to the state's attorney general for investigation.

The three retired judges on the Government Accountability Board determined that "appropriate action" could be taken against Noem for her role in her daughter's appraiser licensure, though it didn't specify the action.

The board's moves potentially escalate the ramifications of investigations into Noem. The Republican governor faces reelection this year and has also positioned herself as an aspirant to the White House in 2024. She is under scrutiny from the board after Jason Ravnsborg, the state's former Republican attorney general, filed complaints that stemmed from media reports on Noem's actions in office. She has denied any wrongdoing.

After meeting in a closed-door session for one hour Monday, the board voted unanimously to invoke procedures that allow for a contested case hearing to give Noem a chance to publicly defend herself against allegations of "misconduct" related to "conflicts of interest" and "malfeasance." The board also dismissed Ravnsborg's allegations that Noem misused state funds in the episode.

However, the retired judges left it unclear how they will proceed. Lori Wilbur, the board chair, said the complaint was "partially dismissed and partially closed," but added that the complaint could be reopened. She declined to discuss what would cause the board to reopen the complaint.

Wednesday, August 24, 2022

Dual use of artificial-intelligence-powered drug discovery

Urbina, F., Lentzos, F., Invernizzi, C. et al. 
Nat Mach Intell 4, 189–191 (2022). 

The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, Spiez Laboratory, convenes the ‘convergence’ conference series set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally. The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies. Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.

Risk of misuse

The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting.  Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.


Broader effects on society

There is a need for discussions across traditional boundaries and multiple disciplines to allow for a fresh look at AI for de novo design and related technologies from different perspectives and with a wide variety of mindsets. Here, we give some recommendations that we believe will reduce potential dual-use concerns for AI in drug discovery. Scientific conferences, such as the Society of Toxicology and American Chemical Society, should actively foster a dialogue among experts from industry, academia and policy making on the implications of our computational tools.

Tuesday, August 23, 2022

Tackling Implicit Bias in Health Care

J. A. Sabin
N Engl J Med 2022; 387:105-107
DOI: 10.1056/NEJMp2201180

Implicit and explicit biases are among many factors that contribute to disparities in health and health care. Explicit biases, the attitudes and assumptions that we acknowledge as part of our personal belief systems, can be assessed directly by means of self-report. Explicit, overtly racist, sexist, and homophobic attitudes often underpin discriminatory actions. Implicit biases, by contrast, are attitudes and beliefs about race, ethnicity, age, ability, gender, or other characteristics that operate outside our conscious awareness and can be measured only indirectly. Implicit biases surreptitiously influence judgment and can, without intent, contribute to discriminatory behavior. A person can hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs.

Moreover, our individual biases operate within larger social, cultural, and economic structures whose biased policies and practices perpetuate systemic racism, sexism, and other forms of discrimination. In medicine, bias-driven discriminatory practices and policies not only negatively affect patient care and the medical training environment, but also limit the diversity of the health care workforce, lead to inequitable distribution of research funding, and can hinder career advancement.

A review of studies involving physicians, nurses, and other medical professionals found that health care providers’ implicit racial bias is associated with diagnostic uncertainty and, for Black patients, negative ratings of their clinical interactions, less patient-centeredness, poor provider communication, undertreatment of pain, views of Black patients as less medically adherent than White patients, and other ill effects. These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of U.S. medical students surveyed reported having been exposed to negative comments about Black patients by attending or resident physicians, and those students demonstrated significantly greater implicit racial bias in year 4 than they had in year 1.

A review of the literature on reducing implicit bias, which examined evidence on many approaches and strategies, revealed that methods such as exposure to counterstereotypical exemplars, recognizing and understanding others’ perspectives, and appeals to egalitarian values have not resulted in reduction of implicit biases. Indeed, no interventions for reducing implicit biases have been shown to have enduring effects. Therefore, it makes sense for health care organizations to forgo bias-reduction interventions and focus instead on eliminating discriminatory behavior and other harms caused by implicit bias.

Though pervasive, implicit bias is hidden and difficult to recognize, especially in oneself. It can be assumed that we all hold implicit biases, but both individual and organizational actions can combat the harms caused by these attitudes and beliefs. Awareness of bias is one step toward behavior change. There are various ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own mistaken assumptions, and critically reflecting on biased behavior that we engage in or experience. Gonzalez and colleagues offer 12 tips for teaching recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical reflection exercises, and engaging learners in skill-building exercises and activities in which they must embrace their discomfort.

Monday, August 22, 2022

Meta-Analysis of Inequality Aversion Estimates

Nunnari, S., & Pozzi, M. (2022).
SSRN Electronic Journal.


Inequality aversion is one of the most widely used concepts in behavioral economics. We conduct a meta-analysis to systematically accumulate knowledge from empirical estimates of the inequality aversion coefficients in models of outcome-based other-regarding preferences à la Fehr and Schmidt (1999). Our analysis indicates a mean disadvantageous-inequality (envy) coefficient of roughly 0.43 and a mean advantageous-inequality (guilt) coefficient of roughly 0.29, and we document how reported estimates vary depending on the observable characteristics of the study design.


In this paper, we reported the results of a meta-analysis of empirical estimates of the inequality aversion coefficients in models of outcome-based other-regarding preferences à la Fehr and Schmidt (1999). We conduct both a frequentist analysis (using a multi-level random-effects model) and a Bayesian analysis (using a Bayesian hierarchical model) to provide a “weighted average” for α and β. The results from the two approaches are nearly identical and support the hypothesis of inequality concerns. From the frequentist analysis, we learn that the mean envy coefficient is 0.425 with a 95% confidence interval of [0.244, 0.606]; the mean guilt coefficient is, instead, 0.291 with a 95% confidence interval of [0.218, 0.363]. This means that, on average, an individual is willing to spend €0.41 to increase others’ earnings by €1 when ahead, and €0.74 to decrease others’ earnings by €1 when behind. The theoretical assumptions α ≥ β and 0 ≤ β < 1 are upheld in our empirical analysis, but we cannot conclude that the disadvantageous inequality coefficient is statistically greater than the coefficient for advantageous inequality. We also observe no correlation between the two parameters.
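The €0.41 and €0.74 willingness-to-pay figures follow directly from the mean coefficients under the Fehr and Schmidt (1999) utility function. The sketch below is our reconstruction of that arithmetic, not code from the paper:

```python
# Fehr-Schmidt (1999) two-player utility:
#   u_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
# Mean coefficients from the meta-analysis excerpt above.
alpha = 0.425  # envy: weight on disadvantageous inequality (when behind)
beta = 0.291   # guilt: weight on advantageous inequality (when ahead)

# When ahead, spending c lowers own payoff by c and raises the other's by 1,
# shrinking advantageous inequality by (c + 1). Indifference: c = beta * (c + 1).
wtp_ahead = beta / (1 - beta)

# When behind, spending c lowers own payoff by c and the other's by 1,
# shrinking disadvantageous inequality by (c + 1). Indifference: c = alpha * (c + 1).
wtp_behind = alpha / (1 - alpha)

print(round(wtp_ahead, 2))   # 0.41
print(round(wtp_behind, 2))  # 0.74
```

Solving each indifference condition for c reproduces the €0.41 (when ahead) and €0.74 (when behind) figures quoted in the excerpt.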

Sunday, August 21, 2022

Medial and orbital frontal cortex in decision-making and flexible behavior

Klein-Flügge, M. C., Bongioanni, A., & 
Rushworth, M. F. (2022).


The medial frontal cortex and adjacent orbitofrontal cortex have been the focus of investigations of decision-making, behavioral flexibility, and social behavior. We review studies conducted in humans, macaques, and rodents and argue that several regions with different functional roles can be identified in the dorsal anterior cingulate cortex, perigenual anterior cingulate cortex, anterior medial frontal cortex, ventromedial prefrontal cortex, and medial and lateral parts of the orbitofrontal cortex. There is increasing evidence that the manner in which these areas represent the value of the environment and specific choices is different from subcortical brain regions and more complex than previously thought. Although activity in some regions reflects distributions of reward and opportunities across the environment, in other cases, activity reflects the structural relationships between features of the environment that animals can use to infer what decision to take even if they have not encountered identical opportunities in the past.


Neural systems that represent the value of the environment exist in many vertebrates. An extended subcortical circuit spanning the striatum, midbrain, and brainstem nuclei of mammals corresponds to these ancient systems. In addition, however, mammals possess several frontal cortical regions concerned with guidance of decision-making and adaptive, flexible behavior. Although these frontal systems interact extensively with these subcortical circuits, they make specific contributions to behavior and also influence behavior via other cortical routes. Some areas such as the ACC, which is present in a broad range of mammals, represent the distribution of opportunities in an environment over space and time, whereas other brain regions such as amFC and dmPFC have roles in representing structural associations and causal links between environmental features, including aspects of the social environment (Figure 8). Although the origins of these areas and their functions are traceable to rodents, they are especially prominent in primates. They make it possible not just to select choices on the basis of past experience of identical situations, but to make inferences to guide decisions in new scenarios.

Saturday, August 20, 2022

Truth by Repetition … without repetition: Testing the effect of instructed repetition on truth judgments

Mattavelli, S., Corneille, O., & Unkelbach, C.
Journal of Experimental Psychology:
Learning, Memory, and Cognition
June 2022


Past research indicates that people judge repeated statements as more true than new ones. An experiential consequence of repetition that may underlie this “truth effect” is processing fluency: processing statements feels easier following their repetition. In three preregistered experiments (N=684), we examined the effect of merely instructed repetition (i.e., not experienced) on truth judgments. Experiments 1-2 instructed participants that some statements were present (vs. absent) in an exposure phase allegedly undergone by other individuals. We then asked them to rate such statements based on how they thought those individuals would have done. Overall, participants rated repeated statements as more true than new statements. The instruction-based repetition effects were significant but also significantly weaker than those elicited by the experience of repetition (Experiments 1 & 2). Additionally, Experiment 2 clarified that adding a repetition status tag in the experienced repetition condition did not impact truth judgments. Experiment 3 further showed that the instruction-based effect was still detectable when participants provided truth judgments for themselves rather than estimating other people’s judgments. We discuss the mechanisms that can explain these effects and their implications for advancing our understanding of the truth effect.

(Beginning of the) General Discussion 

Deciding whether information is true or false is a challenging task. Extensive research has shown that one key variable that people often use to judge the truth of a statement is repetition (e.g., Hasher et al., 1977): repeated statements are judged more true than new ones (see Dechêne et al., 2010). Virtually all explanations of this truth effect refer to the processing consequences of repetition: higher recognition rates than new statements, higher familiarity, and higher fluency (see Unkelbach et al., 2019). However, in many communication situations, people get to know that a statement is repeated (e.g., that it occurred frequently) without prior exposure to the statement. Here, we asked whether repetition can be used as a cue for truth without prior exposure, and thus, in the absence of experiential consequences of repetition such as fluency.


This work represents the first attempt to assess the impact of instructed repetition on truth judgments. We found that the truth effect was stronger when repetition was experienced rather than merely instructed in three experiments. However, we provided initial evidence that a component of the effect is unrelated to the experience of repetition. A truth effect was still detectable in the absence of any internal cue (i.e., fluency) induced by the experienced repetition of the statement and, therefore, should be conditional upon learning history or naïve beliefs. This finding paves the way for new research avenues interested in isolating the unique contribution of known repetition and experienced fluency on truth judgments.

This research has multiple applications to psychotherapy, including how patients come to know what information about themselves and others is true, and how much of that knowledge stems from repetition versus internal cues, beliefs, or feelings. Human beings are meaning makers, trying to understand how the world functions based on the meanings they project onto others.

Friday, August 19, 2022

Too cynical to reconnect: Cynicism moderates the effect of social exclusion on prosociality through empathy

B. K. C. Choy, K. Eom, & N. P. Li
Personality and Individual Differences
Volume 178, August 2021, 110871


Extant findings are mixed on whether social exclusion impacts prosociality. We propose one factor that may underlie the mixed results: Cynicism. Specifically, cynicism may moderate the exclusion-prosociality link by influencing interpersonal empathy. Compared to less cynical individuals, we expected highly cynical individuals who were excluded to experience less empathy and, consequently, less prosocial behavior. Using an online ball-tossing game, participants were randomly assigned to an exclusion or inclusion condition. Consistent with our predictions, the effect of social exclusion on prosociality through empathy was contingent on cynicism, such that only less-cynical individuals responded to exclusion with greater empathy, which, in turn, was associated with higher levels of prosocial behavior. We further showed this effect to hold for cynicism, but not other similar traits typically characterized by high disagreeableness. Findings contribute to the social exclusion literature by suggesting a key variable that may moderate social exclusion's impact on resultant empathy and prosocial behavior and are consistent with the perspective that people who are excluded try to not only become included again but to establish alliances characterized by reciprocity.

From the Discussion

While others have proposed that empathy may be reflexively inhibited upon exclusion (DeWall & Baumeister, 2006; Twenge et al., 2007), our findings indicate that this process of inhibition—at least for empathy—may be more flexible than previously thought. If reflexive, individuals would have shown a similar level of empathy regardless of cynicism. That highly- and less-cynical individuals displayed different levels of empathy indicates that some other processes are in play. Our interpretation is that the process through which empathy is exhibited or inhibited may depend on one’s appraisals of the physical and social situation. 

Importantly, unlike cynicism, other similarly disagreeable dispositional traits such as Machiavellianism, psychopathy, and SDO (Social Dominance Orientation) did not modulate the empathy-mediated link between social exclusion and prosociality. This suggests that cynicism is conceptually different from other traits of a seemingly negative nature. Indeed, whereas cynics may hold a negative view of the intentions of others around them, Machiavellians are characterized by a negative view of others’ competence and a pragmatic and strategic approach to social interactions (Jones, 2016). Similarly, whereas cynics view others’ emotions as ingenuine, psychopathic individuals are further distinguished by their high levels of callousness and impulsivity (Paulhus, 2014). Likewise, whereas cynics may view the world as inherently competitive, they may not display the same preference for hierarchy that high-SDO individuals do (Ho et al., 2015). Thus, despite the similarities between these traits, our findings affirm their substantive differences from cynicism.

Thursday, August 18, 2022

Dunning–Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence

Pennycook, G., Ross, R.M., Koehler, D.J. et al. 
Psychon Bull Rev 24, 1774–1784 (2017). 


The Dunning–Kruger effect refers to the observation that the incompetent are often ill-suited to recognize their incompetence. Here we investigated potential Dunning–Kruger effects in high-level reasoning and, in particular, focused on the relative effectiveness of metacognitive monitoring among particularly biased reasoners. Participants who made the greatest numbers of errors on the cognitive reflection test (CRT) overestimated their performance on this test by a factor of more than 3. Overestimation decreased as CRT performance increased, and those who scored particularly high underestimated their performance. Evidence for this type of systematic miscalibration was also found on a self-report measure of analytic-thinking disposition. Namely, genuinely nonanalytic participants (on the basis of CRT performance) overreported their “need for cognition” (NC), indicating that they were dispositionally analytic when their objective performance indicated otherwise. Furthermore, estimated CRT performance was just as strong a predictor of NC as was actual CRT performance. Our results provide evidence for Dunning–Kruger effects both in estimated performance on the CRT and in self-reported analytic-thinking disposition. These findings indicate that part of the reason why people are biased is that they are either unaware of or indifferent to their own bias.

General discussion

Our results provide empirical support for Dunning–Kruger effects in both estimates of reasoning performance and self-reported thinking disposition. Particularly intuitive individuals greatly overestimated their performance on the CRT—a tendency that diminished and eventually reversed among increasingly analytic individuals. Moreover, self-reported analytic-thinking disposition—as measured by the Ability and Engagement subscales of the NC scale—was just as strongly (if not more strongly) correlated with estimated CRT performance as with actual CRT performance. In addition, an analysis using an additional performance-based measure of analytic thinking—the heuristics-and-biases battery—revealed a systematic miscalibration of self-reported NC, wherein relatively intuitive individuals report that they are more analytic than is justified by their objective performance. Together, these findings indicate that participants who are low in analytic thinking (so-called “intuitive thinkers”) are at least somewhat unaware of (or unresponsive to) their propensity to rely on intuition in lieu of analytic thought during decision making. This conclusion is consistent with previous research that has suggested that the propensity to think analytically facilitates metacognitive monitoring during reasoning (Pennycook et al., 2015b; Thompson & Johnson, 2014). Those who are genuinely analytic are aware of the strengths and weaknesses of their reasoning, whereas those who are genuinely nonanalytic are perhaps best described as “happy fools” (De Neys et al., 2013).

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted 16 JUL 22

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., 
& Young, L. (2022, May 27). 


Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we experimentally show that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism for the way that the observability of actions impacts virtue discounting.

We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.

Monday, August 15, 2022

Modular Morals: Mapping the organisation of the moral brain

Wilkinson, J., Curry, O. S., et al.
OSF Home
Last Updated: 2022-07-12


Is morality the product of multiple domain-specific psychological mechanisms, or one domain-general mechanism? Previous research suggests that morality consists of a range of solutions to the problems of cooperation recurrent in human social life. This theory of ‘morality as cooperation’ suggests that there are (at least) seven specific moral domains: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. However, it is unclear how these types of morality are implemented at the neuroanatomical level. The possibilities are that morality is (1) the product of multiple distinct domain-specific adaptations for cooperation, (2) the product of a single domain-general adaptation which learns a range of moral rules, or (3) the product of some combination of domain-specific and domain-general adaptations. To distinguish between these possibilities, we first conducted an anatomical likelihood estimation meta-analysis of previous studies investigating the relationship between these seven moral domains and neuroanatomy. This meta-analysis provided evidence for a combination of specific and general adaptations. Next, we investigated the relationship between the seven types of morality – as measured by the Morality as Cooperation Questionnaire (Relevance) – and grey matter volume in a large neuroimaging (n=607) sample. No associations between moral values and grey matter volume survived whole-brain exploratory testing. We conclude that whatever combination of mechanisms are responsible for morality, either they are not neuroanatomically localised, or else their localisation is not manifested in grey matter volume. Future research should employ phylogenetically informed a priori predictions, as well as alternative measures of morality and of brain function.

Sunday, August 14, 2022

Political conspiracy theories as tools for mobilization and signaling

Marie, A., & Petersen, M. B. (2022).
Current Opinion in Psychology, 101440


Political conspiracist communities emerge and bind around hard-to-falsify narratives about political opponents or elites convening to secretly exploit the public in contexts of perceived political conflict. While the narratives appear descriptive, we propose that their content as well as the cognitive systems regulating their endorsement and dissemination may have co-evolved, at least in part, to reach coalitional goals: To drive allies’ attention to the social threat to increase their commitment and coordination for collective action, and to signal devotion to gain within-group status. Those evolutionary social functions may be best fulfilled if individuals endorse the conspiratorial narrative sincerely.


•  Political conspiracist groups unite around clear-cut and hard-to-falsify narratives about political opponents or elites secretly organizing to deceive and exploit the public.

•  Such social threat-based narratives and the cognitive systems that regulate them may have co-evolved, at least in part, to serve social rather than epistemic functions: facilitating ingroup recruitment, coordination, and signaling for cooperative benefits.

•  While social in nature, those adaptive functions may be best fulfilled if group leaders and members endorse conspiratorial narratives sincerely.


Political conspiracy theories are cognitively attractive, hard-to-falsify narratives about the secret misdeeds of political opponents and elites. While descriptive in appearance, endorsement and expression of those narratives may be regulated, at least partly, by cognitive systems pursuing social goals: to attract attention of allies towards a social threat to enhance commitment and coordination for joint action (in particular, in conflict), and signal devotion to gain within-group status.

Rather than constituting a special category of cultural beliefs, we see political conspiracy theories as part of a wider family of abstract ideological narratives denouncing how evil actors, villains, or an oppressive system—more or less real and clearly delineated—exploit a virtuous victim group. This family also comprises anti-capitalist vs. anti-communist or religious propaganda, white supremacist vs. anti-racist discourses, etc. Future research should explore the content properties that make those threat-based narratives compelling; the balance between their hypothetical social functions of signaling, commitment, and coordination enhancers; and the factors moderating their spread (such as intellectual humility and beliefs that the outgroup does not hate the ingroup).

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D.A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375


Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.


In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Friday, August 12, 2022

Cross-Cultural Differences and Similarities in Human Value Instantiation

Hanel PHP, Maio GR, et al. (2018).
Front. Psychol., 29 May 2018
Sec. Personality and Social Psychology


Previous research found that the within-country variability of human values (e.g., equality and helpfulness) clearly outweighs between-country variability. Across three countries (Brazil, India, and the United Kingdom), the present research tested in student samples whether between-nation differences reside more in the behaviors used to concretely instantiate (i.e., exemplify or understand) values than in their importance as abstract ideals. In Study 1 (N = 630), we found several meaningful between-country differences in the behaviors that were used to concretely instantiate values, alongside high within-country variability. In Study 2 (N = 677), we found that participants were able to match instantiations back to the values from which they were derived, even if the behavior instantiations were spontaneously produced only by participants from another country or were created by us. Together, these results support the hypothesis that people in different nations can differ in the behaviors that are seen as typical as instantiations of values, while holding similar ideas about the abstract meaning of the values and their importance.


Overall, Study 1 revealed that most examples that are spontaneously attached to values vary in how much they are shaped by context. In most cases, within-country variability outweighed between-country differences. Nevertheless, many of the instances for which between-country differences were found could be linked to contextual factors. In Study 2, we found that most instantiations that had been spontaneously produced by participants in another country could reliably be matched to the values that they exemplified. Taken together, our results further challenge “the prevailing conception of culture as shared meaning system” (Schwartz, 2014, p. 5), as long as culture is equated with country or nation: the within-country variability outweighs the between-country variability, similar to values on an abstract level (Fischer and Schwartz, 2011). In other words, people endorse the same values to a similar extent across countries and also instantiate them similarly. We hope this research helps to lay a foundation for future research examining these differences and their implications for intercultural understanding and communication.

Thursday, August 11, 2022

Can you really do more than what duty requires?

Roger Crisp
The New Statesman
Originally posted 8 JUN 22

Here is an excerpt:

Since supererogation involves the paradox of accepting moral duties that do not require one to do what is morally best, why do we continue to find the idea so compelling?

One reason might be that we think that without supererogation the dictates of morality would be unacceptably demanding. If each of us has a genuine duty to benefit others as much as we can, then, given the vast number of individuals in serious need, most of the better-off would be required to make major sacrifices to live a virtuous life. Supererogation puts a limit on such requirements.

The idea that we can go beyond our duty in a praiseworthy way may be attractive, then, because we need to balance morality with self-interest. Here we ought to remember that each of us reasonably attaches a certain amount of importance to how our own lives go. So, each of us has reason to advance our own happiness independent of our duty to benefit others (which is why we describe some cases of helping others as a “sacrifice”). The need to strike a balance between our moral duties and our self-interest may explain why the notion of supererogation is so appealing.

But this doesn’t get us out of Sidgwick’s paradox: anyone who knows the morally best thing to do, but consciously decides not to do it, seems morally “lazy”.

Given the current state of the world, this means that morality is much more demanding than we typically think. Many of us should be doing a great deal more to alleviate the suffering of others, and doing this may cost us not only resources, but to some extent our own happiness or well-being.

In making donations to help strangers, we must ask when our reasons for keeping resources for ourselves are outweighed by reasons of beneficence. Under a more demanding view of morality, I should donate the money I could use to upgrade my TV to a charity that can save someone’s sight. Similarly, if the billionaire class could eradicate world poverty by donating 50 per cent of their wealth to development agencies, then they should do so immediately.

This may sound austere to our contemporary ears, but the Ancient Greeks and their philosophers thought morality could be rather demanding, and yet they never even considered the idea that duty was something you could go beyond. According to them, there are right things to do, and we should do them, making us virtuous and praiseworthy. And if we don’t, we are acting wrongly, we deserve blame, and we should feel guilty and ashamed.

It’s plausible to think that, once our health and wealth have reached certain thresholds, the things that really matter for our well-being – friendship, family, meaningful activities, and so on – are largely independent of our financial position. So, making much bigger sacrifices than we currently do may not be nearly as difficult or demanding as we tend to think.

Editor's note: For psychologists, supererogatory actions may include political advocacy for greater access to care, pro bono treatment for underserved populations, and volunteering on state and national association committees.

Wednesday, August 10, 2022

Moral Expansiveness Around the World: The Role of Societal Factors Across 36 Countries

Kirkland, K., Crimston, C. R., et al. (2022).
Social Psychological and Personality Science.


What are the things that we think matter morally, and how do societal factors influence this? To date, research has explored several individual-level and historical factors that influence the size of our ‘moral circles.' There has, however, been less attention focused on which societal factors play a role. We present the first multi-national exploration of moral expansiveness—that is, the size of people’s moral circles across countries. We found low generalized trust, greater perceptions of a breakdown in the social fabric of society, and greater perceived economic inequality were associated with smaller moral circles. Generalized trust also helped explain the effects of perceived inequality on lower levels of moral inclusiveness. Other inequality indicators (i.e., Gini coefficients) were, however, unrelated to moral expansiveness. These findings suggest that societal factors, especially those associated with generalized trust, may influence the size of our moral circles.

From the Discussion section

We found a clear link between greater generalized trust and increased moral expansiveness within countries. Although we cannot be certain of causality, it may be that since trust is the glue that binds relationships, generalized trust may be a necessary ingredient before one can care for strangers and more distant entities. Furthermore, while perceptions of breakdown within leadership (i.e., that government is ineffective and illegitimate) were not predictive of the scope of moral expansiveness, greater perceptions of breakdown in social fabric (e.g., low trust and no shared moral standards) were linked to reduced MES scores. Together this suggests that the relationships between individuals in a society relate to the size of moral circles, as opposed to perceptions of those in power.

Low generalized trust was found to mediate the relationship between a higher perceived wealth gap between the rich and the poor and reduced moral expansiveness both within and between countries. Prior research has established that high economic inequality is related to reduced generalized trust (Oishi et al., 2011; Uslaner & Brown, 2005; Wilkinson & Pickett, 2007). This is the first work to show it may also be related to how we construct our moral world. However, experimental evidence or support from longitudinal data is needed before we can be certain about directionality. In contrast, perceptions of the breakdown in social fabric did not mediate the relationship between a higher perceived wealth gap between the rich and the poor and reduced moral expansiveness. Although a breakdown in social fabric is characterized by lower generalized trust between citizens, the social fabric concept also encompasses the perception that a shared moral standard among people is lacking (Teymoori et al., 2017). It thus appears to be the specific element of trust, rather than a breakdown in the social fabric more broadly, that mediates the relationship between the perceived wealth gap and moral expansiveness. Although we found a similar mediation effect at both levels of analysis, there was a non-significant tendency for a higher estimate of the wealth gap between countries to be related to greater moral expansiveness.

Tuesday, August 9, 2022

You can handle the truth: Mispredicting the consequences of honest communication

Levine, E. E., & Cohen, T. R. (2018).
Journal of Experimental Psychology: General, 
147(9), 1400–1429. 


People highly value the moral principle of honesty, and yet, they often avoid being honest with others. One reason people may avoid being completely honest is that honesty frequently conflicts with kindness: candidly sharing one’s opinions and feelings can hurt others and create social tension. In the present research, we explore the actual and predicted consequences of communicating honestly during difficult conversations. We compare honest communication to kind communication as well as a neutral control condition by randomly assigning individuals to be honest, kind, or conscious of their communication in every conversation with every person in their life for three days. We find that people significantly mispredict the consequences of communicating honestly: the experience of being honest is far more pleasurable, leads to greater levels of social connection, and does less relational harm than individuals expect. We establish these effects across two field experiments and two prediction experiments and we document the robustness of our results in a subsequent laboratory experiment. We explore the underlying mechanisms by qualitatively coding participants’ reflections during and following our experiments. This research contributes to our understanding of affective forecasting processes and uncovers fundamental insights on how communication and moral values shape well-being.

From the Discussion section

Our findings make several important contributions to our understanding of morality, affective forecasting, and human communication. First, we provide insight into why people avoid being honest with others. Our results suggest that individuals’ aversion to honesty is driven by a forecasting failure: Individuals expect honesty to be less pleasant and less socially connecting than it is. Furthermore, our studies suggest this is driven by individuals’ misguided fear of social rejection. Whereas prior work on mispredictions of social interactions has primarily examined how individuals misunderstand others or their preferences for interaction, the present research examines how individuals misunderstand others’ reactions to honest disclosure of thoughts and feelings, and how this shapes social communication.

Second, this research documents the broader consequences of being honest. Individuals’ predictions that honest communication would be less enjoyable and socially connecting than kind communication or one’s baseline communication were generally wrong. In the field experiment (Study 1a), participants in the honesty condition felt similar or higher levels of social connection relative to participants in the kindness and control conditions. Participants in the honesty condition also derived greater long-term hedonic well-being and greater relational improvements relative to participants in the control condition. Furthermore, participants in Study 2 reported increased meaning in their life one week after engaging in their brief, but intense, honest conversation. Scholars have long claimed that morality promotes well-being, but to our knowledge, this is the first research to document how enacting specific moral principles promotes different types of well-being.

Taken together, these findings suggest that individuals’ avoidance of honesty may be a mistake. By avoiding honesty, individuals miss out on opportunities that they appreciate in the long run, and that they would want to repeat. Individuals’ choices about how to behave – in this case, whether or not to communicate honestly – seem to be driven primarily by expectations of enjoyment, but appreciation for these behaviors is driven by the experience of meaning. We encourage future research to further examine how affective forecasting failures may prevent individuals from finding meaning in their lives.

See the link above to the research.

Monday, August 8, 2022

Why are people antiscience, and what can we do about it?

Philipp-Muller, A., Lee, S. W. S., & Petty, R. E.
PNAS (2022). 
DOI: 10.1073/pnas.2120755119.


From vaccination refusal to climate change denial, antiscience views are threatening humanity. When different individuals are provided with the same piece of scientific evidence, why do some accept whereas others dismiss it? Building on various emerging data and models that have explored the psychology of being antiscience, we specify four core bases of key principles driving antiscience attitudes. These principles are grounded in decades of research on attitudes, persuasion, social influence, social identity, and information processing. They apply across diverse domains of antiscience phenomena. Specifically, antiscience attitudes are more likely to emerge when a scientific message comes from sources perceived as lacking credibility; when the recipients embrace the social membership or identity of groups with antiscience attitudes; when the scientific message itself contradicts what recipients consider true, favorable, valuable, or moral; or when there is a mismatch between the delivery of the scientific message and the epistemic style of the recipient. Politics triggers or amplifies many principles across all four bases, making it a particularly potent force in antiscience attitudes. Guided by the key principles, we describe evidence-based counteractive strategies for increasing public acceptance of science.

Concluding Remarks

By offering an inclusive framework of key principles underlying antiscience attitudes, we aim to advance theory and research on several fronts: Our framework highlights basic principles applicable to antiscience phenomena across multiple domains of science. It predicts situational and personal variables (e.g., moralization, attitude strength, and need for closure) that amplify people’s likelihood and intensity of being antiscience. It unpacks why politics is such a potent force with multiple aspects of influence on antiscience attitudes. And it suggests a range of counteractive strategies that target each of the four bases. Beyond explaining, predicting, and addressing antiscience views, our framework raises unresolved questions for future research.

With the prevalence of antiscience attitudes, scientists and science communicators face strong headwinds in gaining and sustaining public trust and in conveying scientific information in ways that will be accepted and integrated into public understanding. It is a multifaceted problem that ranges from erosions in the credibility of scientists to conflicts with the identities, beliefs, attitudes, values, morals, and epistemic styles of different portions of the population, exacerbated by the toxic ecosystem of the politics of our time. Scientific information can be difficult to swallow, and many individuals would sooner reject the evidence than accept information that suggests they might have been wrong. This inclination is wholly understandable, and scientists should be poised to empathize. After all, we are in the business of being proven wrong, but that must not stop us from helping people get things right.

Sunday, August 7, 2022

Communication Strategies for Moral Rebels: How to Talk About Change in Order to Inspire Self-Efficacy in Others

Brouwer, C., Bolderdijk, J.-W., Cornelissen, G., 
& Kurz, T. (2022). WIREs Climate Change, e781.


Current carbon-intensive lifestyles are unsustainable and drastic social changes are required to combat climate change. To achieve such change, moral rebels (i.e., individuals who deviate from current behavioral norms based on ethical considerations) may be crucial catalyzers. However, the current literature holds that moral rebels may do more harm than good. By deviating from what most people do, based on a moral concern, moral rebels pose a threat to the moral self-view of their observers who share but fail to uphold that concern. Those observers may realize that their behavior does not live up to their moral values, and feel morally inadequate as a result. Work on “do-gooder derogation” demonstrates that rebel-induced threat can elicit defensive reactance among observers, resulting in the rejection of moral rebels and their behavioral choices. Such findings suggest that advocates for social change should avoid triggering moral threat by, for example, presenting nonmoral justifications for their choices. We challenge this view by arguing that moral threat may be a necessary ingredient to achieve social change precisely because it triggers ethical dissonance. Thus, instead of avoiding moral justifications, it may be more effective to harness that threat. Ethical dissonance may offer the fuel needed for observers to engage in self-improvement after being exposed to moral rebels, provided that observers feel capable of changing. Whether or not observers feel capable of changing, however, depends on how rebels communicate their moral choices to others—how they talk about change.

From the Conclusion

The theories reviewed point to the crucial importance of people feeling confident about their capabilities to change when they are confronted with their own perceived shortcomings. That is, rebel-induced dissonance must be accompanied by perceived self-efficacy (i.e., the belief that one is capable of change). Thus, rather than avoiding presenting a threat to others' moral self-views by, for example, using morally neutral justifications, we proposed that moral rebels should harness that threat, provided they talk about change using words of encouragement that help inspire perceived self-efficacy in others.

To that end, we recommended that moral rebels ensure observers can preserve their belief in being a good person, despite their moral hiccups, and avoid discouraging them about the capabilities needed for self-improvement. First, rebels should make observers more aware that their habitual choices incidentally produce harmful outcomes, while avoiding the suggestion that morally suboptimal actions result from bad intentions, for instance by signaling self-compassion. Second, moral rebels could inspire self-efficacy by emphasizing that one's abilities can be developed in the pursuit of self-improvement and are not fixed traits that render one either born to succeed or doomed to fail. Finally, it may be more fruitful to focus on the "baby steps" it takes to reach a higher self-defining goal by promoting maximal moral standards (e.g., praising incremental changes in observers' behaviors) rather than minimal moral standards (e.g., requiring observers to make radical lifestyle changes to gain any moral cachet). In sum, these strategies aim to keep observers from lapsing into a debilitating state of harsh self-criticism or feeling overwhelmed by the required change, and instead to help them believe that they, too, have the capabilities required for self-improvement.