Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, August 20, 2022

Truth by Repetition … without repetition: Testing the effect of instructed repetition on truth judgments

Mattavelli, S., Corneille, O., & Unkelbach, C.
Journal of Experimental Psychology: Learning, Memory, and Cognition
June 2022

Abstract

Past research indicates that people judge repeated statements as more true than new ones. An experiential consequence of repetition that may underlie this “truth effect” is processing fluency: processing statements feels easier following their repetition. In three preregistered experiments (N = 684), we examined the effect of merely instructed repetition (i.e., not experienced) on truth judgments. Experiments 1–2 instructed participants that some statements were present (vs. absent) in an exposure phase allegedly undergone by other individuals. We then asked them to rate such statements based on how they thought those individuals would have rated them. Overall, participants rated repeated statements as more true than new statements. The instruction-based repetition effects were significant but also significantly weaker than those elicited by the experience of repetition (Experiments 1 & 2). Additionally, Experiment 2 clarified that adding a repetition-status tag in the experienced repetition condition did not impact truth judgments. Experiment 3 further showed that the instruction-based effect was still detectable when participants provided truth judgments for themselves rather than estimating other people’s judgments. We discuss the mechanisms that can explain these effects and their implications for advancing our understanding of the truth effect.

(Beginning of the) General Discussion 

Deciding whether information is true or false is a challenging task. Extensive research has shown that one key variable people often use to judge the truth of a statement is repetition (e.g., Hasher et al., 1977): repeated statements are judged more true than new ones (see Dechêne et al., 2010). Virtually all explanations of this truth effect refer to the processing consequences of repetition: higher recognition rates than for new statements, higher familiarity, and higher fluency (see Unkelbach et al., 2019). However, in many communication situations, people get to know that a statement is repeated (e.g., that it has occurred frequently) without prior exposure to the statement. Here, we asked whether repetition can be used as a cue for truth without prior exposure, and thus in the absence of experiential consequences of repetition such as fluency.

Conclusion 

This work represents the first attempt to assess the impact of instructed repetition on truth judgments. Across three experiments, we found that the truth effect was stronger when repetition was experienced rather than merely instructed. However, we provided initial evidence that a component of the effect is unrelated to the experience of repetition. A truth effect was still detectable in the absence of any internal cue (i.e., fluency) induced by the experienced repetition of the statement and, therefore, should be conditional upon learning history or naïve beliefs. This finding paves the way for new research avenues aimed at isolating the unique contributions of known repetition and experienced fluency to truth judgments.


This research has multiple applications to psychotherapy, including how patients come to know what information about themselves and others is true, and how much of that knowledge rests on repetition versus internal cues, beliefs, or feelings. Human beings are meaning-makers, and they try to assess how the world functions based on the meanings they project onto others.

Friday, August 19, 2022

Too cynical to reconnect: Cynicism moderates the effect of social exclusion on prosociality through empathy

B. K. C. Choy, K. Eom, & N. P. Li
Personality and Individual Differences
Volume 178, August 2021, 110871

Abstract

Extant findings are mixed on whether social exclusion impacts prosociality. We propose one factor that may underlie the mixed results: cynicism. Specifically, cynicism may moderate the exclusion–prosociality link by influencing interpersonal empathy. Compared to less cynical individuals, we expected highly cynical individuals who were excluded to experience less empathy and, consequently, to show less prosocial behavior. In an online ball-tossing game, participants were randomly assigned to an exclusion or inclusion condition. Consistent with our predictions, the effect of social exclusion on prosociality through empathy was contingent on cynicism, such that only less-cynical individuals responded to exclusion with greater empathy, which, in turn, was associated with higher levels of prosocial behavior. We further showed this effect to hold for cynicism, but not for other similar traits typically characterized by high disagreeableness. Findings contribute to the social exclusion literature by suggesting a key variable that may moderate social exclusion's impact on resultant empathy and prosocial behavior, and are consistent with the perspective that people who are excluded try not only to become included again but also to establish alliances characterized by reciprocity.

From the Discussion

While others have proposed that empathy may be reflexively inhibited upon exclusion (DeWall & Baumeister, 2006; Twenge et al., 2007), our findings indicate that this process of inhibition—at least for empathy—may be more flexible than previously thought. If reflexive, individuals would have shown a similar level of empathy regardless of cynicism. That highly- and less-cynical individuals displayed different levels of empathy indicates that some other processes are in play. Our interpretation is that the process through which empathy is exhibited or inhibited may depend on one’s appraisals of the physical and social situation. 

Importantly, unlike cynicism, other similarly disagreeable dispositional traits such as Machiavellianism, psychopathy, and SDO (Social Dominance Orientation) did not modulate the empathy-mediated link between social exclusion and prosociality. This suggests that cynicism is conceptually different from other traits of a seemingly negative nature. Indeed, whereas cynics may hold a negative view of the intentions of others around them, Machiavellians are characterized by a negative view of others’ competence and a pragmatic and strategic approach to social interactions (Jones, 2016). Similarly, whereas cynics view others’ emotions as insincere, psychopathic individuals are further distinguished by their high levels of callousness and impulsivity (Paulhus, 2014). Likewise, whereas cynics may view the world as inherently competitive, they may not display the same preference for hierarchy that high-SDO individuals do (Ho et al., 2015). Thus, despite the similarities between these traits, our findings affirm their substantive differences from cynicism.

Thursday, August 18, 2022

Dunning–Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence

Pennycook, G., Ross, R. M., Koehler, D. J., et al.
Psychonomic Bulletin & Review, 24, 1774–1784 (2017).
https://doi.org/10.3758/s13423-017-1242-7

Abstract

The Dunning–Kruger effect refers to the observation that the incompetent are often ill-suited to recognize their incompetence. Here we investigated potential Dunning–Kruger effects in high-level reasoning and, in particular, focused on the relative effectiveness of metacognitive monitoring among particularly biased reasoners. Participants who made the greatest numbers of errors on the cognitive reflection test (CRT) overestimated their performance on this test by a factor of more than 3. Overestimation decreased as CRT performance increased, and those who scored particularly high underestimated their performance. Evidence for this type of systematic miscalibration was also found on a self-report measure of analytic-thinking disposition. Namely, genuinely nonanalytic participants (on the basis of CRT performance) overreported their “need for cognition” (NC), indicating that they were dispositionally analytic when their objective performance indicated otherwise. Furthermore, estimated CRT performance was just as strong a predictor of NC as was actual CRT performance. Our results provide evidence for Dunning–Kruger effects both in estimated performance on the CRT and in self-reported analytic-thinking disposition. These findings indicate that part of the reason why people are biased is that they are either unaware of or indifferent to their own bias.
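
A note on the “factor of more than 3”: this reads most naturally as a ratio of estimated to actual scores, though the abstract does not spell out the calculation. As a purely hypothetical illustration (these numbers are not taken from the study), a participant who solves 2 of 7 CRT items but estimates 6 correct answers has overestimated by a factor of 3:

\[
\text{overestimation factor} \;=\; \frac{\text{estimated score}}{\text{actual score}} \;=\; \frac{6}{2} \;=\; 3
\]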

General discussion

Our results provide empirical support for Dunning–Kruger effects in both estimates of reasoning performance and self-reported thinking disposition. Particularly intuitive individuals greatly overestimated their performance on the CRT—a tendency that diminished and eventually reversed among increasingly analytic individuals. Moreover, self-reported analytic-thinking disposition—as measured by the Ability and Engagement subscales of the NC scale—was at least as strongly correlated with estimated CRT performance as with actual CRT performance. In addition, an analysis using an additional performance-based measure of analytic thinking—the heuristics-and-biases battery—revealed a systematic miscalibration of self-reported NC, wherein relatively intuitive individuals report that they are more analytic than is justified by their objective performance. Together, these findings indicate that participants who are low in analytic thinking (so-called “intuitive thinkers”) are at least somewhat unaware of (or unresponsive to) their propensity to rely on intuition in lieu of analytic thought during decision making. This conclusion is consistent with previous research suggesting that the propensity to think analytically facilitates metacognitive monitoring during reasoning (Pennycook et al., 2015b; Thompson & Johnson, 2014). Those who are genuinely analytic are aware of the strengths and weaknesses of their reasoning, whereas those who are genuinely nonanalytic are perhaps best described as “happy fools” (De Neys et al., 2013).

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted 16 JUL 22

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods, or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as it becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., & Young, L. (2022, May 27).
https://doi.org/10.31234/osf.io/hsjta

Abstract

Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we show experimentally that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism for the way that the observability of actions impacts virtue discounting.

We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.
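
The 72.6% figure is a proportion-mediated statistic. The excerpt does not restate the estimator, but such figures are conventionally computed as the indirect effect divided by the total effect; the sketch below uses that standard convention, with a, b, and c′ denoting the usual mediation paths rather than values reported by the authors:

\[
\text{proportion mediated} \;=\; \frac{a\,b}{c} \;=\; \frac{a\,b}{c' + a\,b}
\]

where a is the path from observability to motivational inferences, b is the path from inferences to ratings of moral goodness, c′ is the direct effect of observability, and c is the total effect.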

Monday, August 15, 2022

Modular Morals: Mapping the organisation of the moral brain

Wilkinson, J., Curry, O. S., et al.
OSF Home
Last Updated: 2022-07-12

Abstract

Is morality the product of multiple domain-specific psychological mechanisms, or one domain-general mechanism? Previous research suggests that morality consists of a range of solutions to the problems of cooperation recurrent in human social life. This theory of ‘morality as cooperation’ suggests that there are (at least) seven specific moral domains: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. However, it is unclear how these types of morality are implemented at the neuroanatomical level. The possibilities are that morality is (1) the product of multiple distinct domain-specific adaptations for cooperation, (2) the product of a single domain-general adaptation which learns a range of moral rules, or (3) the product of some combination of domain-specific and domain-general adaptations. To distinguish between these possibilities, we first conducted an anatomical likelihood estimation meta-analysis of previous studies investigating the relationship between these seven moral domains and neuroanatomy. This meta-analysis provided evidence for a combination of specific and general adaptations. Next, we investigated the relationship between the seven types of morality – as measured by the Morality as Cooperation Questionnaire (Relevance) – and grey matter volume in a large neuroimaging (n=607) sample. No associations between moral values and grey matter volume survived whole-brain exploratory testing. We conclude that whatever combination of mechanisms are responsible for morality, either they are not neuroanatomically localised, or else their localisation is not manifested in grey matter volume. Future research should employ phylogenetically informed a priori predictions, as well as alternative measures of morality and of brain function.

Sunday, August 14, 2022

Political conspiracy theories as tools for mobilization and signaling

Marie, A., & Petersen, M. B. (2022).
Current Opinion in Psychology, 101440

Abstract

Political conspiracist communities emerge and bind around hard-to-falsify narratives about political opponents or elites convening to secretly exploit the public in contexts of perceived political conflict. While the narratives appear descriptive, we propose that their content as well as the cognitive systems regulating their endorsement and dissemination may have co-evolved, at least in part, to reach coalitional goals: To drive allies’ attention to the social threat to increase their commitment and coordination for collective action, and to signal devotion to gain within-group status. Those evolutionary social functions may be best fulfilled if individuals endorse the conspiratorial narrative sincerely.

Highlights

•  Political conspiracist groups unite around clear-cut and hard-to-falsify narratives about political opponents or elites secretly organizing to deceive and exploit the public.

•  Such social threat-based narratives and the cognitive systems that regulate them may have co-evolved, at least in part, to serve social rather than epistemic functions: facilitating ingroup recruitment, coordination, and signaling for cooperative benefits.

•  While social in nature, those adaptive functions may be best fulfilled if group leaders and members endorse conspiratorial narratives sincerely.

Conclusions

Political conspiracy theories are cognitively attractive, hard-to-falsify narratives about the secret misdeeds of political opponents and elites. While these narratives appear descriptive, their endorsement and expression may be regulated, at least partly, by cognitive systems pursuing social goals: to attract the attention of allies toward a social threat in order to enhance commitment and coordination for joint action (in particular, in conflict), and to signal devotion in order to gain within-group status.

Rather than constituting a special category of cultural beliefs, we see political conspiracy theories as part of a wider family of abstract ideological narratives denouncing how evil actors, villains, or an oppressive system—more or less real and clearly delineated—exploit a virtuous victim group. This family also comprises anti-capitalist vs. anti-communist or religious propaganda, white-supremacist vs. anti-racist discourses, etc. Future research should explore the content properties that make those threat-based narratives compelling; the balance between their hypothetical social functions of signaling, commitment, and coordination enhancement; and the factors moderating their spread (such as intellectual humility and beliefs that the outgroup does not hate the ingroup).

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D.A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375

Abstract

Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.

Conclusion

In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Friday, August 12, 2022

Cross-Cultural Differences and Similarities in Human Value Instantiation

Hanel, P. H. P., Maio, G. R., et al. (2018).
Frontiers in Psychology, 29 May 2018
Sec. Personality and Social Psychology
https://doi.org/10.3389/fpsyg.2018.00849

Abstract

Previous research found that the within-country variability of human values (e.g., equality and helpfulness) clearly outweighs between-country variability. Across three countries (Brazil, India, and the United Kingdom), the present research tested in student samples whether between-nation differences reside more in the behaviors used to concretely instantiate (i.e., exemplify or understand) values than in their importance as abstract ideals. In Study 1 (N = 630), we found several meaningful between-country differences in the behaviors that were used to concretely instantiate values, alongside high within-country variability. In Study 2 (N = 677), we found that participants were able to match instantiations back to the values from which they were derived, even if the behavioral instantiations were spontaneously produced only by participants from another country or were created by us. Together, these results support the hypothesis that people in different nations can differ in the behaviors that are seen as typical instantiations of values, while holding similar ideas about the abstract meaning of the values and their importance.

Conclusion

Overall, Study 1 revealed that most examples that are spontaneously attached to values vary in how much they are shaped by context. In most cases, within-country variability outweighed between-country differences. Nevertheless, many of the instances for which between-country differences were found could be linked to contextual factors. In Study 2, we found that most instantiations that had been spontaneously produced by participants in another country could reliably be matched to the values that they exemplified. Taken together, our results further challenge “the prevailing conception of culture as shared meaning system” (Schwartz, 2014, p. 5), as long as culture is equated with country or nation: the within-country variability outweighs the between-country variability, similar to values on an abstract level (Fischer and Schwartz, 2011). In other words, people endorse the same values to a similar extent across countries and also instantiate them similarly. We hope this research helps to lay a foundation for future research examining these differences and their implications for intercultural understanding and communication.