Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
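To make that loop concrete, here is a minimal sketch (not the authors' implementation; the toy environment, rewards, and feedback wording are invented for illustration) of the two core steps: solving a formally modeled decision problem for its optimal strategy, and using that strategy to give immediate feedback on a learner's choice.

```python
# Hypothetical sketch: derive an optimal strategy for a toy decision problem by value
# iteration, then use it to generate feedback on a learner's choice. The environment,
# rewards, and feedback wording are illustrative assumptions, not the paper's tutor.
import numpy as np

# Toy MDP: 3 states, 2 actions. transitions[s, a] = next state; rewards[s, a] = reward.
transitions = np.array([[1, 2],
                        [2, 0],
                        [2, 2]])            # state 2 is absorbing
rewards = np.array([[-1.0, 0.5],
                    [ 2.0, 0.0],
                    [ 0.0, 0.0]])
gamma = 0.9                                 # discount factor

# Value iteration: back up the long-term value of each action until convergence.
V = np.zeros(3)
for _ in range(200):
    Q = rewards + gamma * V[transitions]    # Q[s, a] = r(s, a) + gamma * V(s')
    V = Q.max(axis=1)
optimal_action = Q.argmax(axis=1)           # the discovered decision strategy

def metacognitive_feedback(state, chosen_action):
    """Compare the learner's choice with the optimal strategy and quantify the gap."""
    best = optimal_action[state]
    if chosen_action == best:
        return "Good choice: that action maximizes long-term value."
    loss = Q[state, best] - Q[state, chosen_action]
    return f"Suboptimal: action {best} would be worth {loss:.2f} more in the long run."

print(metacognitive_feedback(state=0, chosen_action=1))
```

The actual tutor works with far richer models of human planning, but the basic loop is the same: model the problem, derive the optimal strategy, then coach the learner against it.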

Saturday, April 16, 2022

Morality, punishment, and revealing other people’s secrets.

Salerno, J. M., & Slepian, M. L. (2022).
Journal of Personality & Social Psychology, 
122(4), 606–633. 
https://doi.org/10.1037/pspa0000284

Abstract

Nine studies represent the first investigation into when and why people reveal other people’s secrets. Although people keep their own immoral secrets to avoid being punished, we propose that people will be motivated to reveal others’ secrets to punish them for immoral acts. Experimental and correlational methods converge on the finding that people are more likely to reveal secrets that violate their own moral values. Participants were more willing to reveal immoral secrets as a form of punishment, and this was explained by feelings of moral outrage. Using hypothetical scenarios (Studies 1, 3–6), two controversial events in the news (hackers leaking citizens’ private information; Study 2a–2b), and participants’ behavioral choices to keep or reveal thousands of diverse secrets that they learned in their everyday lives (Studies 7–8), we present the first glimpse into when, how often, and one explanation for why people reveal others’ secrets. We found that theories of self-disclosure do not generalize to others’ secrets: Across diverse methodologies, including real decisions to reveal others’ secrets in everyday life, people reveal others’ secrets as punishment in response to moral outrage elicited from others’ secrets.

From the Discussion

Our data serve as a warning flag: one should be aware of a potential confidant’s views with regard to the morality of the behavior. Across 14 studies (Studies 1–8; Supplemental Studies S1–S5), we found that people are more likely to reveal other people’s secrets to the degree that they, personally, view the secret act as immoral. Emotional reactions to the immoral secrets explained this effect, such as moral outrage as well as anger and disgust, which were associated correlationally and experimentally with revealing the secret as a form of punishment. People were significantly more likely to reveal the same secret if the behavior was done intentionally (vs. unintentionally), if it had gone unpunished (vs. already punished by someone else), and in the context of a moral framing (vs. no moral framing). These experiments suggest a causal role for both the degree to which the secret behavior is immoral and the participants’ desire to see the behavior punished.  Additionally, we found that this psychological process did not generalize to non-secret information. Although people were more likely to reveal both secret and non-secret information when they perceived it to be more immoral, they did so for different reasons: as an appropriate punishment for the immoral secrets, and as interesting fodder for gossip for the immoral non-secrets.

Friday, April 15, 2022

Strategic identity signaling in heterogeneous networks

T. van der Does, M. Galesic, et al.
PNAS, 2022, 119 (10) e2117898119

Abstract

Individuals often signal identity information to facilitate assortment with partners who are likely to share norms, values, and goals. However, individuals may also be incentivized to encrypt their identity signals to avoid detection by dissimilar receivers, particularly when such detection is costly. Using mathematical modeling, this idea has previously been formalized into a theory of covert signaling. In this paper, we provide an empirical test of the theory of covert signaling in the context of political identity signaling surrounding the 2020 US presidential elections. To identify likely covert and overt signals on Twitter, we use methods relying on differences in detection between ingroup and outgroup receivers. We strengthen our experimental predictions with additional mathematical modeling and examine the usage of selected covert and overt tweets in a behavioral experiment. We find that participants strategically adjust their signaling behavior in response to the political constitution of their audiences. These results support our predictions and point to opportunities for further theoretical development. Our findings have implications for our understanding of political communication, social identity, pragmatics, hate speech, and the maintenance of cooperation in diverse populations.

Significance

Much of online conversation today consists of signaling one’s political identity. Although many signals are obvious to everyone, others are covert, recognizable to one’s ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse and open doors to a better understanding of communication in politically polarized societies.
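The detection idea can be illustrated with a small sketch. Suppose, hypothetically, that each tweet has been shown to ingroup and outgroup judges who guess the author's political side: a signal reads as overt when both groups recognize it, and as covert when the ingroup recognizes it but the outgroup largely misses it. The data structure and thresholds below are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical scoring rule for covert vs. overt identity signals, based on the
# difference in detection rates between ingroup and outgroup raters.
from dataclasses import dataclass

@dataclass
class TweetRatings:
    tweet_id: str
    ingroup_correct: int    # ingroup raters who identified the author's side
    ingroup_total: int
    outgroup_correct: int   # outgroup raters who identified the author's side
    outgroup_total: int

def classify_signal(r: TweetRatings, overt_floor=0.7, covert_gap=0.3):
    """Overt: both groups detect the signal; covert: ingroup detects it, outgroup misses it."""
    ingroup_rate = r.ingroup_correct / r.ingroup_total
    outgroup_rate = r.outgroup_correct / r.outgroup_total
    if ingroup_rate >= overt_floor and outgroup_rate >= overt_floor:
        return "overt"
    if ingroup_rate - outgroup_rate >= covert_gap:
        return "covert"
    return "ambiguous"

print(classify_signal(TweetRatings("t1", 18, 20, 6, 20)))   # -> "covert"
```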

From the Discussion

The theory predicts that individuals should use more covert signaling in more heterogeneous groups or when they are in the minority. We found support for this prediction in the ways people shared political speech in a behavioral experiment. We observed the highest levels of covert signaling when audiences consisted almost entirely of cross-partisans, supporting the notion that covert signaling is a strategy for avoiding detection by hostile outgroup members. Of note, we selected tweets for our study at a time of heightened partisan divisions: the four weeks preceding the 2020 US presidential election. Consequently, these tweets mostly discussed the opposing political party. This focus was reflected in our behavioral experiment, in which we did not observe an effect of audience composition when all members were (more or less extreme) copartisans. In that societal context, participants might have perceived the cost of dislikes to be minimal and have likely focused on partisan disputes in their real-life conversations happening around that time. Future work testing the theory of covert signaling should also examine signaling strategies in copartisan conversations during times of salient intragroup political divisions.


Editor's Note: Wondering if this research generalizes to other covert forms of communication during psychotherapy.

Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the Middle Ages, however, more complex, ‘heavy’ ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient-rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial Revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves.

Wednesday, April 13, 2022

Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors.

Marie, A., & Petersen, M. (2022, March 4). 
https://doi.org/10.31219/osf.io/k7u68

Abstract

Many assume that if citizens become more inclined to moralize the values of evidence-based and logical thinking, political hostility and conspiracy theories would be less widespread. Across two large surveys (N = 3675) run in the U.S.A. in 2021 (one exploratory and one preregistered), we provide the first demonstration that moralization of rationality can actually stimulate the spread of conspiratorial and hostile news. This reflects the fact that the moralization of rationality can be highly interrelated with status seeking, corroborating arguments that self-enhancing strategies often advance hidden behind claims to objectivity and morality. In contrast to moral grandstanding on the issue of rationality, our studies find robust evidence that intellectual humility (i.e., the awareness that intuitions are fallible and that suspending critique is often desirable) may immunize people from sharing and believing hostile conspiratorial news. All associations generalized to hostile conspiratorial news both “fake” and anchored in real events.

General Discussion

Many observers assume that citizens more morally sensitized to the values of evidence-based and methodic thinking would be better protected from the perils of political polarization, conspiracy theories, and “fake news.” Yet, attention to the discourse of individuals who pass along politically hostile and conspiratorial claims suggests that they often sincerely believe themselves to be free and independent “critical thinkers”, and to care more about “facts” than the “unthinking sheep” to which they assimilate most of the population (Harambam & Aupers, 2017).

Across two large online surveys (N = 3675) conducted in the context of the highly polarized U.S.A. of 2021, we provide the first piece of evidence that moralizing epistemic rationality—a motivation for rationality defined in the abstract—may stimulate the dissemination of hostile conspiratorial views. Specifically, respondents who reported viewing the grounding of one’s beliefs in evidence and logic as a moral virtue (Ståhl et al., 2016) were more likely to share hostile conspiratorial news targeting their political opponents on social media than individuals low on this trait. Importantly, the effect generalized to two types of news stories overtly targeting the participant’s outgroup: (false) news making entirely fabricated claims, and news anchored in real events.

Tuesday, April 12, 2022

The Affective Harm Account (AHA) of Moral Judgment: Reconciling Cognition and Affect, Dyadic Morality and Disgust, Harm and Purity

Kurt Gray, Jennifer K. MacCormack, et al.
In Press (2022)
Journal of Personality and Social Psychology

Abstract

Moral psychology has long debated whether moral judgment is rooted in harm vs. affect. We reconcile this debate with the Affective Harm Account (AHA) of moral judgment. The AHA understands harm as an intuitive perception (i.e., perceived harm), and divides “affect” into two: embodied visceral arousal (i.e., gut feelings) and stimulus-directed affective appraisals (e.g., ratings of disgustingness). The AHA was tested in a randomized, double-blind pharmacological experiment with healthy young adults judging the immorality, harmfulness, and disgustingness of everyday moral scenarios (e.g., lying) and unusual purity scenarios (e.g., sex with a corpse) after receiving either a placebo or the beta-blocker propranolol (a drug that dampens visceral arousal). Results confirmed the three key hypotheses of the AHA. First, perceived harm and affective appraisals are neither competing nor independent but intertwined. Second, although both perceived harm and affective appraisals predict moral judgment, perceived harm is consistently relevant across all scenarios (in line with the Theory of Dyadic Morality), whereas affective appraisals are especially relevant in unusual purity scenarios (in line with affect-as-information theory). Third, the “gut feelings” of visceral arousal are not as important to morality as often believed. Dampening visceral arousal (via propranolol) did not directly impact moral judgment, but instead changed the relative contribution of affective appraisals to moral judgment—and only in unusual purity scenarios. By embracing a constructionist view of the mind that blurs traditional dichotomies, the AHA reconciles historic harm-centric and current affect-centric theories, parsimoniously explaining judgment differences across various moral scenarios without requiring any “moral foundations.”

Discussion

Moral psychology has long debated whether moral judgment is grounded in affect or harm. Seeking to reconcile these apparently competing perspectives, we have proposed an Affective Harm Account (AHA) of moral judgment. This account is conciliatory because it highlights the importance of both perceived harm and affect, not as competing considerations but as joint partners—two different horses yoked together pulling the cart of moral judgment.

The AHA also adds clarity to the previously murky nature of “affect” in moral psychology, differentiating it both in nature and measurement as (at least) two phenomena—embodied, free-floating, visceral arousal (i.e., “gut feelings”) and self-reported, context-bound, affective appraisals (i.e., “this situation is gross”). The importance of affect in moral judgment—especially the “gut feelings” of visceral arousal—was tested via administration of propranolol, which dampens visceral arousal via beta-adrenergic receptor blockade. Importantly, propranolol allows us to manipulate more general visceral arousal (rather than targeting a specific organ, like the gut, or a specific state, like nausea). This increases the potential generalizability of these findings to other moral scenarios (beyond disgust) where visceral arousal might be relevant. We measured the effect of propranolol (vs. placebo) on ratings of moral condemnation, perceived harm, and affective appraisals (i.e., operationalized as ratings of disgust, as in much past work). These ratings were obtained for both everyday moral scenarios (Hofmann et al., 2018)—which are dyadic in structure and thus obviously linked to harm—and for unusual purity scenarios, which are frequently linked to affective appraisals of disgust (Horberg et al., 2009). This study offers support for the three hypotheses of the AHA.

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 
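As a rough illustration of that decomposition, the sketch below simulates trial-by-trial data and regresses changes of mind on the participant's confidence (the informational component) and on how much the partner previously reciprocated influence (the normative component), separately for human and computer partners. Variable names and effect sizes are invented for illustration; this is not the study's data or analysis.

```python
# Illustrative decomposition of conformity into informational and normative components
# on simulated data. All variables and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
confidence = rng.normal(size=n)             # informational: low confidence -> larger change of mind
reciprocity = rng.normal(size=n)            # normative: how much the partner took my advice before
human_partner = rng.integers(0, 2, size=n)  # 1 = human partner, 0 = computer partner

# Simulated rule: confidence matters for both partners; reciprocity matters only with humans.
change_of_mind = (-0.8 * confidence
                  + 0.5 * reciprocity * human_partner
                  + rng.normal(scale=0.5, size=n))

def slopes(mask):
    """Least-squares slopes of change of mind on confidence and reciprocity."""
    X = np.column_stack([np.ones(mask.sum()), confidence[mask], reciprocity[mask]])
    return np.linalg.lstsq(X, change_of_mind[mask], rcond=None)[0][1:]

print("human partner   :", np.round(slopes(human_partner == 1), 2))  # both effects present
print("computer partner:", np.round(slopes(human_partner == 0), 2))  # reciprocity effect near 0
```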

Sunday, April 10, 2022

The habituation fallacy: Disaster victims who are repeatedly victimized are assumed to suffer less, and they are helped less

Hanna Zagefka
European Journal of Social Psychology
First published: 09 February 2022

Abstract

This paper tests the effects of lay beliefs that disaster victims who have been victimized by other events in the past will cope better with a new adverse event than first-time victims. It is shown that believing that disaster victims can get habituated to suffering reduces helping intentions towards victims of repeated adversity, because repeatedly victimized victims are perceived to be less traumatized by a new adverse event. In other words, those who buy into habituation beliefs will impute less trauma and suffering to repeated victims compared to first-time victims, and they will therefore feel less inclined to help those repeatedly victimized victims. This was demonstrated in a series of six studies, two of which were preregistered (total N = 1,010). Studies 1, 2 and 3 showed that beliefs that disaster victims become habituated to pain do indeed exist among lay people. Such beliefs are factually inaccurate, because repeated exposure to severe adversity makes it harder, not easier, for disaster victims to cope with a new negative event. Therefore, we call this belief the ‘habituation fallacy’. Studies 2, 3 and 4 demonstrated an indirect negative effect of a belief in the ‘habituation fallacy’ on ‘helping intentions’, via lesser ‘trauma’ ascribed to victims who had previously been victimized. Studies 5 and 6 demonstrated that a belief in the ‘habituation fallacy’ causally affects trauma ascribed to, and helping intentions towards, repeatedly victimized victims, but not first-time victims. The habituation fallacy can potentially explain reluctance to donate to humanitarian causes in those geographical areas that frequently fall prey to disasters.
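The indirect effect described here can be made concrete with a small simulation: belief in the habituation fallacy lowers the trauma ascribed to repeat victims, which in turn lowers helping intentions. The variable names, effect sizes, and simple two-regression estimator below are illustrative assumptions, not the paper's data or models.

```python
# Illustrative mediation sketch on simulated data: habituation belief (X) -> ascribed
# trauma (M) -> helping intentions (Y). Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
habituation_belief = rng.normal(size=n)                       # predictor X
trauma = -0.5 * habituation_belief + rng.normal(size=n)       # mediator M: less trauma ascribed
helping = 0.6 * trauma + rng.normal(size=n)                   # outcome Y: less helping

def ols_slope(x, y):
    """Slope from a simple least-squares regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = ols_slope(habituation_belief, trauma)                     # path a: X -> M
Xb = np.column_stack([np.ones(n), trauma, habituation_belief])
b = np.linalg.lstsq(Xb, helping, rcond=None)[0][1]            # path b: M -> Y, controlling for X
print(f"indirect effect (a*b) ≈ {a * b:.2f}")                 # negative: belief -> less trauma -> less helping
```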

From the General Discussion

Taken together, these studies show a tendency to believe in the habituation fallacy. That is, people might believe that victims who have previously suffered are less affected by new adversity than victims who are first-time sufferers. Buy-in to the habituation fallacy means that victims of repeated adversity are assumed to suffer less, and that they are consequently helped less. Consistent evidence for this was found across six studies, two of which were preregistered.

These results are important and add to the extant literature in significant ways.  Many factors have been discussed as driving disaster giving (see e.g., Albayrak, Aydemir, & Gleibs, 2021; Bekkers & Wiepking, 2011; Berman et al., 2018; Bloom, 2017; Cuddy et al., 2007; Dickert et al., 2011; Evangelidis & Van den Bergh, 2013; Hsee et al., 2013; Kogut, 2011; Kogut et al., 2015; van Leeuwen & Täuber, 2012; Zagefka & James, 2015).  Significant perceived suffering caused by an event is clearly a powerful factor that propels donors into action. However, although lay beliefs about disasters have been studied, lay beliefs about suffering by the victims have been neglected so far. Moreover, although clearly some areas of the world are visited more frequently by disasters than others, the potential effects of this on helping decisions have not previously been studied.

The present paper therefore addresses an important gap, by linking lay beliefs about disasters to both perceived previous victimization and perceived suffering of the victims.  Clearly, helping decisions are driven by emotional and often biased factors (Bloom, 2017), and this contribution sheds light on an important mechanism that is both affective and potentially biased in nature, thereby advancing our understanding of donor motivations (Chapman et al., 2020).