Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Overconfidence. Show all posts

Sunday, October 15, 2023

Bullshit blind spots: the roles of miscalibration and information processing in bullshit detection

Shane Littrell & Jonathan A. Fugelsang
(2023) Thinking & Reasoning
DOI: 10.1080/13546783.2023.2189163

Abstract

The growing prevalence of misleading information (i.e., bullshit) in society carries with it an increased need to understand the processes underlying many people’s susceptibility to falling for it. Here we report two studies (N = 412) examining the associations between one’s ability to detect pseudo-profound bullshit, confidence in one’s bullshit detection abilities, and the metacognitive experience of evaluating potentially misleading information. We find that people with the lowest (highest) bullshit detection performance overestimate (underestimate) their detection abilities and overplace (underplace) those abilities when compared to others. Additionally, people reported using both intuitive and reflective thinking processes when evaluating misleading information. Taken together, these results show that both highly bullshit-receptive and highly bullshit-resistant people are largely unaware of the extent to which they can detect bullshit and that traditional miserly processing explanations of receptivity to misleading information may be insufficient to fully account for these effects.


Here's my summary:

The authors of the article argue that people have two main blind spots when it comes to detecting bullshit: miscalibration and information processing. Miscalibration is a mismatch between actual and perceived detection ability: the worst bullshit detectors overestimate their skill, while the best detectors tend to underestimate theirs.

Information processing refers to the way we evaluate claims when forming judgments. The authors argue that we are more likely to be fooled by bullshit when we are not paying close attention or when we process information quickly and intuitively.

The authors also discuss some strategies for overcoming these blind spots. One strategy is to be aware of our own biases and limitations. We should also be critical of the information that we consume and take the time to evaluate evidence carefully.

Overall, the article provides a helpful framework for understanding the challenges of bullshit detection. It also offers some practical advice for overcoming these challenges.

Here are some additional tips for detecting bullshit:
  • Be skeptical of claims that seem too good to be true.
  • Look for evidence to support the claims that are being made.
  • Be aware of the speaker or writer's motives.
  • Ask yourself whether the claims make sense and whether they are consistent with what you already know.
  • If you're not sure whether something is bullshit, it's better to err on the side of caution and be skeptical.

Saturday, October 14, 2023

Overconfidently conspiratorial: Conspiracy believers are dispositionally overconfident and massively overestimate how much others agree with them

Pennycook, G., Binnendyk, J., & Rand, D. G. 
(2022, December 5). PsyArXiv

Abstract

There is a pressing need to understand belief in false conspiracies. Past work has focused on the needs and motivations of conspiracy believers, as well as the role of overreliance on intuition. Here, we propose an alternative driver of belief in conspiracies: overconfidence. Across eight studies with 4,181 U.S. adults, conspiracy believers not only relied more on intuition, but also overestimated their performance on numeracy and perception tests (i.e., were overconfident in their own abilities). This relationship with overconfidence was robust to controlling for analytic thinking, need for uniqueness, and narcissism, and was strongest for the most fringe conspiracies. We also found that conspiracy believers – particularly overconfident ones – massively overestimated (>4x) how much others agree with them: Although conspiracy beliefs were in the majority in only 12% of 150 conspiracies across three studies, conspiracy believers thought themselves to be in the majority 93% of the time.

Here is my summary:

The research found that people who believe in conspiracy theories are more likely to be overconfident in their own abilities and to overestimate how much others agree with them. This was true even when controlling for other factors, such as analytic thinking, need for uniqueness, and narcissism.

The researchers conducted a series of studies to test their hypothesis. In one study, they found that people who believed in conspiracy theories were more likely to overestimate their performance on numeracy and perception tests. In another study, they found that people who believed in conspiracy theories were more likely to overestimate how much others agreed with them about a variety of topics, including climate change and the 2016 US presidential election.

The researchers suggest that overconfidence may play a role in the formation and maintenance of conspiracy beliefs. When people are overconfident, they are more likely to dismiss evidence that contradicts their beliefs and to seek out information that confirms their beliefs. This can lead to a "filter bubble" effect, where people are only exposed to information that reinforces their existing beliefs.

The researchers also suggest that overconfidence may lead people to overestimate how much others agree with them about their conspiracy beliefs. This can make them feel more confident in their beliefs and less likely to question them.

The findings of this research have implications for understanding and addressing the spread of conspiracy theories. Recognizing the role that overconfidence may play in the formation and maintenance of conspiracy beliefs can inform interventions that help people avoid falling for conspiracy theories and encourage existing believers to evaluate their beliefs more critically.

Saturday, September 16, 2023

A Metacognitive Blindspot in Intellectual Humility Measures

Costello, T. H., Newton, C., Lin, H., & Pennycook, G.
(2023, August 6).

Abstract

Intellectual humility (IH) is commonly defined as recognizing the limits of one’s knowledge and abilities. However, most research has relied entirely on self-report measures of IH, without testing whether these instruments capture the metacognitive core of the construct. Across two studies (Ns = 898; 914), using generalized additive mixed models to detect complex non-linear interactions, we evaluated the correspondence between widely used IH self-reports and performance on calibration and resolution paradigms designed to model the awareness of one’s mental capabilities (and their fallibility). On an overconfidence paradigm (N observations per model = 2,692-2,742), none of five IH measures attenuated the Dunning-Kruger effect, whereby poor performers overestimate their abilities and high performers underestimate them. On a confidence-accuracy paradigm (N observations per model = 7,223-12,706), most IH measures were associated with inflated confidence regardless of accuracy, or were specifically related to confidence when participants were correct but not when they were incorrect. The sole exception was the “Lack of Intellectual Overconfidence” subscale of the Comprehensive Intellectual Humility Scale, which uniquely predicted lower confidence for incorrect responses. Meanwhile, measures of Actively Open-minded Thinking reliably predicted calibration and resolution. These findings reveal substantial discrepancies between IH self-reports and metacognitive abilities, suggesting most IH measures lack validity. It may not be feasible to assess IH via self-report–as indicating a great deal of humility may, itself, be a sign of a failure in humility.

General Discussion

IH represents the ability to identify the constraints of one’s psychological, epistemic, and cultural perspective— to conduct lay phenomenology, acknowledging that the default human perspective is (literally) self-centered (Wallace, 2009) — and thereby cultivate an awareness of the limits of a single person, theory, or ideology to describe the vast and searingly complex universe. It is a process that presumably involves effortful and vigilant noticing – tallying one’s epistemic track record, and especially one’s fallibility (Ballantyne, 2021).

IH, therefore, manifests dynamically in individuals as a boundary between one’s informational environment and one’s model of reality. This portrait of IH-as-boundary appears repeatedly in philosophical and psychological treatments of IH, which frequently frame awareness of (epistemic) limitations as IH’s conceptual, metacognitive core (Leary et al., 2017; Porter, Elnakouri, et al., 2022). Yet as with a limit in mathematics, epistemic limits are appropriately defined as functions: their value is dependent on inputs (e.g., information environment, access to knowledge) that vary across contexts and individuals. Particularly, measuring IH requires identifying at least two quantities— one’s epistemic capabilities and one’s appraisal of said capabilities— from which a third, IH-qua-metacognition, can be derived as the distance between the two quantities.

Contemporary IH self-reports tend not to account for either parameter, seeming to rest instead on an auxiliary assumption: That people who are attuned to, and “own”, their epistemic limitations will generate characteristic, intellectually humble patterns of thinking and behavior. IH questionnaires then target these patterns, rather than the shared propensity for IH which the patterns ostensibly reflect.

We sought to both test and circumvent this assumption (and mono-method measurement limitation) in the present research. We did so by defining IH’s metacognitive core, functionally and statistically, in terms of calibration and resolution. We operationalized calibration as the convergence between participants’ performance on a series of epistemic tasks, on the one hand, and participants’ estimation of their own performance, on the other. Given that the relation between self-estimation and actual performance is non-linear (i.e., the Dunning-Kruger effect), there were several pathways by which IH might predict calibration: (1) decreased overestimation among low performers, (2) decreased underestimation among high performers, or (3) unilateral weakening of miscalibration among both low and high performers (for a visual representation, refer to Figure 1). Further, we operationalized epistemic resolution by assessing the relation between IH, on the one hand, and individuals’ item-by-item confidence judgments for correct versus incorrect answers, on the other hand. Thus, resolution represents the capacity to distinguish between one’s correct and incorrect judgments and beliefs (a seemingly necessary prerequisite for building an accurate and calibrated model of one’s knowledge).
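
To make these two operationalizations concrete, here is a minimal sketch in Python using simulated data. It only illustrates the quantities involved: calibration as estimation error within performance quartiles, and resolution as the confidence gap between correct and incorrect answers. The paper itself models these relationships with generalized additive mixed models, and every number below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: actual performance and self-estimated performance (0-100 scale).
# Self-estimates regress toward a flattering anchor, which produces the classic
# Dunning-Kruger pattern (low performers overestimate, high performers underestimate).
actual = rng.normal(60, 15, n).clip(0, 100)
estimated = (0.4 * actual + 0.6 * 65 + rng.normal(0, 10, n)).clip(0, 100)

# Calibration: mean self-estimation error within performance quartiles.
quartile = np.digitize(actual, np.quantile(actual, [0.25, 0.5, 0.75]))
for q in range(4):
    err = (estimated - actual)[quartile == q].mean()
    print(f"performance quartile {q + 1}: mean estimation error = {err:+.1f}")

# Resolution: the capacity to distinguish one's correct from incorrect answers,
# summarized here as mean item-level confidence on correct minus incorrect items.
correct = rng.random(n) < actual / 100        # hypothetical item-level outcomes
confidence = (40 + 0.4 * actual + 15 * correct + rng.normal(0, 10, n)).clip(0, 100)
resolution = confidence[correct].mean() - confidence[~correct].mean()
print(f"resolution (confidence gap, correct - incorrect): {resolution:.1f}")
```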

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that could help address AI-induced belief distortion:

  • Transparency: AI developers should be open about their models' biases and limitations, so that people understand what the systems can and cannot do and treat the information they generate more critically.
  • Education: People should be taught about the potential for AI models to distort beliefs, so that they are more aware of the risks and evaluate AI-generated content more carefully.
  • Regulation: Governments could regulate the use of AI models to ensure they are not used to spread misinformation or to reinforce existing biases.

Friday, March 24, 2023

Psychological Features of Extreme Political Ideologies

van Prooijen, J.-W., & Krouwel, A. P. M. (2019).
Current Directions in Psychological Science, 
28(2), 159–163. 
https://doi.org/10.1177/0963721418817755

Abstract

In this article, we examine psychological features of extreme political ideologies. In what ways are political left- and right-wing extremists similar to one another and different from moderates? We propose and review four interrelated propositions that explain adherence to extreme political ideologies from a psychological perspective. We argue that (a) psychological distress stimulates adopting an extreme ideological outlook; (b) extreme ideologies are characterized by a relatively simplistic, black-and-white perception of the social world; (c) because of such mental simplicity, political extremists are overconfident in their judgments; and (d) political extremists are less tolerant of different groups and opinions than political moderates. In closing, we discuss how these psychological features of political extremists increase the likelihood of conflict among groups in society.

Discussion

The four psychological features discussed here suggest that political extremism is fueled by feelings of distress and is reflected in cognitive simplicity, overconfidence, and intolerance. These insights are important to understanding how political polarization increases political instability and the likelihood of conflict between groups in society. Excessive confidence in the moral superiority of one’s own ideological beliefs impedes meaningful interaction and cooperation with different ideological groups and structures political decision making as a zero-sum game with winners and losers. Strong moral convictions consistently decrease people’s ability to compromise and even increase a willingness to use violence to reach ideological goals (Skitka, 2010). These processes are exacerbated by people’s tendency to selectively expose themselves to people and ideas that validate their own convictions. For instance, both information and misinformation selectively spread in online echo chambers of like-minded people (Del Vicario et al., 2016).

This article extends current insights in at least three ways. First, the features proposed here help to explain why throughout the past century not only extreme-right but also extreme-left movements (e.g., socialism, communism) have thrived in times of crisis (Midlarsky, 2011). Second, understanding the mind-set of extremists in all corners of the political spectrum is important in times of polarization and populist rhetoric. The current propositions provide insights into why traditionally moderate parties in the EU have suffered substantial electoral losses. In particular, the support for well-established parties on the moderate left (e.g., social democrats) and moderate right (e.g., Christian democrats) has dropped in recent years, whereas the support for left- and right-wing populist parties has increased (Krouwel, 2012). Third, the present arguments are based on evidence from multiple countries with different political systems (van Prooijen & Krouwel, 2017), which suggests that they apply to both two-party systems (e.g., the United States) and multiparty systems (e.g., many European countries).

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

Lyons, B. A., et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118

Abstract

We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.

Significance

Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.
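
The overconfidence measure here is overplacement: the gap between where respondents place themselves in the distribution of news-judgment ability and where their actual headline scores put them. Here's a minimal sketch with invented numbers rather than the study's data, just to show how a "percentiles higher than warranted" figure is computed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented data: headlines correctly classified (out of 20) and each respondent's
# self-placed percentile relative to other Americans.
score = rng.binomial(20, 0.7, n)
self_percentile = rng.uniform(40, 95, n)      # most people place themselves well above average

# Actual percentile rank implied by the score distribution (ties broken arbitrarily).
actual_percentile = 100 * (np.argsort(np.argsort(score)) + 0.5) / n

# Overplacement: self-placed percentile minus actual percentile.
overplacement = self_percentile - actual_percentile
print(f"mean overplacement: {overplacement.mean():+.1f} percentiles")
print(f"share overestimating their relative ability: {(overplacement > 0).mean():.0%}")
```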

Saturday, March 13, 2021

The Dynamics of Motivated Beliefs

Zimmermann, Florian. 2020.
American Economic Review, 110 (2): 337-61.

Abstract
A key question in the literature on motivated reasoning and self-deception is how motivated beliefs are sustained in the presence of feedback. In this paper, we explore dynamic motivated belief patterns after feedback. We establish that positive feedback has a persistent effect on beliefs. Negative feedback, instead, influences beliefs in the short run, but this effect fades over time. We investigate the mechanisms of this dynamic pattern, and provide evidence for an asymmetry in the recall of feedback. Finally, we establish that, in line with theoretical accounts, incentives for belief accuracy mitigate the role of motivated reasoning.

From the Discussion

In light of the finding that negative feedback has only limited effects on beliefs in the long run, the question arises as to whether people should become entirely delusional about themselves over time. Note that results from the incentive treatments highlight that incentives for recall accuracy bound the degree of self-deception and thereby possibly prevent motivated agents from becoming entirely delusional. Further note that there exists another rather mechanical counterforce, which is that the perception of feedback likely changes as people become more confident. In terms of the experiment, if a subject believes that the chances of ranking in the upper half are mediocre, then that subject will likely perceive two comparisons out of three as positive feedback. If, instead, the same subject is almost certain they rank in the upper half, then that subject will likely perceive the same feedback as rather negative. Note that this “perception effect” is reflected in the Bayesian definition of feedback that we report as a robustness check in the Appendix of the paper. An immediate consequence of this change in perception is that the more confident an agent becomes, the more likely it is that they will obtain negative feedback. Unless an agent does not incorporate negative feedback at all, this should act as a force that bounds people’s delusions.
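
The "perception effect" described above can be illustrated with a toy calculation. The sketch below assumes uniform ranks and is a simplification of my own, not the paper's Bayesian definition of feedback; it just shows how the same objective outcome (winning two of three comparisons) reads as good news to a subject with mediocre beliefs and as bad news to a highly confident one.

```python
def expected_wins(p_upper_half: float, n_comparisons: int = 3) -> float:
    """Expected number of won comparisons given the belief of ranking in the upper half.

    Assumes ranks are uniform: expected percentile is 0.75 with probability p and
    0.25 otherwise, and the chance of beating a random peer equals that percentile.
    """
    expected_percentile = 0.25 + 0.5 * p_upper_half
    return n_comparisons * expected_percentile

observed_wins = 2  # the same objective feedback for every subject

for p in (0.3, 0.5, 0.7, 0.9, 0.99):
    surprise = observed_wins - expected_wins(p)
    label = "feels like positive feedback" if surprise > 0 else "feels like negative feedback"
    print(f"P(upper half) = {p:.2f}: expects {expected_wins(p):.2f} wins, "
          f"surprise = {surprise:+.2f} -> {label}")
```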

Sunday, February 7, 2021

How people decide what they want to know

Sharot, T., Sunstein, C.R. 
Nat Hum Behav 4, 14–19 (2020). 

Abstract

Immense amounts of information are now accessible to people, including information that bears on their past, present and future. An important research challenge is to determine how people decide to seek or avoid information. Here we propose a framework of information-seeking that aims to integrate the diverse motives that drive information-seeking and its avoidance. Our framework rests on the idea that information can alter people’s action, affect and cognition in both positive and negative ways. The suggestion is that people assess these influences and integrate them into a calculation of the value of information that leads to information-seeking or avoidance. The theory offers a framework for characterizing and quantifying individual differences in information-seeking, which we hypothesize may also be diagnostic of mental health. We consider biases that can lead to both insufficient and excessive information-seeking. We also discuss how the framework can help government agencies to assess the welfare effects of mandatory information disclosure.

Conclusion

It is increasingly possible for people to obtain information that bears on their future prospects, in terms of health, finance and even romance. It is also increasingly possible for them to obtain information about the past, the present and the future, whether or not that information bears on their personal lives. In principle, people’s decisions about whether to seek or avoid information should depend on some integration of instrumental value, hedonic value and cognitive value. But various biases can lead to both insufficient and excessive information-seeking. Individual differences in information-seeking may reflect different levels of susceptibility to those biases, as well as varying emphasis on instrumental, hedonic and cognitive utility.  Such differences may also be diagnostic of mental health.

Whether positive or negative, the value of information bears directly on significant decisions of government agencies, which are often charged with calculating the welfare effects of mandatory disclosure and which have long struggled with that task. Our hope is that the integrative framework of information-seeking motives offered here will facilitate these goals and promote future research in this important domain.

Tuesday, January 19, 2021

Escape the echo chamber

C Thi Nguyen
aeon.co
Originally published April 9, 2018

Here is an excerpt:

Let’s start with epistemic bubbles. They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017). The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views. We visit our favourite like-minded blogs and websites. At the same time, various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.

Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced. That’s why we all depend on extended social networks to deliver us knowledge. But any such informational network needs the right sort of broadness and variety to work. A social network composed entirely of incredibly smart, obsessive opera fans would deliver all the information I could want about the opera scene, but it would fail to clue me in to the fact that, say, my country had been infested by a rising tide of neo-Nazis. Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.

Epistemic bubbles also threaten us with a second danger: excessive self-confidence. In a bubble, we will encounter exaggerated amounts of agreement and suppressed levels of disagreement. We’re vulnerable because, in general, we actually have very good reason to pay attention to whether other people agree or disagree with us. Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly. This is why we might do our homework in study groups, and have different laboratories repeat experiments. But not all forms of corroboration are meaningful. Ludwig Wittgenstein says: imagine looking through a stack of identical newspapers and treating each next newspaper headline as yet another reason to increase your confidence. This is obviously a mistake. The fact that The New York Times reports something is a reason to believe it, but any extra copies of The New York Times that you encounter shouldn’t add any extra evidence.
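
The newspaper point is, at bottom, a claim about evidential independence. Here's a small, hypothetical Bayesian illustration (the 4:1 likelihood ratio is an arbitrary choice): independent reports each move the posterior, while verbatim copies of a report you have already read should move it not at all.

```python
def posterior(prior: float, likelihood_ratios) -> float:
    """Update a prior probability by a sequence of likelihood ratios (odds form)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.5
independent_reports = [4, 4, 4]   # three independent sources, each favoring the claim 4:1
copies = [4, 1, 1]                # one source plus two verbatim copies: no new evidence

print(f"after three independent reports: {posterior(prior, independent_reports):.2f}")  # ~0.98
print(f"after one report and two copies: {posterior(prior, copies):.2f}")               # ~0.80
```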

Friday, July 17, 2020

Ivanka Trump's love for Goya beans violates ethics rules, say US rights groups

Associated Press
Originally posted July 15, 2020

The White House has defended Ivanka Trump tweeting a photo of herself holding up a can of Goya beans to buck up a Hispanic-owned business that she says has been unfairly treated, arguing she had “every right” to publicly express her support.

Government watchdogs countered that President Donald Trump’s daughter and senior adviser doesn’t have the right to violate ethics rules that bar government officials from using their public office to endorse specific products or groups.

These groups contend Ivanka Trump’s action also highlights broader concerns about how the president and those around him often blur the line between politics and governing. The White House would be responsible for disciplining Ivanka Trump for any ethics violation but chose not to in a similar case involving White House counselor Kellyanne Conway in 2017.

Goya became the target of a consumer boycott after CEO Robert Unanue praised the president at a Hispanic event at the White House on Thursday last week.

Trump tweeted the next day about his “love” for Goya, and his daughter followed up late Tuesday by tweeting a photo of herself holding a can of Goya black beans with a caption that read, “If it’s Goya, it has to be good,” in English and Spanish.

The info is here.

Thursday, July 16, 2020

Cognitive Bias and Public Health Policy During the COVID-19 Pandemic

Halpern SD, Truog RD, and Miller FG.
JAMA. 
Published online June 29, 2020.
doi:10.1001/jama.2020.11623

Here is an excerpt:

These cognitive errors, which distract leaders from optimal policy making and citizens from taking steps to promote their own and others’ interests, cannot merely be ascribed to repudiations of science. Rather, these biases are pervasive and may have been evolutionarily selected. Even at academic medical centers, where a premium is placed on having science guide policy, COVID-19 action plans prioritized expanding critical care capacity at the outset, and many clinicians treated seriously ill patients with drugs with little evidence of effectiveness, often before these institutions and clinicians enacted strategies to prevent spread of disease.

Identifiable Lives and Optimism Bias

The first error that thwarts effective policy making during crises stems from what economists have called the “identifiable victim effect.” Humans respond more aggressively to threats to identifiable lives, ie, those that an individual can easily imagine being their own or belonging to people they care about (such as family members) or care for (such as a clinician’s patients) than to the hidden, “statistical” deaths reported in accounts of the population-level tolls of the crisis. Similarly, psychologists have described efforts to rescue endangered lives as an inviolable goal, such that immediate efforts to save visible lives cannot be abandoned even if more lives would be saved through alternative responses.

Some may view the focus on saving immediately threatened lives as rational because doing so entails less uncertainty than policies designed to save invisible lives that are not yet imminently threatened. Individuals who harbor such instincts may feel vindicated knowing that during the present pandemic, few if any patients in the US who could have benefited from a ventilator were denied one.

Yet such views represent a second reason for the broad endorsement of policies that prioritize saving visible, immediately jeopardized lives: that humans are imbued with a strong and neurally mediated3 tendency to predict outcomes that are systematically more optimistic than observed outcomes. Early pandemic prediction models provided best-case, worst-case, and most-likely estimates, fully depicting the intrinsic uncertainty.4 Sound policy would have attempted to minimize mortality by doing everything possible to prevent the worst case, but human optimism bias led many to act as if the best case was in fact the most likely.

The info is here.

Monday, March 2, 2020

The Dunning-Kruger effect, or why the ignorant think they’re experts

Alexandru Micu
zmescience.com
Originally posted February 13, 2020

Here is an excerpt:

It’s not specific only to technical skills but plagues all walks of human existence equally. One study found that 80% of drivers rate themselves as above average, which is literally impossible because that’s not how averages work. We tend to gauge our own relative popularity the same way.

It isn’t limited to people with low or nonexistent skills in a certain matter, either — it works on pretty much all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or nonexistent grasp on a topic, we literally know too little of it to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where they sit. But they also think that if a task is clear and simple to them, it must be so for everyone else as well.

A person in the first group and one in the second group are equally liable to use their own experience and background as the baseline and kinda just take it for granted that everyone is near that baseline. They both partake in the “illusion of confidence” — for one, that confidence is in themselves, for the other, in everyone else.

The info is here.

Thursday, October 3, 2019

Deception and self-deception

Peter Schwardmann and Joel van der Weele
Nature Human Behaviour (2019)

Abstract

There is ample evidence that the average person thinks he or she is more skillful, more beautiful and kinder than others and that such overconfidence may result in substantial personal and social costs. To explain the prevalence of overconfidence, social scientists usually point to its affective benefits, such as those stemming from a good self-image or reduced anxiety about an uncertain future. An alternative theory, first advanced by evolutionary biologist Robert Trivers, posits that people self-deceive into higher confidence to more effectively persuade or deceive others. Here we conduct two experiments (combined n = 688) to test this strategic self-deception hypothesis. After performing a cognitively challenging task, half of our subjects are informed that they can earn money if, during a short face-to-face interaction, they convince others of their superior performance. We find that the privately elicited beliefs of the group that was informed of the profitable deception opportunity exhibit significantly more overconfidence than the beliefs of the control group. To test whether higher confidence ultimately pays off, we experimentally manipulate the confidence of the subjects by means of a noisy feedback signal. We find that this exogenous shift in confidence makes subjects more persuasive in subsequent face-to-face interactions. Overconfidence emerges from these results as the product of an adaptive cognitive technology with important social benefits, rather than some deficiency or bias.

From the Discussion section

The results of our experiment demonstrate that the strategic environment matters for cognition about the self. We observe that deception opportunities increase average overconfidence relative to others, and that, under the right circumstances, increased confidence can pay off. Our data thus support the idea that overconfidence is strategically employed for social gain.

Our results do not allow for decisive statements about the exact cognitive channels underlying such self-deception. While we find some indications that an aversion to lying increases overconfidence, the evidence is underwhelming.13 When it comes to the ability to deceive others, we find that even when we control for the message, confidence leads to higher evaluations in some conditions. This is  consistent with the idea that self-deception improves the deception technology of contestants, possibly by eliminating non-verbal give-away cues.

The research is here. 

Wednesday, July 31, 2019

The “Fake News” Effect: An Experiment on Motivated Reasoning and Trust in News

Michael Thaler
Harvard University
Originally published May 28, 2019

Abstract

When people receive information about controversial issues such as immigration policies, upward mobility, and racial discrimination, the information often evokes both what they currently believe and what they are motivated to believe. This paper theoretically and experimentally explores the importance in inference of this latter channel: motivated reasoning. In the theory of motivated reasoning this paper develops, people misupdate from information by treating their motivated beliefs as an extra signal. To test the theory, I create a new experimental design in which people make inferences about the veracity of news sources. This design is unique in that it identifies motivated reasoning from Bayesian updating and confirmation bias, and doesn’t require elicitation of people’s entire belief distribution. It is also very portable: In a large online experiment, I find the first identifying evidence for politically-driven motivated reasoning on eight different economic and social issues. Motivated reasoning leads people to become more polarized, less accurate, and more overconfident in their beliefs about these issues.

From the Conclusion:

One interpretation of this paper is unambiguously bleak: People of all demographics similarly motivatedly reason, do so on essentially every topic they are asked about, and make particularly biased inferences on issues they find important. However, there is an alternative interpretation: This experiment takes a step towards better understanding motivated reasoning, and makes it easier for future work to attenuate the bias. Using this experimental design, we can identify and estimate the magnitude of the bias; future projects that use interventions to attempt to mitigate motivated reasoning can use this estimated magnitude as an outcome variable. Since the bias does decrease utility in at least some settings, people may have demand for such interventions.

The research is here.

Monday, June 17, 2019

Why High-Class People Get Away With Incompetence

Heather Murphy
The New York Times
Originally posted May 20, 2019

Here are two excerpts:

The researchers suggest that part of the answer involves what they call “overconfidence.” In several experiments, they found that people who came from a higher social class were more likely to have an inflated sense of their skills — even when tests proved that they were average. This unmerited overconfidence, they found, was interpreted by strangers as competence.

The findings highlight yet another way that family wealth and parents’ education — two of a number of factors used to assess social class in the study — affect a person’s experience as they move through the world.

“With this research, we now have reason to think that coming from a higher social class confers yet another advantage,” said Jessica A. Kennedy, a professor of management at Vanderbilt University, who was not involved in the study.

(cut)

Researchers said they hoped that the takeaway was not to strive to be overconfident. Wars, stock market crashes and many other crises can be blamed on overconfidence, they said. So how do managers, employers, voters and customers avoid overvaluing social class and being duped by incompetent wealthy people? Dr. Kennedy said she had been encouraged to find that if you show people actual facts about a person, the elevated status that comes with overconfidence often fades away.

The info is here.

Wednesday, August 1, 2018

65% of Americans believe they are above average in intelligence: Results of two nationally representative surveys

Patrick R. Heck, Daniel J. Simons, Christopher F. Chabris
PLoS One
Originally posted July 3, 2018

Abstract

Psychologists often note that most people think they are above average in intelligence. We sought robust, contemporary evidence for this “smarter than average” effect by asking Americans in two independent samples (total N = 2,821) whether they agreed with the statement, “I am more intelligent than the average person.” After weighting each sample to match the demographics of U.S. census data, we found that 65% of Americans believe they are smarter than average, with men more likely to agree than women. However, overconfident beliefs about one’s intelligence are not always unrealistic: more educated people were more likely to think their intelligence is above average. We suggest that a tendency to overrate one’s cognitive abilities may be a stable feature of human psychology.

The research is here.

Wednesday, April 18, 2018

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Sunday, November 29, 2015

You’re not as virtuous as you think

By Nitin Nohria
The Washington Post
Originally published October 15, 2015

Moral overconfidence is on display in politics, in business, in sports — really, in all aspects of life. There are political candidates who say they won’t use attack ads until, late in the race, they’re behind in the polls and under pressure from donors and advisers, their ads become increasingly negative. There are chief executives who come in promising to build a business for the long-term but then condone questionable accounting gimmickry to satisfy short-term market demands. There are baseball players who shun the use of steroids until they age past their peak performance and start to look for something to slow the decline. These people may be condemned as hypocrites. But they aren’t necessarily bad actors. Often, they’ve overestimated their inherent morality and underestimated the influence of situational factors.

Moral overconfidence is in line with what studies find to be our generally inflated view of ourselves. We rate ourselves as above-average drivers, investors and employees, even though math dictates that can’t be true for all of us. We also tend to believe we are less likely than the typical person to exhibit negative qualities and to experience negative life events: to get divorced, become depressed or have a heart attack.

The entire article is here.

Thursday, August 6, 2015

When Knowledge Knows No Bounds

By Stav Atir, Emily Rosenzweig, and David Dunning
Psychological Science, first published on July 14, 2015
doi:10.1177/0956797615588195

Abstract

People overestimate their knowledge, at times claiming knowledge of concepts, events, and people that do not exist and cannot be known, a phenomenon called overclaiming. What underlies assertions of such impossible knowledge? We found that people overclaim to the extent that they perceive their personal expertise favorably. Studies 1a and 1b showed that self-perceived financial knowledge positively predicts claiming knowledge of nonexistent financial concepts, independent of actual knowledge. Study 2 demonstrated that self-perceived knowledge within specific domains (e.g., biology) is associated specifically with overclaiming within those domains. In Study 3, warning participants that some of the concepts they saw were fictitious did not reduce the relationship between self-perceived knowledge and overclaiming, which suggests that this relationship is not driven by impression management. In Study 4, boosting self-perceived expertise in geography prompted assertions of familiarity with nonexistent places, which supports a causal role for self-perceived expertise in claiming impossible knowledge.

The entire article is here.

Wednesday, August 5, 2015

‘What would I eliminate if I had a magic wand? Overconfidence’

The psychologist and bestselling author of Thinking, Fast and Slow reveals his new research and talks about prejudice, fleeing the Nazis, and how to hold an effective meeting

By David Shariatmadari
The Guardian
Originally posted on July 18, 2015

Here is an excerpt:

What’s fascinating is that Kahneman’s work explicitly swims against the current of human thought. Not even he believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite. It is the bias he says he would most like to eliminate if he had a magic wand. But it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”.

The entire article is here.