Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Confidence.

Sunday, January 15, 2023

How Hedges Impact Persuasion

Oba, D., & Berger, J. A. (2022, July 23).

Abstract

Communicators often hedge. Salespeople say that a product is probably the best, recommendation engines suggest movies they think you’ll like, and consumers say restaurants might have good service. But how does hedging impact persuasion? We suggest that different types of hedges may have different effects. Six studies support our theorizing, demonstrating that (1) the probabilistic likelihood hedges suggest and (2) whether they take a personal (vs. general) perspective both play an important role in driving persuasion. Further, the studies demonstrate that both effects are driven by a common mechanism: perceived confidence. Using hedges associated with higher likelihood, or that involve personal perspective, increases persuasion because they suggest communicators are more confident about what they are saying. This work contributes to the burgeoning literature on language in marketing, showcases how subtle linguistic features impact perceived confidence, and has clear implications for anyone trying to be more persuasive.

General Discussion

Communicating uncertainty is an inescapable part of marketplace interactions. Customer service representatives suggest solutions that “they think” will work, marketers inform buyers about risks a product “may” have, and consumers recommend restaurants that have the best food “in their opinion”. Such communications are critical in determining which solutions are implemented, which products are bought, and which restaurants are visited.

But while it is clear that hedging is both frequent and important, less is known about its impact. Do hedges always hurt persuasion? If not, which hedges are more or less persuasive, and why?

Six studies explore these questions. First, they demonstrate that different types of hedges have different effects. Consistent with our theorizing, hedges associated with a higher likelihood of occurrence (Studies 1, 2A, 3, and 4A) or that take a personal (rather than general) perspective (Studies 1, 2B, 3, and 4B) are more persuasive. Further, hedges don’t always reduce persuasion (Studies 2A and 2B). Testing these effects using dozens of different hedges, across multiple domains, and using multiple measures of persuasion (including consequential choice) speaks to their robustness and generalizability.

Second, the studies demonstrate a common process that underlies these effects. When communicators use hedges associated with higher likelihood, or a personal (rather than general) perspective, it makes them seem more confident. This, in turn, increases persuasion (Studies 1, 3, 4A, and 4B). Demonstrating these effects through mediation (Studies 1, 3, 4A, and 4B) and moderation (Studies 4A and 4B) underscores robustness. Further, while other factors may contribute, the studies conducted here indicate full mediation by perceived confidence, highlighting its importance.
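To make the mediation account concrete, below is a minimal sketch of how one might check it on simulated data. The variable names, effect sizes, and the simple two-regression mediation test are illustrative assumptions, not the authors' actual materials or analysis.

```python
# A minimal, illustrative mediation check in the spirit of the studies above.
# All variable names and effect sizes are invented for demonstration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# 1 = high-likelihood hedge ("probably"), 0 = low-likelihood hedge ("might")
hedge_high = rng.integers(0, 2, n)

# Perceived confidence rises with the high-likelihood hedge (assumed effect).
confidence = 3.0 + 1.2 * hedge_high + rng.normal(0, 1, n)

# Persuasion is driven by perceived confidence (assumed full mediation).
persuasion = 1.0 + 0.8 * confidence + rng.normal(0, 1, n)

# Total effect of the hedge on persuasion (path c).
total = sm.OLS(persuasion, sm.add_constant(hedge_high)).fit()

# Direct effect controlling for the mediator (paths c' and b).
X = sm.add_constant(np.column_stack([hedge_high, confidence]))
direct = sm.OLS(persuasion, X).fit()

print(f"total effect of hedge: {total.params[1]:.2f}")
print(f"direct effect (c'):    {direct.params[1]:.2f}")  # ~0 under full mediation
print(f"effect of confidence:  {direct.params[2]:.2f}")
```

Under full mediation, the direct effect of the hedge shrinks toward zero once perceived confidence enters the model, which mirrors the pattern the authors report.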


Psychologists and other mental health professionals may want to consider this research as part of psychotherapy.

Monday, March 21, 2022

Confidence and gradation in causal judgment

O'Neill, K., Henne, P., et al.
Cognition
Volume 223, June 2022, 105036

Abstract

When comparing the roles of the lightning strike and the dry climate in causing the forest fire, one might think that the lightning strike is more of a cause than the dry climate, or one might think that the lightning strike completely caused the fire while the dry conditions did not cause it at all. Psychologists and philosophers have long debated whether such causal judgments are graded; that is, whether people treat some causes as stronger than others. To address this debate, we first reanalyzed data from four recent studies. We found that causal judgments were actually multimodal: although most causal judgments made on a continuous scale were categorical, there was also some gradation. We then tested two competing explanations for this gradation: the confidence explanation, which states that people make graded causal judgments because they have varying degrees of belief in causal relations, and the strength explanation, which states that people make graded causal judgments because they believe that causation itself is graded. Experiment 1 tested the confidence explanation and showed that gradation in causal judgments was indeed moderated by confidence: people tended to make graded causal judgments when they were unconfident, but they tended to make more categorical causal judgments when they were confident. Experiment 2 tested the causal strength explanation and showed that although confidence still explained variation in causal judgments, it did not explain away the effects of normality, causal structure, or the number of candidate causes. Overall, we found that causal judgments were multimodal and that people make graded judgments both when they think a cause is weak and when they are uncertain about its causal role.

From the General Discussion

The current paper sought to address two major questions regarding singular causal judgments: are causal judgments graded, and if so, what explains this gradation?

(cut)

In other words, people make graded causal judgments both when they think a cause is weak and also when they are uncertain about their causal judgment. Although work is needed to determine precisely why and when causal judgments are influenced by confidence, we have demonstrated that these effects are separable from more well-studied effects on causal judgment. This is good news for theories of causal judgment that rely on the causal strength explanation: these theories do not need to account for the effects of confidence on causal judgment to be useful in explaining other effects. That is, there is no need for major revisions in how we think about causal judgments. Nevertheless, we think our results have important implications for these theories, which we outline below.
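As a concrete illustration of what "multimodal with some gradation" means for judgments on a continuous scale, here is a small sketch that fits Gaussian mixtures of increasing size to simulated causal ratings and compares them by BIC. The simulated response distribution and the mixture-model approach are assumptions for demonstration, not the authors' reanalysis pipeline.

```python
# Illustrative multimodality check for causal ratings on a 0-100 scale.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Mostly categorical responses near 0 and 100, plus a graded middle band.
ratings = np.concatenate([
    rng.normal(5, 3, 300),     # "not a cause at all"
    rng.normal(95, 3, 400),    # "completely caused it"
    rng.uniform(30, 70, 150),  # graded judgments
]).clip(0, 100).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(ratings)
    print(f"k={k}: BIC = {gm.bic(ratings):.0f}")
# A markedly lower BIC for k=3 than for k=1 indicates a multimodal
# distribution: categorical endpoints plus a graded middle.
```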

Friday, November 26, 2021

Paranoia, self-deception and overconfidence

Rossi-Goldthorpe, R. A., et al. (2021).
PLoS Comput Biol 17(10): e1009453. 
https://doi.org/10.1371/journal.pcbi.1009453

Abstract

Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem rather than excessive social concern. The model suggests that spurious beliefs can have value: self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and devastating wars.

Author summary

Paranoia is the belief that others intend to harm you. Some people think that paranoia evolved to serve a coalitional function and should thus be related to the mechanisms of group membership and reputation management. Others have argued that its roots are much more basic, based instead in how the individual models and anticipates their world, even non-social things. To adjudicate, we gave participants a difficult perceptual decision-making task, during which they received advice on what to decide from a partner, who was either a collaborator (in their group) or a competitor (outside of their group). Using computational modeling of participant choices, which allowed us to estimate the role of social and non-social processes in the decision, we found that the manipulation worked: people placed a stronger prior weight on the advice from a collaborator compared to a competitor. However, paranoia did not interact with this effect. Instead, paranoia was associated with participants’ beliefs about their own performance. When those beliefs were poor, paranoid participants relied heavily on the advice, even when it contradicted the evidence. Thus, we find a mechanistic link between paranoia, self-deception, and overconfidence.
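A toy precision-weighting model can make the proposed mechanism tangible: if perceived instability of one's own performance discounts the precision of one's own evidence, reliance on advice grows even when the advice is uninformative. The functional form and parameters below are invented for illustration; this is not the fitted model from the paper.

```python
# Toy precision-weighted combination of one's own percept and a partner's
# advice. "instability" is an invented stand-in for participants' perceived
# instability of their own performance.
def choice_weight_on_advice(own_precision: float, advice_precision: float,
                            instability: float) -> float:
    """Weight placed on advice when combining it with one's own evidence.

    Higher perceived instability discounts own precision, so more unstable
    self-beliefs push more weight onto the advice.
    """
    effective_own = own_precision / (1.0 + instability)
    return advice_precision / (advice_precision + effective_own)

for instability in (0.0, 1.0, 4.0):
    w = choice_weight_on_advice(own_precision=2.0, advice_precision=1.0,
                                instability=instability)
    print(f"instability = {instability}: weight on advice = {w:.2f}")
# weights: 0.33, 0.50, 0.71 - doubting one's own performance makes even
# uninformative advice dominate the choice.
```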

Tuesday, November 16, 2021

Decision Prioritization and Causal Reasoning in Decision Hierarchies

Zylberberg, A. (2021, September 6). 
https://doi.org/10.31234/osf.io/agt5s

Abstract

From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing, and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying human ability to reason over decision hierarchies.

Discussion

Adaptive behavior requires making accurate decisions, but also knowing which decisions are worth making. To study how people decide what to decide on, we investigated a novel task in which people had to find a target, hidden at the lowest level of a decision tree, by gathering stochastic information from the internal nodes of the tree. Our central finding is that a small number of heuristic rules explain participants’ behavior in this complex decision-making task. The study extends the perceptual decision framework to more complex decisions that comprise a hierarchy of sub-decisions of varying levels of difficulty, and where the decision maker has to actively decide which decision to address at any given time.

Our task can be conceived as a sequence of binary decisions, or as one decision with eight alternatives.  Participants’ behavior supports the former interpretation.  Participants often performed multiple queries on the same node before descending levels, and they rarely made a transition from an internal node to a higher-level one before reaching a leaf node.  This indicates that participants made categorical decisions about the direction of motion at the visited nodes before they decided to descend levels. This bias toward resolving uncertainty locally was not observed in an approximately optimal policy (Fig. 8), and thus may reflect more general cognitive constraints that limit participants’ performance in our task (Markant et al., 2016). A strong candidate is the limited capacity of working memory (Miller, 1956). By reaching a categorical decision at each internal node, participants avoid the need to operate with full probability distributions over all task-relevant variables, favoring instead a strategy in which only the confidence about the motion choices is carried forward to inform future choices (Zylberberg et al., 2011).
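The categorical-commitment heuristic described above is easy to sketch: at each internal node the agent accumulates noisy evidence until its confidence crosses a threshold, commits to a branch, and carries forward only that confidence rather than a full posterior over latent states. The parameters and update rule below are illustrative assumptions, not the authors' fitted model.

```python
# Sketch of confidence-thresholded categorical decisions at the internal
# nodes of a binary decision tree. Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

def decide_at_node(true_dir: int, drift: float = 0.3, threshold: float = 0.9,
                   max_queries: int = 20) -> tuple[int, float]:
    """Query a node until the posterior for one direction exceeds threshold."""
    log_odds = 0.0
    conf = 0.5
    for _ in range(max_queries):
        sample = rng.normal(drift if true_dir == 1 else -drift, 1.0)
        log_odds += 2 * drift * sample  # log-likelihood ratio, unit-variance noise
        p_right = 1.0 / (1.0 + np.exp(-log_odds))
        conf = max(p_right, 1.0 - p_right)
        if conf >= threshold:
            break
    return (1 if log_odds > 0 else 0), conf

# Descend a depth-3 tree toward a hidden leaf: one categorical decision per
# level, retaining only the per-level confidences (not full distributions).
target_path = [1, 0, 1]
choices, confidences = [], []
for true_dir in target_path:
    choice, conf = decide_at_node(true_dir)
    choices.append(choice)
    confidences.append(round(conf, 2))

print("chosen path:", choices, "target:", target_path)
print("confidences carried forward:", confidences)
```

Carrying forward a single confidence value per node, instead of a distribution over all latent states, is what keeps the strategy's working-memory demands low.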

Friday, June 18, 2021

Wise teamwork: Collective confidence calibration predicts the effectiveness of group discussion

Silver, I, Mellers, B.A., & Tetlock, P.E.
Journal of Experimental Social Psychology
Volume 96, September 2021.

Abstract

‘Crowd wisdom’ refers to the surprising accuracy that can be attained by averaging judgments from independent individuals. However, independence is unusual; people often discuss and collaborate in groups. When does group interaction improve vs. degrade judgment accuracy relative to averaging the group's initial, independent answers? Two large laboratory studies explored the effects of 969 face-to-face discussions on the judgment accuracy of 211 teams facing a range of numeric estimation problems from geographic distances to historical dates to stock prices. Although participants nearly always expected discussions to make their answers more accurate, the actual effects of group interaction on judgment accuracy were decidedly mixed. Importantly, a novel, group-level measure of collective confidence calibration robustly predicted when discussion helped or hurt accuracy relative to the group's initial independent estimates. When groups were collectively calibrated prior to discussion, with more accurate members being more confident in their own judgment and less accurate members less confident, subsequent group interactions were likelier to yield increased accuracy. We argue that collective calibration predicts improvement because groups typically listen to their most confident members. When confidence and knowledge are positively associated across group members, the group's most knowledgeable members are more likely to influence the group's answers.

Conclusion

People often display exaggerated beliefs about their skills and knowledge. We misunderstand and over-estimate our ability to answer general knowledge questions (Arkes, Christensen, Lai, & Blumer, 1987), save for a rainy day (Berman, Tran, Lynch Jr, & Zauberman, 2016), and resist unhealthy foods (Loewenstein, 1996), to name just a few examples. Such failures of calibration can have serious consequences, hindering our ability to set goals (Kahneman & Lovallo, 1993), make plans (Janis, 1982), and enjoy experiences (Mellers & McGraw, 2004). Here, we show that collective calibration also predicts the effectiveness of group discussions. In the context of numeric estimation tasks, poorly calibrated groups were less likely to benefit from working together, and, ultimately, offered less accurate answers. Group interaction is the norm, not the exception. Knowing what we know (and what we don't know) can help predict whether interactions will strengthen or weaken crowd wisdom.
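One way to see what "collective confidence calibration" can mean in practice is sketched below: it scores a group by the rank correlation between members' confidence and their accuracy, then compares a simple average with a confidence-weighted one. The specific metric and weighting scheme are plausible stand-ins, not necessarily the measures used in the paper.

```python
# Illustrative group-level calibration score and confidence-weighted
# aggregation for a numeric estimation task. All numbers are invented.
import numpy as np
from scipy.stats import spearmanr

truth = 100.0

# A five-person group: independent estimates and self-reported confidence (1-7).
estimates = np.array([95.0, 150.0, 80.0, 104.0, 99.0])
confidence = np.array([6.0, 1.0, 3.0, 5.0, 7.0])

accuracy = -np.abs(estimates - truth)  # higher = more accurate
calibration, _ = spearmanr(confidence, accuracy)

simple_avg = estimates.mean()
weighted_avg = np.average(estimates, weights=confidence)

print(f"collective calibration (rho): {calibration:.2f}")  # ~0.90
print(f"simple-average error:         {abs(simple_avg - truth):.1f}")
print(f"confidence-weighted error:    {abs(weighted_avg - truth):.1f}")
# In a well-calibrated group, letting confident members carry more weight
# improves the estimate; in a miscalibrated group it would do the opposite.
```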

Wednesday, May 27, 2020

Trust in Medical Scientists Has Grown in U.S.

Funk, C., Kennedy, B., & Johnson, C.
Pew Research Center
Originally published 21 May 20

Americans’ confidence in medical scientists has grown since the coronavirus outbreak first began to upend life in the United States, as have perceptions that medical doctors hold very high ethical standards. And in their own estimation, most U.S. adults think the outbreak raises the importance of scientific developments.

Scientists have played a prominent role in advising government leaders and informing the public about the course of the pandemic, with doctors such as Anthony Fauci and Deborah Birx, among others, appearing at press conferences alongside President Donald Trump and other government officials.

But there are growing partisan divisions over the risk the novel coronavirus poses to public health, as well as public confidence in the scientific and medical community and the role such experts are playing in public policy.

Still, most Americans believe social distancing measures are helping at least some to slow the spread of the coronavirus disease, known as COVID-19. People see a mix of reasons behind new cases of infection, including limited testing, people not following social distancing measures and the nature of the disease itself.

These are among the key findings from a new national survey by Pew Research Center, conducted April 29 to May 5 among 10,957 U.S. adults, and a new analysis of a national survey conducted April 20 to 26 among 10,139 U.S. adults, both using the Center’s American Trends Panel.

Public confidence in medical scientists to act in the best interests of the public has gone up from 35% with a great deal of confidence before the outbreak to 43% in the Center’s April survey. Similarly, there is a modest uptick in public confidence in scientists, from 35% in 2019 to 39% today. (A random half of survey respondents rated their confidence in one of the two groups.)

The info is here.

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018

Abstract

Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Download the pdf here.

Key Points: In order to learn and understand, you need to use all the data you have accumulated, not just the feedback on your most recent performance.  In this way, feedback, rather than hard evidence, increases a person's sense of certainty when learning new things, or how to tell right from wrong.

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.
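The paper's core contrast, an idealized learner's posterior versus recently observed performance as predictors of reported certainty, can be sketched with a short simulation. The generative assumptions below (reported certainty tracks a running accuracy window) are invented to mirror the reported finding, not the authors' data or model.

```python
# Regress simulated trial-by-trial certainty on (a) an idealized learner's
# posterior and (b) recent observed performance. All numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_trials = 200

ideal_posterior = np.linspace(0.5, 0.95, n_trials)  # idealized certainty
correct = rng.binomial(1, ideal_posterior)          # trial-by-trial feedback

# Running accuracy over the last five trials: observed performance.
window = 5
recent_perf = np.array([correct[max(0, t - window):t + 1].mean()
                        for t in range(n_trials)])

# Assume reported certainty tracks recent performance, not the posterior.
reported = 0.1 + 0.8 * recent_perf + rng.normal(0, 0.05, n_trials)

X = sm.add_constant(np.column_stack([ideal_posterior, recent_perf]))
fit = sm.OLS(reported, X).fit()
print(f"ideal-posterior coefficient:    {fit.params[1]:.2f}")  # near zero
print(f"recent-performance coefficient: {fit.params[2]:.2f}")  # near 0.8
```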

Saturday, July 7, 2018

Making better decisions in groups

Dan Bang, Chris D. Frith
Published 16 August 2017.
DOI: 10.1098/rsos.170193

Abstract

We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.

The article is here.
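The review's Bayesian framing, past experience as a prior combined with new evidence, reduces to a one-line update that is worth seeing in numbers. The two-hypothesis scenario and probabilities below are invented for illustration.

```python
# Minimal Bayesian update: a strong prior (past experience) can dominate
# moderate new evidence, producing the bias-like behavior the review
# discusses. All numbers are invented.
import numpy as np

hypotheses = ["state A", "state B"]
prior = np.array([0.8, 0.2])       # strong prior belief from past experience
likelihood = np.array([0.3, 0.7])  # the new evidence actually favors B

posterior = prior * likelihood
posterior /= posterior.sum()

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | evidence) = {p:.2f}")
# P(state A) ~= 0.63, P(state B) ~= 0.37: the prior outweighs the evidence.
```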

Friday, July 6, 2018

People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more

Tom Stafford
Blog Post: Research Digest
Originally posted May 31, 2018

Here is an excerpt:

Finally and more promisingly, the researchers found some evidence that belief superiority can be dented by feedback. If participants were told that people with beliefs like theirs tended to score poorly on topic knowledge, or if they were directly told that their score on the topic knowledge quiz was low, this not only reduced their belief superiority, it also caused them to seek out the kind of challenging information they had previously neglected in the headlines task (though the evidence for this behavioural effect was mixed).

The studies all involved participants accessed via Amazon’s Mechanical Turk, allowing the researchers to work with large samples of Americans for each experiment. Their findings mirror the well-known Dunning-Kruger effect – Kruger and Dunning showed that for domains such as judgments of grammar, humour or logic, the most skilled tend to underestimate their ability, while the least skilled overestimate it. Hall and Raimi’s research extends this to the realm of political opinions (where objective assessment of correctness is not available), showing that the belief your opinion is better than other people’s tends to be associated with overestimation of your relevant knowledge.

The article is here.

Thursday, August 31, 2017

Stress Leads to Bad Decisions. Here’s How to Avoid Them

Ron Carucci
Harvard Business Review
Originally posted August 29, 2017

Here is an excerpt:

Facing high-risk decisions. 

For routine decisions, most leaders fall into one of two camps: The “trust your gut” leader makes highly intuitive decisions, and the “analyze everything” leader wants lots of data to back up their choice. Usually, a leader’s preference for one of these approaches poses minimal threat to the decision’s quality. But the stress caused by a high-stakes decision can provoke them to the extremes of their natural inclination. The highly intuitive leader becomes impulsive, missing critical facts. The highly analytical leader gets paralyzed in data, often failing to make any decision. The right blend of data and intuition applied to carefully constructing a choice builds the organization’s confidence for executing the decision once made. Clearly identify the risks inherent in the precedents underlying the decision and communicate that you understand them. Examine available data sets, identify any conflicting facts, and vet them with appropriate stakeholders (especially superiors) to make sure your interpretations align. Ask for input from others who’ve faced similar decisions. Then make the call.

Solving an intractable problem. 

To a stressed-out leader facing a chronic challenge, it often feels like their only options are to either (1) vehemently argue for their proposed solution with unyielding certainty, or (2) offer ideas very indirectly to avoid seeming domineering and to encourage the team to take ownership of the challenge. The problem, again, is that neither extreme works. If people feel the leader is being dogmatic, they will disengage regardless of the merits of the idea. If they feel the leader lacks confidence in the idea, they will struggle to muster conviction to try it, concluding, “Well, if the boss isn’t all that convinced it will work, I’m not going to stick my neck out.”

The article is here.