Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Optimism.

Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).
https://doi.org/10.1177/01461672231191360

Abstract

In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and for a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower COVID-related risk perception and lower support for early public-health interventions, the best-case prediction heuristic was ideologically symmetric.


Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic. In the first experiment, participants were asked to make predictions about their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the outcome of the next presidential election. Participants were asked to make three predictions for each event: a best-case scenario, a worst-case scenario, and a realistic scenario. The results showed that participants' "realistic" predictions were much closer to their best-case predictions than to their worst-case predictions.
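
To make "closer to the best case" concrete, one can express it as a simple proximity score comparing the realistic prediction's distance to the best-case and worst-case predictions. The sketch below is purely illustrative and is not the authors' analysis code; the function name and the example values are assumptions.

```python
# Illustrative sketch (not the authors' analysis code): score how close a
# "realistic" prediction sits to the best-case versus the worst-case prediction.
# The function name and the example values below are assumptions.

def best_case_proximity(realistic, best_case, worst_case):
    """Return a score in [0, 1]: 1.0 means the realistic prediction equals the
    best-case prediction, 0.0 means it equals the worst-case prediction."""
    span = abs(best_case - worst_case)
    if span == 0:
        return 0.5  # degenerate case: all three scenarios coincide
    return max(0.0, min(1.0, 1 - abs(realistic - best_case) / span))

# Example: relationship satisfaction in one year, rated on a 0-10 scale under
# the three scenarios. A score near 1 reflects the best-case asymmetry.
print(best_case_proximity(realistic=8, best_case=9, worst_case=2))  # ~0.86
```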

The same best-case asymmetry appeared in the remaining experiments, which again spanned health, relationships, and politics. The findings suggest that people rely on a best-case heuristic when predicting the future, even in serious and consequential matters.

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people maintain a positive outlook and cope with difficult challenges. On the other hand, it can also foster unrealistic expectations and a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: That lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally-beneficial optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.

Sunday, February 7, 2021

How people decide what they want to know

Sharot, T., Sunstein, C.R. 
Nat Hum Behav 4, 14–19 (2020). 

Abstract

Immense amounts of information are now accessible to people, including information that bears on their past, present and future. An important research challenge is to determine how people decide to seek or avoid information. Here we propose a framework of information-seeking that aims to integrate the diverse motives that drive information-seeking and its avoidance. Our framework rests on the idea that information can alter people’s action, affect and cognition in both positive and negative ways. The suggestion is that people assess these influences and integrate them into a calculation of the value of information that leads to information-seeking or avoidance. The theory offers a framework for characterizing and quantifying individual differences in information-seeking, which we hypothesize may also be diagnostic of mental health. We consider biases that can lead to both insufficient and excessive information-seeking. We also discuss how the framework can help government agencies to assess the welfare effects of mandatory information disclosure.

Conclusion

It is increasingly possible for people to obtain information that bears on their future prospects, in terms of health, finance and even romance. It is also increasingly possible for them to obtain information about the past, the present and the future, whether or not that information bears on their personal lives. In principle, people’s decisions about whether to seek or avoid information should depend on some integration of instrumental value, hedonic value and cognitive value. But various biases can lead to both insufficient and excessive information-seeking. Individual differences in information-seeking may reflect different levels of susceptibility to those biases, as well as varying emphasis on instrumental, hedonic and cognitive utility.  Such differences may also be diagnostic of mental health.

Whether positive or negative, the value of information bears directly on significant decisions of government agencies, which are often charged with calculating the welfare effects of mandatory disclosure and which have long struggled with that task. Our hope is that the integrative framework of information-seeking motives offered here will facilitate these goals and promote future research in this important domain.
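
The framework's core claim, that seeking or avoiding information depends on an integration of instrumental, hedonic, and cognitive value, can be pictured as a weighted sum. The toy sketch below is only an illustration under that assumption, not a model specified by the authors; the weights, function names, and example values are invented.

```python
# Toy sketch of the integrative "value of information" idea described above.
# The weighted-sum form, the weights, and the example values are assumptions,
# not a model specified by the authors.

def information_value(instrumental, hedonic, cognitive, weights=(1.0, 1.0, 1.0)):
    """Integrate the three motives; each input can be positive or negative."""
    w_i, w_h, w_c = weights
    return w_i * instrumental + w_h * hedonic + w_c * cognitive

def decide(instrumental, hedonic, cognitive):
    """Seek information when its integrated value is positive, avoid it otherwise."""
    return "seek" if information_value(instrumental, hedonic, cognitive) > 0 else "avoid"

# Example: a test result that would change one's actions (high instrumental value)
# but is likely to be upsetting (negative hedonic value).
print(decide(instrumental=0.8, hedonic=-0.5, cognitive=0.2))  # -> "seek"
```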

Tuesday, December 1, 2020

Using Machine Learning to Generate Novel Hypotheses: Increasing Optimism About COVID-19 Makes People Less Willing to Justify Unethical Behaviors

Sheetal A, Feng Z, Savani K. 
Psychological Science. 2020;31(10):1222-1235.
doi:10.1177/0956797620959594

Abstract

How can we nudge people to not engage in unethical behaviors, such as hoarding and violating social-distancing guidelines, during the COVID-19 pandemic? Because past research on antecedents of unethical behavior has not provided a clear answer, we turned to machine learning to generate novel hypotheses. We trained a deep-learning model to predict whether or not World Values Survey respondents perceived unethical behaviors as justifiable, on the basis of their responses to 708 other items. The model identified optimism about the future of humanity as one of the top predictors of unethicality. A preregistered correlational study (N = 218 U.S. residents) conceptually replicated this finding. A preregistered experiment (N = 294 U.S. residents) provided causal support: Participants who read a scenario conveying optimism about the COVID-19 pandemic were less willing to justify hoarding and violating social-distancing guidelines than participants who read a scenario conveying pessimism. The findings suggest that optimism can help reduce unethicality, and they document the utility of machine-learning methods for generating novel hypotheses.

Here is how the research article begins:

Unethical behaviors can have substantial consequences in times of crisis. For example, in the midst of the COVID-19 pandemic, many people hoarded face masks and hand sanitizers; this hoarding deprived those who needed protective supplies most (e.g., medical workers and the elderly) of them and, therefore, put them at risk. Despite escalating deaths, more than 50,000 people were caught violating quarantine orders in Italy, putting themselves and others at risk. Some governments covered up the scale of the pandemic, thereby allowing the infection to spread in an uncontrolled manner. Thus, understanding antecedents of unethical behavior and identifying nudges to reduce unethical behaviors are particularly important in times of crisis.

Here is part of the Discussion

We formulated a novel hypothesis—that optimism reduces unethicality—on the basis of the deep-learning model’s finding that whether people think that the future of humanity is bleak or bright is a strong predictor of unethicality. This variable was not flagged as a top predictor either by the correlational analysis or by the lasso regression. Consistent with this idea, the results of a correlational study showed that people higher on dispositional optimism were less willing to engage in unethical behaviors. A follow-up experiment found that increasing participants’ optimism about the COVID-19 epidemic reduced the extent to which they justified unethical behaviors related to the epidemic. The behavioral studies were conducted with U.S. American participants; thus, the cultural generalizability of the present findings is unclear. Future research needs to test whether optimism reduces unethical behavior in other cultural contexts.
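
For readers unfamiliar with this kind of hypothesis-generation workflow, the core move is to train a predictive model on many survey items and then rank the items by how much each contributes to out-of-sample prediction. The sketch below is a minimal, hypothetical illustration using a gradient-boosting classifier and permutation importance from scikit-learn, not the authors' deep-learning pipeline; the data are randomly generated and all variable names are assumptions.

```python
# Minimal sketch of hypothesis generation via feature importance.
# This is NOT the authors' pipeline (they trained a deep neural network on
# 708 World Values Survey items); it only illustrates ranking predictors to
# surface candidate hypotheses. The data here are randomly generated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_respondents, n_items = 1000, 50          # stand-ins for the survey's scale
X = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)
# Hypothetical target: whether a respondent rates unethical behaviors as justifiable.
y = (X[:, 3] + rng.normal(0, 1, n_respondents) > 3.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Rank items by how much shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_items = np.argsort(result.importances_mean)[::-1][:5]
print("Candidate predictor items (column indices):", top_items)
```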

Wednesday, August 26, 2020

Morality justifies motivated reasoning in the folk ethics of belief

Cusimano, C., & Lombrozo, T. (2020, July 20).
https://doi.org/10.31234/osf.io/7r5yb

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one’s friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral evaluations affect how people prescribe beliefs to others. First, the moral value of a belief affects the evidential threshold required to believe, such that morally good beliefs demand less evidence than morally bad beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and so sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the Discussion

Additionally, participants reported that moral concerns affected the standards of evidence that apply to belief, such that morally-desirable beliefs require less evidence than morally-undesirable beliefs. In Study 1, participants reported that, relative to an impartial observer with the same information, someone with a moral reason to be optimistic had a wider range of beliefs that could be considered “consistent with” and “based on” the evidence. Critically, however, the broader range of beliefs that were consistent with the same evidence were only beliefs that were more morally desirable; morally undesirable beliefs were not more consistent with the evidence. In Studies 2 and 3, participants agreed more strongly that someone who had a moral reason to adopt a desirable belief had sufficient evidence to do so compared to someone who lacked a moral reason, even though they formed the same belief on the basis of the same evidence. Likewise, on average, when someone adopted the morally undesirable belief, participants more often judged them as having insufficient evidence for doing so, relative to someone who lacked a moral reason (again, even though they formed the same belief on the basis of the same evidence). Finally, in Study 2 (though not in Study 3), these judgments replicated using an indirect measure of evidentiary quality; namely, attributions of knowledge. In sum, these findings document that one reason people may prescribe a motivated belief to someone is because morality changes how much evidence they consider to be required to hold the belief in an evidentially-sound way.

Editor's Note: Huge implications for psychotherapy.

Thursday, July 16, 2020

Cognitive Bias and Public Health Policy During the COVID-19 Pandemic

Halpern SD, Truog RD, and Miller FG.
JAMA. 
Published online June 29, 2020.
doi:10.1001/jama.2020.11623

Here is an excerpt:

These cognitive errors, which distract leaders from optimal policy making and citizens from taking steps to promote their own and others’ interests, cannot merely be ascribed to repudiations of science. Rather, these biases are pervasive and may have been evolutionarily selected. Even at academic medical centers, where a premium is placed on having science guide policy, COVID-19 action plans prioritized expanding critical care capacity at the outset, and many clinicians treated seriously ill patients with drugs with little evidence of effectiveness, often before these institutions and clinicians enacted strategies to prevent spread of disease.

Identifiable Lives and Optimism Bias

The first error that thwarts effective policy making during crises stems from what economists have called the “identifiable victim effect.” Humans respond more aggressively to threats to identifiable lives, ie, those that an individual can easily imagine being their own or belonging to people they care about (such as family members) or care for (such as a clinician’s patients) than to the hidden, “statistical” deaths reported in accounts of the population-level tolls of the crisis. Similarly, psychologists have described efforts to rescue endangered lives as an inviolable goal, such that immediate efforts to save visible lives cannot be abandoned even if more lives would be saved through alternative responses.

Some may view the focus on saving immediately threatened lives as rational because doing so entails less uncertainty than policies designed to save invisible lives that are not yet imminently threatened. Individuals who harbor such instincts may feel vindicated knowing that during the present pandemic, few if any patients in the US who could have benefited from a ventilator were denied one.

Yet such views represent a second reason for the broad endorsement of policies that prioritize saving visible, immediately jeopardized lives: that humans are imbued with a strong and neurally mediated3 tendency to predict outcomes that are systematically more optimistic than observed outcomes. Early pandemic prediction models provided best-case, worst-case, and most-likely estimates, fully depicting the intrinsic uncertainty.4 Sound policy would have attempted to minimize mortality by doing everything possible to prevent the worst case, but human optimism bias led many to act as if the best case was in fact the most likely.


Sunday, February 16, 2020

Fast optimism, slow realism? Causal evidence for a two-step model of future thinking

Hallgeir Sjåstad and Roy F. Baumeister
PsyArXiv
Originally posted 6 Jan 20

Abstract

Future optimism is a widespread phenomenon, often attributed to the psychology of intuition. However, causal evidence for this explanation is lacking, and sometimes cautious realism is found. One resolution is that thoughts about the future have two steps: A first step imagining the desired outcome, and then a sobering reflection on how to get there. Four pre-registered experiments supported this two-step model, showing that fast predictions are more optimistic than slow predictions. The total sample consisted of 2,116 participants from USA and Norway, providing 9,036 predictions. In Study 1, participants in the fast-response condition thought positive events were more likely to happen and that negative events were less likely, as compared to participants in the slow-response condition. Although the predictions were optimistically biased in both conditions, future optimism was significantly stronger among fast responders. Participants in the fast-response condition also relied more on intuitive heuristics (CRT). Studies 2 and 3 focused on future health problems (e.g., getting a heart attack or diabetes), in which participants in the fast-response condition thought they were at lower risk. Study 4 provided a direct replication, with the additional finding that fast predictions were more optimistic only for the self (vs. the average person). The results suggest that when people think about their personal future, the first response is optimistic, which only later may be followed by a second step of reflective realism. Current health, income, trait optimism, perceived control and happiness were negatively correlated with health-risk predictions, but did not moderate the fast-optimism effect.

From the Discussion section:

Four studies found that people made more optimistic predictions when they relied on fast intuition rather than slow reflection. Apparently, a delay of 15 seconds is sufficient to enable second thoughts and a drop in future optimism. The slower responses were still "unrealistically optimistic" (Weinstein, 1980; Shepperd et al., 2013), but to a much lesser extent than the fast responses. We found this fast-optimism effect in relative comparisons to the average person and in isolated judgments of one's own likelihood, in two different languages across two different countries, and in one direct replication. All four experiments were pre-registered, and the total sample consisted of about 2,000 participants making more than 9,000 predictions.

Thursday, March 1, 2018

Concern for Others Leads to Vicarious Optimism

Andreas Kappes, Nadira S. Faber, Guy Kahane, Julian Savulescu, Molly J. Crockett
Psychological Science 
First Published January 30, 2018

Abstract

An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

From the Discussion section

Optimism is a self-centered phenomenon in which people underestimate the likelihood of negative future events for themselves compared with others (Weinstein, 1980). Usually, the “other” is defined as a group of average others—an anonymous mass. When past studies asked participants to estimate the likelihood of an event happening to either themselves or the average population, participants did not show a learning bias for the average population (Garrett & Sharot, 2014). These findings are unsurprising given that people typically feel little concern for anonymous groups or anonymous individual strangers (Kogut & Ritov, 2005; Loewenstein et al., 2005). Yet people do care about identifiable others, and we accordingly found that people exhibit an optimistic learning bias for identifiable strangers and, even more markedly, for friends. Our research thereby suggests that optimism in learning is not restricted to oneself. We see not only our own lives through rose-tinted glasses but also the lives of those we care about.
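
The learning bias discussed here is typically quantified by comparing how much estimates shift after better-than-expected versus worse-than-expected feedback. The sketch below illustrates that comparison with invented trial data; it is not the authors' analysis code, and the numbers and variable names are assumptions.

```python
# Illustrative sketch of the belief-update asymmetry behind an "optimistic
# learning bias" (invented trial data; not the authors' analysis code).
# Each trial: a first likelihood estimate for an unpleasant event, the base
# rate shown as feedback, and a second estimate made after the feedback.
trials = [
    # (first_estimate, base_rate, second_estimate), all in percent
    (40, 20, 28),  # good news: the event is less likely than feared -> large update
    (30, 10, 18),  # good news
    (20, 45, 24),  # bad news: the event is more likely than hoped -> small update
    (25, 50, 29),  # bad news
]

good_updates, bad_updates = [], []
for first, base_rate, second in trials:
    update = abs(second - first)
    if base_rate < first:        # better-than-expected feedback
        good_updates.append(update)
    else:                        # worse-than-expected feedback
        bad_updates.append(update)

print(f"mean update after good news: {sum(good_updates) / len(good_updates):.1f}")
print(f"mean update after bad news:  {sum(bad_updates) / len(bad_updates):.1f}")
# An optimistic bias shows up as larger updates after good news; "vicarious
# optimism" is the same asymmetry when the events concern a friend or stranger.
```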


Saturday, December 13, 2014

If Everything Is Getting Better, Why Do We Remain So Pessimistic?

By the Cato Institute

Featuring Steven Pinker, Johnstone Family Professor of Psychology, Harvard University; with comments by Brink Lindsey, Vice President for Research, Cato Institute; and Charles Kenny, Senior Fellow, Center for Global Development

Originally posted November 19, 2014

Evidence from academic institutions and international organizations shows dramatic improvements in human well-being. These improvements are especially striking in the developing world. Unfortunately, there is often a wide gap between reality and public perceptions, including that of many policymakers, scholars in unrelated fields, and intelligent lay persons. To make matters worse, the media emphasizes bad news, while ignoring many positive long-term trends. Please join us for a discussion of psychological, physiological, cultural, and other social reasons for the persistence of pessimism in the age of growing abundance.

The video and audio can be seen or downloaded here.

Editor's note: This video is important for psychologists because it illustrates cultural trends and beliefs that may be perpetuated by media hype. The panel also highlights cognitive distortions, well-being, and positive macro trends. If you can, watch the first presenter, Dr. Steven Pinker. If nothing else, you may feel a little better after watching the video.