Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Uncertainty.

Monday, July 18, 2022

The One That Got Away: Overestimation of Forgone Alternatives as a Hidden Source of Regret

Feiler, D., & Müller-Trede, J. (2022).
Psychological Science, 33(2), 314–324.
https://doi.org/10.1177/09567976211032657

Abstract

Past research has established that observing the outcomes of forgone alternatives is an important driver of regret. In this research, we predicted and empirically corroborated a seemingly opposite result: Participants in our studies were more likely to experience regret when they did not observe a forgone outcome than when it was revealed. Our prediction drew on two theoretical observations. First, feelings of regret frequently stem from comparing a chosen option with one’s belief about what the forgone alternative would have been. Second, when there are many alternatives to choose from under uncertainty, the perceived attractiveness of the almost-chosen alternative tends to exceed its reality. In four preregistered studies (Ns = 800, 599, 150, and 197 adults), we found that participants predictably overestimated the forgone path, and this overestimation caused undue regret. We discuss the psychological implications of this hidden source of regret and reconcile the ostensible contradiction with past research.

Statement of Relevance

Reflecting on our past decisions can often make us feel regret. Previous research suggests that feelings of regret stem from comparing the outcome of our chosen path with that of the unchosen path. We present a seemingly contradictory finding: Participants in our studies were more likely to experience regret when they did not observe the forgone outcome than when they saw it. This effect arises because when there are many paths to choose from, and uncertainty exists about how good each would be, people tend to overestimate the almost-chosen path. An idealized view of the path not taken then becomes an unfair standard of comparison for the chosen path, which inflates feelings of regret. Excessive regret has been found to be associated with depression and anxiety, and our work suggests a hidden source of undue regret—overestimation of forgone paths—that may contribute to these problems.

The ending...

Finally, is overestimating the paths we do not take causing us too much regret? Although regret can have benefits for experiential learning, it is an inherently negative emotion and has been found to be associated with depression and excessive anxiety (Kocovski et al., 2005; Markman & Miller, 2006; Roese et al., 2009). Because the regret in our studies was driven by biased beliefs, it may be excessive—after all, better-calibrated beliefs about forgone alternatives would cause less regret. Whether calibrating beliefs about forgone alternatives could also help in alleviating regret’s harmful psychological consequences is an important question for future research.


Important implications for psychotherapy....

Saturday, May 22, 2021

A normative account of self-deception, overconfidence, and paranoia

Rossi-Goldthorpe, R., Leong, et al. (2021, April 12).
PsyArXiv preprint.
https://doi.org/10.31234/osf.io/9fkb5

Abstract

Self-deception, paranoia, and overconfidence involve misbeliefs about self, others, and world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in normative Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The normative model suggests that spurious beliefs can have value – self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and costly wars.

Tuesday, August 25, 2020

Uncertainty about the impact of social decisions increases prosocial behaviour

Kappes, A., Nussberger, A. M., et al.
Nature Human Behaviour, 2(8), 573–580.
https://doi.org/10.1038/s41562-018-0372-x

Abstract

Uncertainty about how our choices will affect others infuses social life. Past research suggests uncertainty has a negative effect on prosocial behavior by enabling people to adopt self-serving narratives about their actions. We show that uncertainty does not always promote selfishness. We introduce a distinction between two types of uncertainty that have opposite effects on prosocial behavior. Previous work focused on outcome uncertainty: uncertainty about whether or not a decision will lead to a particular outcome. But as soon as people’s decisions might have negative consequences for others, there is also impact uncertainty: uncertainty about how badly others’ well-being will be impacted by the negative outcome. Consistent with past research, we found decreased prosocial behavior under outcome uncertainty. In contrast, prosocial behavior was increased under impact uncertainty in incentivized economic decisions and hypothetical decisions about infectious disease threats. Perceptions of social norms paralleled the behavioral effects. The effect of impact uncertainty on prosocial behavior did not depend on the individuation of others or the mere mention of harm, and was stronger when impact uncertainty was made more salient. Our findings offer insights into communicating uncertainty, especially in contexts where prosocial behavior is paramount, such as responding to infectious disease threats.

From the Summary

To summarize, we show that uncertainty does not always decrease prosocial behavior; instead, the type of uncertainty matters. Replicating previous findings, we found that outcome uncertainty – uncertainty about the outcomes of decisions – made people behave more selfishly. However, impact uncertainty – uncertainty about how an outcome will impact another person’s well-being – increased prosocial behavior in both economic and health domains. Examining the effect of impact uncertainty more closely, we show that simply mentioning negative outcomes, or inducing uncertainty about aspects of the other person unrelated to the negative outcome, is not sufficient to increase prosociality; rather, uncertainty relating to the impact of negative outcomes on others is needed. Finally, we show that impact uncertainty is only effective when it is salient, thereby potentially overcoming people’s reluctance to contemplate the harm they might cause.

Friday, June 19, 2020

Better Minds, Better Morals: A Procedural Guide to Better Judgment

Schaefer GO, Savulescu J.
J Posthum Stud. 2017;1(1):26‐43.
doi:10.5325/jpoststud.1.1.0026

Abstract

Making more moral decisions - an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we're employing are sound and reliable. We identify four broad categories of judgment to be targeted - cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A pdf is here.

Friday, February 14, 2020

Judgment and Decision Making

Baruch Fischhoff and Stephen B. Broomell
Annual Review of Psychology 
2020 71:1, 331-355

Abstract

The science of judgment and decision making involves three interrelated forms of research: analysis of the decisions people face, description of their natural responses, and interventions meant to help them do better. After briefly introducing the field's intellectual foundations, we review recent basic research into the three core elements of decision making: judgment, or how people predict the outcomes that will follow possible choices; preference, or how people weigh those outcomes; and choice, or how people combine judgments and preferences to reach a decision. We then review research into two potential sources of behavioral heterogeneity: individual differences in decision-making competence and developmental changes across the life span. Next, we illustrate applications intended to improve individual and organizational decision making in health, public policy, intelligence analysis, and risk management. We emphasize the potential value of coupling analytical and behavioral research and having basic and applied research inform one another.

The paper can be downloaded here.

Saturday, September 1, 2018

Why Ethical People Become Unethical Negotiators

Dina Gerdeman
Forbes.com
Originally posted July 31, 2018

Here is an excerpt:

With profit and greed driving the desire to deceive, it’s not surprising that negotiators often act unethically. But it’s too simplistic to think people always enter a negotiation looking to dupe the other side.

Sometimes negotiators stretch the truth unintentionally, falling prey to what Bazerman and his colleagues call “bounded ethicality” by engaging in unethical behavior that contradicts their values without knowing it.

Why does this happen? In the heat of negotiations, “ethical fading” comes into play, and people are unable to see the ethical implications of their actions because their desire to win gets in the way. The end result is deception.

In business, with dollars at stake, many people will interpret situations in ways that naturally favor them. Take Bazerman’s former dentist, who always seemed too quick to drill. “He was overtreating my mouth, and it didn’t make sense,” he says.

In service professions, he explains, people often have conflicts of interest. For instance, a surgeon may believe that surgery is the proper course of action, but her perception is biased: She has an incentive and makes money off the decision to operate. Another surgeon might just as easily come to the conclusion that if it’s not bothering you, don’t operate. “Lawyers are affected by how long a case takes to settle,” he adds.

The info is here.

Wednesday, July 25, 2018

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Saturday, January 20, 2018

Exploiting Risk–Reward Structures in Decision Making under Uncertainty

Christina Leuker, Thorsten Pachur, Ralph Hertwig, and Timothy Pleskac
PsyArXiv Preprints
Posted December 21, 2017

Abstract

People often have to make decisions under uncertainty — that is, in situations where the probabilities of obtaining a reward are unknown or at least difficult to ascertain. Because outside the laboratory payoffs and probabilities are often correlated, one solution to this problem might be to infer the probability from the magnitude of the potential reward. Here, we investigated how the mind may implement such a solution: (1) Do people learn about risk–reward relationships from the environment—and if so, how? (2) How do learned risk–reward relationships impact preferences in decision-making under uncertainty? Across three studies (N = 352), we found that participants learned risk–reward relationships after being exposed to choice environments with a negative, positive, or uncorrelated risk–reward relationship. They learned the associations both from gambles with explicitly stated payoffs and probabilities (Experiments 1 & 2) and from gambles about epistemic events (Experiment 3). In subsequent decisions under uncertainty, participants exploited the learned association by inferring probabilities from the magnitudes of the payoffs. This inference systematically influenced their preferences under uncertainty: Participants who learned a negative risk–reward relationship preferred the uncertain option over a smaller sure option for low payoffs, but not for high payoffs. This pattern reversed in the positive condition and disappeared in the uncorrelated condition. This adaptive change in preferences is consistent with the use of the risk–reward heuristic.

From the Discussion Section:

Risks and rewards are the pillars of preference. This makes decision making under uncertainty a vexing problem as one of those pillars—the risks, or probabilities—is missing (Knight, 1921; Luce & Raiffa, 1957). People are commonly thought to deal with this problem by intuiting subjective probabilities from their knowledge and memory (Fox & Tversky, 1998; Tversky & Fox, 1995) or by estimating statistical probabilities from samples of information (Hertwig & Erev, 2009). Our results support another ecologically grounded solution, namely, that people estimate the missing probabilities from their immediate choice environments via their learned risk–reward relationships.
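As a toy illustration (my own sketch, not code or parameters from the paper), a learned negative risk–reward relationship can be used to fill in a missing probability: assume the inferred win probability falls off linearly as the payoff grows, then compare the resulting expected value against a smaller sure option. The linear mapping, the slope, and the payoff values below are all illustrative assumptions.

```python
# Toy sketch of the risk-reward heuristic under a *negative*
# risk-reward relationship: larger payoffs are taken to signal
# smaller win probabilities. All numbers are illustrative.

def inferred_probability(payoff, slope=0.01):
    """Infer the missing win probability from payoff magnitude,
    assuming probability falls linearly as payoff grows."""
    return max(0.0, min(1.0, 1.0 - slope * payoff))

def prefers_uncertain(payoff, sure_amount):
    """Prefer the uncertain option when its inferred expected
    value beats the smaller sure amount."""
    return inferred_probability(payoff) * payoff > sure_amount

# Low payoff: high inferred probability, so the uncertain option
# looks better than a smaller sure amount (0.8 * 20 = 16 > 10).
print(prefers_uncertain(payoff=20.0, sure_amount=10.0))   # True
# High payoff: low inferred probability, so the sure option wins
# (0.2 * 80 = 16 < 40).
print(prefers_uncertain(payoff=80.0, sure_amount=40.0))   # False
```

This reproduces the qualitative pattern reported for the negative condition — preferring the uncertain option for low payoffs but not for high ones — under the stated assumptions.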

The research is here.

Wednesday, October 25, 2017

Cultivating Humility and Diagnostic Openness in Clinical Judgment

John R. Stone
AMA Journal of Ethics. October 2017, Volume 19, Number 10: 970-977.

Abstract

In this case, a physician rejects a patient’s concerns that tainted water is harming the patient and her community. Stereotypes and biases regarding socioeconomic class and race/ethnicity, constraining diagnostic frameworks, and fixed first impressions could skew the physician’s judgment. This paper narratively illustrates how cultivating humility could help the physician truly hear the patient’s suggestions. The discussion builds on the multifaceted concept of cultural humility as a lifelong journey that addresses not only stereotypes and biases but also power inequalities and community inequities. Insurgent multiculturalism is a complementary concept. Through epistemic humility—which includes both intellectual and emotional components—and admitting uncertainty, physicians can enhance patients’ and families’ epistemic authority and health agency.

The article is here.

Friday, February 17, 2017

Uncertainty Increases the Reliance on Affect in Decisions

Ali Faraji-Rad and Michel Tuan Pham
J Consum Res (2017) ucw073.
Published: 23 January 2017

Abstract

How do psychological states of uncertainty influence the way people make decisions? We propose that such states increase the reliance on affective inputs in judgments and decisions. In accord with this proposition, results from six studies show that the priming of uncertainty (vs. certainty) consistently increases the effects of a variety of affective inputs on consumers’ judgments and decisions. Primed uncertainty is shown to amplify the effects of the pleasantness of a musical soundtrack (study 1), the attractiveness of a picture (study 2), the appeal of affective attributes (studies 3 and 4), incidental mood states (study 6), and even incidental states of disgust (study 5). Moreover, both negative and positive uncertainty increase the influence of affect in decisions (study 4). The results additionally show that the increased reliance on affective inputs under uncertainty does not necessarily come at the expense of a reliance on descriptive attribute information (studies 2 and 5), and that the increased reliance on affect under uncertainty is distinct from a general reliance on heuristic or peripheral cues (study 6). The phenomenon may be due to uncertainty threatening the self, thereby encouraging a reliance on inputs that are closer to the self and have high subjective validity.

The article is here.