Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Probability.

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology, 108, 104499.

Abstract

A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on one single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.
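To make the threshold idea concrete, here is a minimal sketch in Python. The function, the ratio rule, and the example thresholds are illustrative assumptions rather than the authors' model; the point is only that a single rule with different personal thresholds yields both "consequentialist"-looking and "deontological"-looking answers, and that lowering the ratio flips some simulated participants from action to inaction.

```python
# A minimal sketch of a threshold-deontology decision rule, as described
# above. The function and the example thresholds are illustrative
# assumptions, not the authors' model: each simulated participant endorses
# the harmful action only when the saved:lost ratio clears a personal threshold.

def endorses_action(lives_saved: int, lives_lost: int, threshold: float) -> bool:
    """Endorse killing the few only if the saved:lost ratio exceeds the threshold."""
    if lives_lost == 0:
        return True  # no one is harmed, so the deontological constraint is idle
    return lives_saved / lives_lost > threshold

# Participants with different thresholds classify the same dilemma differently:
for tau in (1.5, 3.0, 10.0):  # hypothetical per-participant thresholds
    print(f"threshold {tau:>4}: 5-for-1 -> {endorses_action(5, 1, tau)}, "
          f"2-for-1 -> {endorses_action(2, 1, tau)}")
```

At a 5:1 ratio the first two simulated participants look consequentialist; drop the ratio to 2:1 and only the first still endorses action, mirroring the shift the authors report.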


My summary:

This research suggests that people are not simply weighing the number of lives saved against the number of lives lost; they are sensitive to the ratio of lives saved to lives lost and to the probability of each outcome occurring. That pattern fits threshold deontology: a general constraint against killing that is overridden once the consequences of inaction are bad enough. These findings matter for our understanding of moral decision-making and for the design of moral education programs.

Thursday, October 29, 2020

Probabilistic Biases Meet the Bayesian Brain.

Chater N, et al.
Current Directions in Psychological Science. 
2020;29(5):506-512. 
doi:10.1177/0963721420954801

Abstract

In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
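The sampling hypothesis can be made concrete with a toy simulation. The sketch below is my illustration, not the authors' implementation: it approximates a judgment by averaging a short Metropolis-style chain of samples. A chain started at an "anchor" and stopped early has not mixed, so its average stays biased toward the anchor, which is one sampling-based account of anchoring and adjustment. The belief distribution, chain length, and step size are all assumed for illustration.

```python
import math
import random

# A minimal sketch (not the authors' implementation) of judgment by
# sampling: average a short Metropolis-style chain of mental samples.
# A chain started at an "anchor" and stopped early has not mixed, so its
# average stays biased toward the anchor.

def target_density(x: float) -> float:
    """Hypothetical belief distribution: normal with mean 100, sd 15."""
    return math.exp(-0.5 * ((x - 100.0) / 15.0) ** 2)

def short_chain_estimate(anchor: float, n_steps: int, step_size: float = 5.0) -> float:
    """Average of a Metropolis chain of length n_steps started at `anchor`."""
    x, total = anchor, 0.0
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_size)
        if random.random() < target_density(proposal) / target_density(x):
            x = proposal  # accept the proposed sample
        total += x
    return total / n_steps

random.seed(0)
for anchor in (20.0, 180.0):  # low vs. high anchor
    print(f"anchor {anchor:5.0f}: 5 samples -> {short_chain_estimate(anchor, 5):6.1f}, "
          f"5000 samples -> {short_chain_estimate(anchor, 5000):6.1f}")
```

With only a handful of samples the estimates sit near 20 and 180 respectively; with thousands of samples both converge toward 100. The bias is not in the probabilistic machinery but in stopping early.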

Introduction

Human probabilistic reasoning gets bad press. Decades of brilliant experiments, most notably by Daniel Kahneman and Amos Tversky (e.g., Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982), have shown a plethora of ways in which people get into a terrible muddle when wondering how probable things are. Every psychologist has learned about anchoring, conservatism, the representativeness heuristic, and many other ways that people reveal their probabilistic incompetence. Creating probability theory in the first place was incredibly challenging, exercising great mathematical minds over several centuries (Hacking, 1990). Probabilistic reasoning is hard, and perhaps it should not be surprising that people often do it badly. This view is the starting point for the whole field of judgment and decision-making (JDM) and its cousin, behavioral economics.

Oddly, though, human probabilistic reasoning equally often gets good press. Indeed, many psychologists, neuroscientists, and artificial-intelligence researchers believe that probabilistic reasoning is, in fact, the secret of human intelligence.

Saturday, January 20, 2018

Exploiting Risk–Reward Structures in Decision Making under Uncertainty

Christina Leuker, Thorsten Pachur, Ralph Hertwig, & Timothy Pleskac
PsyArXiv Preprints
Posted December 21, 2017

Abstract

People often have to make decisions under uncertainty — that is, in situations where the probabilities of obtaining a reward are unknown or at least difficult to ascertain. Because outside the laboratory payoffs and probabilities are often correlated, one solution to this problem might be to infer the probability from the magnitude of the potential reward. Here, we investigated how the mind may implement such a solution: (1) Do people learn about risk–reward relationships from the environment—and if so, how? (2) How do learned risk–reward relationships impact preferences in decision-making under uncertainty? Across three studies (N = 352), we found that participants learned risk–reward relationships after being exposed to choice environments with a negative, positive, or uncorrelated risk–reward relationship. They learned the associations both from gambles with explicitly stated payoffs and probabilities (Experiments 1 & 2) and from gambles about epistemic events (Experiment 3). In subsequent decisions under uncertainty, participants exploited the learned association by inferring probabilities from the magnitudes of the payoffs. This inference systematically influenced their preferences under uncertainty: Participants who learned a negative risk–reward relationship preferred the uncertain option over a smaller sure option for low payoffs, but not for high payoffs. This pattern reversed in the positive condition and disappeared in the uncorrelated condition. This adaptive change in preferences is consistent with the use of the risk–reward heuristic.

From the Discussion Section:

Risks and rewards are the pillars of preference. This makes decision making under uncertainty a vexing problem as one of those pillars—the risks, or probabilities—is missing (Knight, 1921; Luce & Raiffa, 1957). People are commonly thought to deal with this problem by intuiting subjective probabilities from their knowledge and memory (Fox & Tversky, 1998; Tversky & Fox, 1995) or by estimating statistical probabilities from samples of information (Hertwig & Erev, 2009). Our results support another ecologically grounded solution, namely, that people estimate the missing probabilities from their immediate choice environments via their learned risk–reward relationships.
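A toy version of the heuristic makes the preference reversal in the negative condition easy to see. In the sketch below, the linear learned mapping from payoff to probability and the "sure option equals half the payoff" rule are illustrative assumptions, not the paper's design.

```python
# A minimal sketch of the risk-reward heuristic discussed above. The linear
# learned mapping and the "sure option = half the payoff" rule are
# illustrative assumptions, not the paper's exact design.

def inferred_probability(payoff: float) -> float:
    """Probability inferred from payoff after learning a negative
    risk-reward relationship (here: a hypothetical linear decline)."""
    return max(0.0, 1.0 - payoff / 20.0)

def prefers_gamble(payoff: float) -> bool:
    """Take the gamble if its inferred expected value beats a smaller sure option."""
    sure_amount = payoff / 2  # illustrative smaller sure option
    return inferred_probability(payoff) * payoff > sure_amount

for payoff in (4, 8, 12, 16):
    p = inferred_probability(payoff)
    print(f"payoff {payoff:>2}: inferred p = {p:.2f} -> "
          f"{'gamble' if prefers_gamble(payoff) else 'sure thing'}")
```

The simulated chooser takes the gamble at payoffs of 4 and 8 but switches to the sure thing at 12 and 16: as payoffs grow, the learned negative relationship drives the inferred probability, and hence the gamble's inferred expected value, down. This is the qualitative pattern the authors report for the negative condition.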

The research is here.

Monday, January 1, 2018

What I Was Wrong About This Year

David Leonhardt
The New York Times
Originally posted December 24, 2017

Here is an excerpt:

But I’ve come to realize that I was wrong about a major aspect of probabilities.

They are inherently hard to grasp. That’s especially true for an individual event, like a war or election. People understand that if they roll a die 100 times, they will get some 1’s. But when they see a probability for one event, they tend to think: Is this going to happen or not?

They then effectively round to 0 or to 100 percent. That’s what the Israeli official did. It’s also what many Americans did when they heard Hillary Clinton had a 72 percent or 85 percent chance of winning. It’s what football fans did in the Super Bowl when the Atlanta Falcons had a 99 percent chance of victory.

And when the unlikely happens, people scream: The probabilities were wrong!

Usually, they were not wrong. The screamers were wrong.
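The arithmetic behind Leonhardt's point is easy to check by simulation: a 72 percent forecast that fails is not thereby shown to be wrong.

```python
import random

# A quick illustration of the point above: an event forecast at 72% should
# fail about 28% of the time, so a single upset is evidence of nothing.
random.seed(42)
trials = 100_000
losses = sum(random.random() >= 0.72 for _ in range(trials))
print(f"A 72% favorite lost {losses / trials:.1%} of {trials:,} simulated runs")
```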

The article is here.

Friday, March 3, 2017

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Nautilus
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.

(cut)

Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.

Friday, November 18, 2016

Bayesian Brains without Probabilities

Adam N. Sanborn & Nick Chater
Trends in Cognitive Sciences
Published Online: October 26, 2016

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
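One of these errors is easy to reproduce in a toy simulation. The sketch below is an illustration of the finite-sampling idea, not the authors' model: it estimates a marginal probability and a conjunction from separate small batches of samples, and noise alone then ranks the conjunction above its own conjunct on a nontrivial fraction of judgments.

```python
import random

# A minimal sketch (not the authors' model) of how finite sampling can
# produce a conjunction fallacy: if P(A) and P(A-and-B) are each estimated
# from their own small batch of mental samples, noise alone will sometimes
# rank the conjunction as MORE probable than its own conjunct.

random.seed(1)
P_A, P_A_AND_B, N = 0.4, 0.2, 10  # illustrative probabilities, 10 samples each

def estimate(p: float, n: int) -> float:
    """Estimate p from n independent samples."""
    return sum(random.random() < p for _ in range(n)) / n

fallacies = sum(
    estimate(P_A_AND_B, N) > estimate(P_A, N)  # conjunction judged more likely
    for _ in range(100_000)
)
print(f"Conjunction ranked above its conjunct in {fallacies / 100_000:.1%} of judgments")
```

An infinite sampler would never make this mistake; a ten-sample one makes it regularly, which is the paper's central twist on the Bayesian-brain story.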

The article is here.

Monday, July 18, 2016

How Language ‘Framing’ Influences Decision-Making

Observations
Association for Psychological Science
Published in 2016

The way information is presented, or “framed,” when people are confronted with a situation can influence decision-making. To study framing, researchers often use the “Asian Disease Problem,” in which people face an imaginary outbreak of an exotic disease and must choose how to address it. When the problem is framed in terms of lives saved (or “gains”), people are given the choice of selecting:
Medicine A, where 200 out of 600 people will be saved
or
Medicine B, where there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.
When the problem is framed in terms of lives lost (or “losses”), people are given the option of selecting:
Medicine A, where 400 out of 600 people will die
or
Medicine B, where there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Although in both problems Medicine A and Medicine B lead to the same outcomes, people are more likely to choose Medicine A when the problem is presented in terms of gains and to choose Medicine B when the problem is presented in terms of losses. This difference occurs because people tend to be risk averse when the problem is presented in terms of gains, but risk tolerant when it is presented in terms of losses.
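A quick expected-value check confirms that the two frames describe identical options.

```python
# The two frames describe identical options; a quick check of the expected
# number of survivors out of 600 makes the equivalence explicit.

TOTAL = 600

# Gain frame
medicine_a_gain = 200                      # 200 saved for certain
medicine_b_gain = (1/3) * 600 + (2/3) * 0  # 1/3 chance all 600 saved

# Loss frame (counted as survivors: 600 minus deaths)
medicine_a_loss = TOTAL - 400              # 400 die for certain
medicine_b_loss = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)

print(medicine_a_gain, medicine_b_gain)  # 200 200.0
print(medicine_a_loss, medicine_b_loss)  # 200 200.0
```

Every option leaves an expected 200 survivors; only the description changes, which is why the shift in choices is attributed to framing rather than to the outcomes themselves.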

The article is here.

Friday, October 2, 2015

What Is Quantum Cognition, and How Is It Applied to Psychology?

By Jerome Busemeyer and Zheng Wang
Current Directions in Psychological Science 
June 2015, Vol. 24, No. 3, pp. 163-169

Abstract

Quantum cognition is a new research program that uses mathematical principles from quantum theory as a framework to explain human cognition, including judgment and decision making, concepts, reasoning, memory, and perception. This research is not concerned with whether the brain is a quantum computer. Instead, it uses quantum theory as a fresh conceptual framework and a coherent set of formal tools for explaining puzzling empirical findings in psychology. In this introduction, we focus on two quantum principles as examples to show why quantum cognition is an appealing new theoretical direction for psychology: complementarity, which suggests that some psychological measures have to be made sequentially and that the context generated by the first measure can influence responses to the next one, producing measurement order effects, and superposition, which suggests that some psychological states cannot be defined with respect to definite values but, instead, that all possible values within the superposition have some potential for being expressed. We present evidence showing how these two principles work together to provide a coherent explanation for many divergent and puzzling phenomena in psychology.
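The complementarity idea, that answering one question changes the state in which the next is answered, can be illustrated with two non-commuting projectors. The state, the projectors, and the rotation angle below are illustrative choices, not taken from the article.

```python
import numpy as np

# A minimal sketch of the complementarity idea: when two "question"
# projectors do not commute, the probability of answering yes to both
# depends on the order in which the questions are asked. The state and
# projectors here are illustrative, not from the article.

psi = np.array([1.0, 0.0])                   # initial belief state

P_a = np.array([[1.0, 0.0],                  # projector for "yes" to question A
                [0.0, 0.0]])

theta = np.pi / 4                            # question B lives in a rotated basis
b = np.array([np.cos(theta), np.sin(theta)])
P_b = np.outer(b, b)                         # projector for "yes" to question B

def prob_yes_then_yes(first, second, state):
    """P(yes to `first`, then yes to `second`) = ||second @ first @ state||^2."""
    return np.linalg.norm(second @ (first @ state)) ** 2

print(f"A then B: {prob_yes_then_yes(P_a, P_b, psi):.3f}")  # 0.500
print(f"B then A: {prob_yes_then_yes(P_b, P_a, psi):.3f}")  # 0.250
```

Because the two projectors do not commute, asking A first then B yields a different joint "yes" probability than the reverse order, which is the formal signature of the question-order effects the authors discuss.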

The entire article is here.

Friday, September 25, 2015

The Effect of Probability Anchors on Moral Decision Making

By Chris Brand and Mike Oaksford

Abstract

The role of probabilistic reasoning in moral decision making has seen relatively little research, despite having potentially profound consequences for our models of moral cognition. To rectify this, two experiments were undertaken in which participants were presented with moral dilemmas alongside additional information designed to anchor judgements about how likely the dilemmas' outcomes were. These anchoring values significantly altered how permissible the dilemmas were judged to be, whether the anchors were presented explicitly or implicitly. This was the case even for dilemmas typically seen as eliciting deontological judgements. Implications of this finding for cognitive models of moral decision making are discussed.

The research is here.