Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Framing Effects.

Sunday, March 27, 2022

Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing

Dorison, C. A., & Heller, B. H. (2022).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

A large interdisciplinary body of research on human judgment and decision making documents systematic deviations between prescriptive decision models (i.e., how individuals should behave) and descriptive decision models (i.e., how individuals actually behave). One canonical example is the loss–gain framing effect on risk preferences: the robust tendency for risk preferences to shift depending on whether outcomes are described as losses or gains. Traditionally, researchers argue that decision makers should always be immune to loss–gain framing effects. We present three preregistered experiments (N = 1,954) that qualify this prescription. We predict and find that while third-party observers penalize decision makers who make risk-averse (vs. risk-seeking) choices when choice outcomes are framed as losses, this result reverses when outcomes are framed as gains. This reversal holds across five social perceptions, three decision contexts, two sample populations of United States adults, and with financial stakes. This pattern is driven by the fact that observers themselves fall victim to framing effects and socially derogate (and financially punish) decision makers who disagree. Given that individuals often care deeply about their reputation, our results challenge the long-standing prescription that they should always be immune to framing effects. The results extend understanding not only for decision making under risk, but also for a range of behavioral tendencies long considered irrational biases. Such understanding may ultimately reveal not only why such biases are so persistent but also novel interventions: our results suggest a necessary focus on social and organizational norms.

From the General Discussion

But what makes an optimal belief or choice? Here, we argue that an expanded focus on the goals decision makers themselves hold (i.e., reputation management) questions whether such deviations from rational-agent models should always be considered suboptimal. We test this broader theorizing in the context of loss–gain framing effects on risk preferences not because we think the psychological dynamics at play are unique to this context, but rather because such framing effects have been uniquely influential for both academic discourse and applied interventions in policy and organizations. In fact, the results hold preliminary implications not only for decision making under risk, but also for extending understanding of a range of other behavioral tendencies long considered irrational biases in the research literature on judgment and decision making (e.g., sunk cost bias; see Dorison, Umphres, & Lerner, 2021).

An important clarification of our claims merits note. We are not claiming that it is always rational to be biased just because others are. For example, it would be quite odd to claim that someone is rational for believing that eating sand provides enough nutrients to survive, simply because others may like them for holding this belief or because others in their immediate social circle hold this belief. In this admittedly bizarre case, it would still be clearly irrational to attempt to subsist on sand, even if there are reputational advantages to doing so—that is, the costs substantially outweigh the reputational benefits. In fact, the vast majority of framing effect studies in the lab do not have an explicit reputational/strategic component at all. 

Tuesday, July 13, 2021

Valence framing effects on moral judgments: A meta-analysis

McDonald, K., et al.
Cognition
Volume 212, July 2021, 104703

Abstract

Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g. lives saved) or their negative outcomes (e.g. lives lost). When such framing effects occur in the domain of moral judgments, they have been taken to cast doubt on the reliability of moral judgments and raise questions about the extent to which these moral judgments are self-evident or justified in themselves. One important factor in this debate is the magnitude and variability of the extent to which differences in framing presentation impact moral judgments. Although moral framing effects have been studied by psychologists, the overall strength of these effects pooled across published studies is not yet known. Here we conducted a meta-analysis of 109 published articles (contributing a total of 146 unique experiments with 49,564 participants) involving valence framing effects on moral judgments and found a moderate effect (d = 0.50) among between-subjects designs as well as several moderator variables. While we find evidence for publication bias, statistically accounting for publication bias attenuates, but does not eliminate, this effect (d = 0.22). This suggests that the magnitude of valence framing effects on moral decisions is small, yet significant when accounting for publication bias.
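For readers unfamiliar with the effect sizes reported above: for a between-subjects design, Cohen's d is the difference between the two group means divided by their pooled standard deviation, with d ≈ 0.5 conventionally read as a moderate effect. A minimal sketch of the computation, using hypothetical ratings invented purely for illustration (not data from the meta-analysis):

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variances (n - 1 in the denominator).
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical 1-7 wrongness ratings under a loss vs. a gain frame:
loss_frame = [6, 5, 7, 6, 5, 6, 7, 5]
gain_frame = [5, 4, 6, 5, 4, 5, 5, 4]
print(round(cohens_d(loss_frame, gain_frame), 2))
```

A d of 0.50 thus means the framed groups' mean judgments differ by half a pooled standard deviation, which is why the authors describe the bias-corrected d = 0.22 as small yet significant.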

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 

Abstract

According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.

Conclusion

We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, however, we need not worry about just a few cases and a few human biases—and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Thursday, December 13, 2018

Does deciding among morally relevant options feel like making a choice? How morality constrains people’s sense of choice

Kouchaki, M., Smith, I. H., & Savani, K. (2018).
Journal of Personality and Social Psychology, 115(5), 788-804.
http://dx.doi.org/10.1037/pspa0000128

Abstract

We demonstrate that a difference exists between objectively having and psychologically perceiving multiple-choice options of a given decision, showing that morality serves as a constraint on people’s perceptions of choice. Across 8 studies (N = 2,217), using both experimental and correlational methods, we find that people deciding among options they view as moral in nature experience a lower sense of choice than people deciding among the same options but who do not view them as morally relevant. Moreover, this lower sense of choice is evident in people’s attentional patterns. When deciding among morally relevant options displayed on a computer screen, people devote less visual attention to the option that they ultimately reject, suggesting that when they perceive that there is a morally correct option, they are less likely to even consider immoral options as viable alternatives in their decision-making process. Furthermore, we find that experiencing a lower sense of choice because of moral considerations can have downstream behavioral consequences: after deciding among moral (but not nonmoral) options, people (in Western cultures) tend to choose more variety in an unrelated task, likely because choosing more variety helps them reassert their sense of choice. Taken together, our findings suggest that morality is an important factor that constrains people’s perceptions of choice, creating a disjunction between objectively having a choice and subjectively perceiving that one has a choice.


A choice may not feel like a choice when morality is at play

Susan Kelley
Cornell Chronicle
Originally posted November 15, 2018

Here is an excerpt:

People who viewed the issues as moral – regardless of which side of the debate they stood on – felt less of a sense of choice when faced with the decisions. “In contrast, people who made a decision that was not imbued with morality were more likely to view it as a choice,” Smith said.

The researchers saw this weaker sense of choice play out in the participants’ attention patterns. When deciding among morally relevant options displayed on a computer screen, they devoted less visual attention to the option that they ultimately rejected, suggesting they were less likely to even consider immoral options as viable alternatives in their decision-making, the study said.

Moreover, participants who felt they had fewer options tended to choose more variety later on. After deciding among moral options, the participants tended to opt for more variety when given the choice of seven different types of chocolate in an unrelated task. “It’s a very subtle effect but it’s indicative that people are trying to reassert their sense of autonomy,” Smith said.

Understanding the way that people make morally relevant decisions has implications for business ethics, he said: “If we can figure out what influences people to behave ethically or not, we can better empower managers with tools that might help them reduce unethical behavior in the workplace.”


Wednesday, August 15, 2018

Thinking about Karma and God reduces believers’ selfishness in anonymous dictator games

Cindel White, John Kelly, Azim Shariff, & Ara Norenzayan
Preprint
Originally posted on June 23, 2018

Abstract

In a novel supernatural framing paradigm, three repeated-measures experiments (N = 2347) examined whether thinking about Karma and God increases generosity in anonymous dictator games. We found that (1) thinking about Karma increased generosity in karmic believers across religious affiliations, including Hindus, Buddhists, Christians, and non-religious Americans; (2) thinking about God also increased generosity among believers in God (but not among non-believers), replicating previous findings; and (3) thinking about both Karma and God shifted participants’ initially selfish offers towards fairness, but had no effect on already fair offers. Contrary to hypotheses, ratings of supernatural punitiveness did not predict greater generosity. These supernatural framing effects were obtained and replicated in high-powered, pre-registered experiments and remained robust to several methodological checks, including hypothesis guessing, game familiarity, demographic variables, and variation in data exclusion criteria.

Sunday, April 1, 2018

Sudden-Death Aversion: Avoiding Superior Options Because They Feel Riskier

Jesse Walker, Jane L. Risen, Thomas Gilovich, and Richard Thaler
Journal of Personality and Social Psychology, in press

Abstract

We present evidence of Sudden-Death Aversion (SDA) – the tendency to avoid “fast” strategies that provide a greater chance of success, but include the possibility of immediate defeat, in favor of “slow” strategies that reduce the possibility of losing quickly, but have lower odds of ultimate success. Using a combination of archival analyses and controlled experiments, we explore the psychology behind SDA. First, we provide evidence for SDA and its cost to decision makers by tabulating how often NFL teams send games into overtime by kicking an extra point rather than going for the 2-point conversion (Study 1) and how often NBA teams attempt potentially game-tying 2-point shots rather than potentially game-winning 3-pointers (Study 2). To confirm that SDA is not limited to sports, we demonstrate SDA in a military scenario (Study 3). We then explore two mechanisms that contribute to SDA: myopic loss aversion and concerns about “tempting fate.” Studies 4 and 5 show that SDA is due, in part, to myopic loss aversion, such that decision makers narrow the decision frame, paying attention to the prospect of immediate loss with the “fast” strategy, but not the downstream consequences of the “slow” strategy. Study 6 finds people are more pessimistic about a risky strategy that needn’t be pursued (opting for sudden death) than the same strategy that must be pursued. We end by discussing how these twin mechanisms lead to differential expectations of blame from the self and others, and how SDA influences decisions in several different walks of life.
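The football comparison behind Studies 1 and 2 reduces to a two-line expected-value calculation. A minimal sketch, assuming round illustrative probabilities (the rates below are assumptions for exposition, not figures from Walker et al.): a team trailing by two scores a touchdown and must choose between going for two to win outright or kicking the extra point and hoping to win in overtime.

```python
# Illustrative assumed rates, not figures from the paper:
P_TWO_POINT = 0.50    # assumed 2-point conversion success rate
P_EXTRA_POINT = 0.96  # assumed extra-point success rate
P_OT_WIN = 0.50       # assumed chance of winning in overtime

def win_prob_fast():
    """'Fast' strategy: go for two -- win or lose immediately."""
    return P_TWO_POINT

def win_prob_slow():
    """'Slow' strategy: kick the extra point, then win in overtime."""
    return P_EXTRA_POINT * P_OT_WIN

print(f"fast (go for 2):  {win_prob_fast():.3f}")
print(f"slow (kick + OT): {win_prob_slow():.3f}")
```

Under these assumptions the fast strategy wins 50% of the time and the slow one about 48%, yet SDA predicts that many decision makers will still prefer the slow strategy because only the fast one carries the possibility of immediate defeat.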


Saturday, June 25, 2016

The Triggers We Don't Notice

By Lisa Ordóñez & David Welsh
Notre Dame Center for Ethical Leadership
Posted in 2016

Many companies’ ethics trainings focus on building frameworks and decision trees as tools for their employees to use in making ethically sound decisions. The assumption is that when these employees are confronted with morally ambiguous situations, the tools will allow them to reason their way through them and figure out the best option.

Based on innovative behavioral research, we now know that it’s not that simple. There are a lot of factors that go into determining whether a decision is ethical or unethical. People need to have the energy and resources to resist the temptation to be immoral. They need to feel like the choice matters and that their behavior will actually make a difference. Perhaps most importantly, people need to frame the situation as an ethical question. It’s not just about the tools to make the right decision when you know it’s a hard one. Employees need to flip on their “ethical switch” if they are going to recognize that there is an ethical question at hand.


Thursday, May 21, 2015

Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection

By Eric Schwitzgebel and Fiery Cushman
In press

Abstract

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
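The “Asian disease” scenario is a useful anchor for what counts as a philosophically irrelevant factor here: its gain and loss frames describe numerically identical outcomes. A short sketch with the standard numbers from Tversky and Kahneman’s original 1981 problem:

```python
# The two frames of the "Asian disease" problem, written out to show
# that the paired options are numerically identical.
TOTAL = 600  # people at risk

# Gain frame: "200 people will be saved" vs. "1/3 probability that
# 600 are saved, 2/3 probability that no one is saved".
gain_sure = 200
gain_gamble = (1 / 3) * 600 + (2 / 3) * 0  # expected lives saved

# Loss frame: "400 people will die" vs. "1/3 probability that nobody
# dies, 2/3 probability that 600 people die".  Convert to lives saved:
loss_sure = TOTAL - 400
loss_gamble = (1 / 3) * (TOTAL - 0) + (2 / 3) * (TOTAL - 600)

assert gain_sure == loss_sure      # same certain outcome: 200 saved
assert gain_gamble == loss_gamble  # same expected outcome as well
```

Because the options are equivalent, any shift in preference between the two frames is, on the standard view, driven purely by description rather than substance — which is why framing susceptibility among expert philosophers is taken as evidence against the expertise defence.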
