Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Decision Making.

Sunday, December 24, 2023

Dual character concepts and the normative dimension of conceptual representation

Knobe, J., Prasada, S., & Newman, G. E. (2013).
Cognition, 127(2), 242–257. 


Five experiments provide evidence for a class of ‘dual character concepts.’ Dual character concepts characterize their members in terms of both (a) a set of concrete features and (b) the abstract values that these features serve to realize. As such, these concepts provide two bases for evaluating category members and two different criteria for category membership. Experiment 1 provides support for the notion that dual character concepts have two bases for evaluation. Experiments 2–4 explore the claim that dual character concepts have two different criteria for category membership. The results show that when an object possesses the appropriate concrete features, but does not fulfill the appropriate abstract value, it is judged to be a category member in one sense but not in another. Finally, Experiment 5 uses the theory developed here to construct artificial dual character concepts and examines whether participants react to these artificial concepts in the same way as naturally occurring dual character concepts. The present studies serve to define the nature of dual character concepts and distinguish them from other types of concepts (e.g., natural kind concepts), which share some, but not all of the properties of dual character concepts. More broadly, these phenomena suggest a normative dimension in everyday conceptual representation.

Here is my summary of the research, which is not without its critics:

This research challenged traditional understandings of categorization and evaluation. Dual character concepts, exemplified by terms like "artist," "scientist," and "teacher," possess two distinct dimensions:

Concrete Features: These are the observable, physical attributes or characteristics that members of the category share.

Abstract Values: These are the underlying goals, ideals, or purposes that the concrete features serve to realize.

Unlike other types of concepts, dual character concepts allow for two distinct bases for evaluation:

Good/Bad Evaluation: This assessment is based on how well the concrete features of an entity align with the expected characteristics of a category member.

True/False Evaluation: This judgment is based on whether the abstract values embedded in the concept are fulfilled by the concrete features of an entity.

This dual-pronged evaluation process leads to intriguing consequences for categorization and judgment. An object may be deemed a "good" category member based on its concrete features, yet not a "true" member if it fails to uphold the abstract values associated with the concept.

The researchers provide compelling evidence for the existence of dual character concepts through a series of experiments. These studies demonstrate that people have two distinct ways of characterizing category members and that dual character concepts influence judgments of category membership.

The concept of dual character concepts highlights the normative dimension of conceptual representation, suggesting that our concepts not only reflect the world but also embody our values and beliefs. This normative dimension shapes how we categorize objects, evaluate entities, and make decisions in our daily lives.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S. (2023).
Cognition, 239, 105570.


Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.

My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J. (2023, July 8).
PsyArXiv


To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.

The authors argue that the computational structure of the self is a key component of human intelligence, and they propose a framework for reverse-engineering the self, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a POMDP (partially observable Markov decision process), a mathematical model of sequential decision making in which the decision-maker lacks complete information about the state of the environment. They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.
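The POMDP framing mentioned above can be made concrete with a toy sketch. This is my illustration of a generic POMDP belief update, not the authors' model; the states, probabilities, and observation sequence are invented for the example:

```python
import numpy as np

# Minimal POMDP belief update (Bayes filter) for a 2-state world.
# States: 0 = "safe", 1 = "risky"; the agent never observes the state directly.
T = np.array([[0.9, 0.1],   # transition probabilities P(s' | s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # observation likelihoods P(o | s'), rows indexed by s'
              [0.3, 0.7]])

def update_belief(belief, observation):
    """Predict through the transition model, then weight by the observation."""
    predicted = belief @ T                    # P(s') = sum_s P(s) * P(s' | s)
    weighted = predicted * O[:, observation]  # multiply by P(o | s')
    return weighted / weighted.sum()          # renormalize to a distribution

belief = np.array([0.5, 0.5])   # start maximally uncertain
for obs in [1, 1, 0]:           # a short, invented observation sequence
    belief = update_belief(belief, obs)
print(belief)                   # posterior over the two hidden states
```

Maintaining and updating such a belief state, rather than assuming the world is fully observed, is what distinguishes a POMDP agent from an ordinary MDP planner.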

Thursday, May 11, 2023

Reputational Rationality Theory

Dorison, C. (2023, March 29). 


Traditionally, research on human judgment and decision making draws on cognitive psychology to identify deviations from normative standards of how decisions ought to be made. These deviations are commonly considered irrational errors and biases. However, this approach has serious limitations. Critically, even though most decisions are embedded within complex social networks of observers, this approach typically ignores how decisions are perceived by valued audiences. To address this limitation, this article proposes reputational rationality theory: a theoretical model of how observers evaluate targets who do (vs. do not) strictly adhere to normative standards of judgment and choice. Drawing on the dual pathways of homophily and social signaling, the theory generates testable predictions regarding when and why observers positively evaluate error-prone decision makers, termed the benefit of bias hypothesis. Given that individuals hold deep impression management goals, reputational rationality theory challenges the unqualified classification of response tendencies that deviate from normative standards as irrational. That is, apparent errors and biases can, under certain conditions, be reputationally rational. The reputational rewards associated with cognitive biases may in turn contribute to their persistence. Acknowledging the (sometimes beneficial) reputational consequences of cognitive biases can address long-standing puzzles in judgment and decision making as well as generate fruitful avenues for future research.


Reputational rationality theory inverts the traditional focus: it is primarily concerned with the observer rather than the target. It thus yields novel predictions regarding how observers evaluate targets (rather than how targets shift behavior due to pressure from observers). Reputational rationality theory is inherently a social cognition model, concerned with, for example, how the public evaluates the politician or how the CEO evaluates the employee. The theory suggests that several influential errors and biases (such as not taking the value-maximizing risk or not investing in the worthwhile venture) can serve functional goals once reputational consequences are considered.

As summarized above, prior cognitive and social approaches to judgment and decision making have traditionally omitted empirical investigation of how judgments and decisions are perceived by valued audiences—such as the public or coworkers in the examples above. How concerning is this omission? On the one hand, this omission may be tolerable—if not ignorable—if reputational incentives align with goals that are traditionally considered in this work (e.g., accuracy, optimization, adherence to logic and statistics). Simply put, researchers could safely ignore reputational consequences if such consequences already reinforce conventional wisdom and standard recommendations for what it means to make a “good” decision. If observers penalize targets who recklessly display overconfidence or who flippantly switch their risk preferences based on decision frames, then examining these reputational consequences becomes less necessary, and the omission thus less severe. On the other hand, this omission may be relatively more severe if reputational incentives regularly conflict with traditional measures or undermine standard recommendations.



The challenges currently facing society are daunting. The planet is heating at an alarming pace. A growing number of countries hold nuclear weapons capable of killing millions in mere minutes. Democratic institutions in many countries, including the United States, appear weaker than previously thought. Confronting such challenges requires global leaders and citizens alike to make sound judgments and decisions within complex environments: to effectively navigate risk under conditions of widespread uncertainty; to pivot from failing paths to new opportunities; to properly calibrate their confidence among multiple possible futures. But is human rationality up to the task?

Building on traditional cognitive and social approaches to human judgment and decision making, reputational rationality theory casts doubt on traditional normative classifications of errors and biases based on individual-level cognition, while simultaneously generating testable predictions for future research that takes a broader social/institutional perspective. By examining both the reputational causes and consequences of human judgment and decision making, researchers can gain increased understanding not only of how judgments and decisions are made, but also of how behavior can be changed—for good.

Tuesday, February 28, 2023

Transformative experience and the right to revelatory autonomy

Farbod Akhlaghi
Originally Published: 31 December 2022


Sometimes it is not us but those to whom we stand in special relations that face transformative choices: our friends, family or beloved. A focus upon first-personal rational choice and agency has left crucial ethical questions regarding what we owe to those who face transformative choices largely unexplored. In this paper I ask: under what conditions, if any, is it morally permissible to interfere to try to prevent another from making a transformative choice? Some seemingly plausible answers to this question fail precisely because they concern transformative experiences. I argue that we have a distinctive moral right to revelatory autonomy grounded in the value of autonomous self-making. If this right is outweighed then, I argue, interfering to prevent another making a transformative choice is permissible. This conditional answer lays the groundwork for a promising ethics of transformative experience.


Ethical questions regarding transformative experiences are morally urgent. A complete answer to our question requires ascertaining precisely how strong the right to revelatory autonomy is and what competing considerations can outweigh it. These are questions for another time, where the moral significance of revelation and self-making, the competing weight of moral and non-moral considerations, and the sense in which some transformative choices are more significant to one’s identity and self-making than others must be further explored.

But to identify the right to revelatory autonomy and duty of revelatory non-interference is significant progress. For it provides a framework to address the ethics of transformative experience that avoids complications arising from the epistemic peculiarities of transformative experiences. It also allows us to explain cases where we are permitted to interfere in another’s transformative choice and why interference in some choices is harder to justify than others, whilst recognizing plausible grounds for the right to revelatory autonomy itself in the moral value of autonomous self-making. This framework, moreover, opens novel avenues of engagement with wider ethical issues regarding transformative experience, for example concerning social justice or surrogate transformative choice-making. It is, at the very least, a view worthy of further consideration.

This reasoning applies to psychologists providing psychotherapy. Unless significant danger is present, psychologists need to avoid intrusive advocacy, meaning pulling decision-making autonomy away from the patient. Soft paternalism can occur in psychotherapy when the goal is to avoid significant harm.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).


Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.


In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off, and indicate that fast-paced news consumption on social media is likely leading to people misjudging the veracity of not only false news, as seen in the study by Bago and colleagues, but also true news. Like in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect in the context of news items that have been subject to interventions such as debunking.

Our results for the response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., avoiding accepting false news as true more than rejecting true news as false) and/or an overall expectation that false news are more prevalent than true news in our experiment. Note that the ratio of true versus false news we used (1:1) is different from the real world, which typically is thought to contain a much smaller fraction of false news. A more ecologically valid experiment with a more representative sample could yield a different response bias. It will, thus, be important for future studies to assess whether participants hold such a bias in the real world, are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.
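For readers unfamiliar with the signal detection quantities used throughout this discussion, discrimination ability (d′) and response bias (the criterion, c) can be computed from a rater's hit rate and false-alarm rate. The following is a generic illustration with invented rates, not the authors' analysis code:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Compute d' (discrimination ability) and c (response bias) from
    the hit rate (true news correctly rated "true") and the false-alarm
    rate (false news incorrectly rated "true")."""
    z = NormalDist().inv_cdf                          # inverse normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)       # higher = better discrimination
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Example: a cautious rater who calls 60% of true news "true"
# but only 20% of false news "true".
d, c = sdt_measures(0.60, 0.20)
```

Under this common sign convention, a positive criterion marks a conservative rater, which corresponds to the "negative response bias" (a general tendency to answer "false") reported in the study.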

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, Vol. 66, No. 1, Pages 39–41

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoid reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.


The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even with straightforward sanctions? Such an exercise will reveal whether the ethicist feels free to openly and honestly express concerns about the technology with which they work. The exercise could be important, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest the role of government is key to balancing the power tech companies hold through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But so are technology and unbridled innovation, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and it is in fact widely accepted.

Saturday, October 29, 2022

Sleep loss leads to the withdrawal of human helping across individuals, groups, and large-scale societies

Ben Simon E, Vallat R, Rossi A, Walker MP (2022) 
PLoS Biol 20(8): e3001733.


Humans help each other. This fundamental feature of homo sapiens has been one of the most powerful forces sculpting the advent of modern civilizations. But what determines whether humans choose to help one another? Across 3 replicating studies, here, we demonstrate that sleep loss represents one previously unrecognized factor dictating whether humans choose to help each other, observed at 3 different scales (within individuals, across individuals, and across societies). First, at an individual level, 1 night of sleep loss triggers the withdrawal of help from one individual to another. Moreover, fMRI findings revealed that the withdrawal of human helping is associated with deactivation of key nodes within the social cognition brain network that facilitates prosociality. Second, at a group level, ecological night-to-night reductions in sleep across several nights predict corresponding next-day reductions in the choice to help others during day-to-day interactions. Third, at a large-scale national level, we demonstrate that 1 h of lost sleep opportunity, inflicted by the transition to Daylight Saving Time, reduces real-world altruistic helping through the act of donation giving, established through the analysis of over 3 million charitable donations. Therefore, inadequate sleep represents a significant influential force determining whether humans choose to help one another, observable across micro- and macroscopic levels of civilized interaction. The implications of this effect may be non-trivial when considering the essentiality of human helping in the maintenance of cooperative, civil society, combined with the reported decline in sufficient sleep in many first-world nations.

From the Discussion section

Taken together, findings across all 3 studies establish insufficient sleep (both quantity and quality) as a degrading force influencing whether or not humans wish to help each other, and do indeed, choose to help each other (through real-world altruistic acts), observable at 3 different societal scales: within individuals, across individuals, and at a nationwide level.

Study 1 established not only the causal impact of sleep loss on the basic desire to help another human being, but further characterised the central underlying brain mechanism associated with this altered phenotype of diminished helping. Specifically, sleep loss significantly and selectively reduced activity throughout key nodes of the social cognition brain network (see Fig 1B) normally associated with prosociality, including perspective taking of others’ mental state, their emotions, and their personal needs. Therefore, impairment of this neural system caused by a lack of sleep represents one novel pathway explaining the associated withdrawal of helping desire and the decisional act to offer such help.

Saturday, August 6, 2022

A General Model of Cognitive Bias in Human Judgment and Systematic Review Specific to Forensic Mental Health

Neal, T. M. S., Lienert, P., Denne, E., & Singh, J. P. (2022).
Law and Human Behavior, 46(2), 99–120.


Cognitive biases can impact experts’ judgments and decisions. We offer a broad descriptive model of how bias affects human judgment. Although studies have explored the role of cognitive biases and debiasing techniques in forensic mental health, we conducted the first systematic review to identify, evaluate, and summarize the findings. Hypotheses: Given the exploratory nature of this review, we did not test formal hypotheses. General research questions included the proportion of studies focusing on cognitive biases and/or debiasing, the research methods applied, the cognitive biases and debiasing strategies empirically studied in the forensic context, their effects on forensic mental health decisions, and effect sizes.

Public Significance Statement

Evidence of bias in forensic mental health emerged in ways consistent with what we know about human judgment broadly. We know less about how to debias judgments—an important frontier for future research. Better understanding how bias works and developing effective debiasing strategies tailored to the forensic mental health context hold promise for improving quality. Until then, we can use what we know now to limit bias in our work.

From the Discussion section

Is Bias a Problem for the Field of Forensic Mental Health?

Our interpretation of the judgment and decision-making literature more broadly, as well as the results from this systematic review conducted in this specific context, is that bias is an issue that deserves attention in forensic mental health—with some nuance. The overall assertion that bias is worthy of concern in forensic mental health rests both on the broader and the more specific literatures we reference here.

The broader literature is robust, revealing that well-studied biases affect human judgment and social cognition (e.g., Gilovich et al., 2002; Kahneman, 2011; see Figure 1). Although the field is robust in terms of individual studies demonstrating cognitive biases, decision science needs a credible, scientific organization of the various types of cognitive biases that have proliferated to better situate and organize the field. Even in the apparent absence of such an organizational structure, it is clear that biases influence consequential judgments not just for laypeople but for experts too, such as pilots (e.g., Walmsley & Gilbey, 2016), intelligence analysts (e.g., Reyna et al., 2014), doctors (e.g., Drew et al., 2013), and judges and lawyers (e.g., Englich et al., 2006; Girvan et al., 2015; Rachlinski et al., 2009). Given that forensic mental health experts are human, as are these other experts who demonstrate typical biases by virtue of being human, there is no reason to believe that forensic experts have automatic special protection against bias by virtue of their expertise.

Sunday, July 10, 2022

Situational factors shape moral judgements in the trolley dilemma in Eastern, Southern and Western countries in a culturally diverse sample

Bago, B., Kovacs, M., Protzko, J. et al. 
Nat Hum Behav (2022).


The study of moral judgements often centres on moral dilemmas in which options consistent with deontological perspectives (that is, emphasizing rules, individual rights and duties) are in conflict with options consistent with utilitarian judgements (that is, following the greater good based on consequences). Greene et al. (2009) showed that psychological and situational factors (for example, the intent of the agent or the presence of physical contact between the agent and the victim) can play an important role in moral dilemma judgements (for example, the trolley problem). Our knowledge is limited concerning both the universality of these effects outside the United States and the impact of culture on the situational and psychological factors affecting moral judgements. Thus, we empirically tested the universality of the effects of intent and personal force on moral dilemma judgements by replicating the experiments of Greene et al. in 45 countries from all inhabited continents. We found that personal force and its interaction with intention exert influence on moral judgements in the US and Western cultural clusters, replicating and expanding the original findings. Moreover, the personal force effect was present in all cultural clusters, suggesting it is culturally universal. The evidence for the cultural universality of the interaction effect was inconclusive in the Eastern and Southern cultural clusters (depending on exclusion criteria). We found no strong association between collectivism/individualism and moral dilemma judgements.

From the Discussion

In this research, we replicated the design of Greene et al. using a culturally diverse sample across 45 countries to test the universality of their results. Overall, our results support the proposition that the effect of personal force on moral judgements is likely culturally universal. This finding makes it plausible that the personal force effect is influenced by basic cognitive or emotional processes that are universal to humans and independent of culture. Our findings regarding the interaction between personal force and intention were more mixed. We found strong evidence for the interaction of personal force and intention among participants from Western countries regardless of familiarity and dilemma context (trolley or speedboat), fully replicating the results of Greene et al. However, the evidence was inconclusive among participants from Eastern countries in all cases. Additionally, this interaction result was mixed for participants from countries in the Southern cluster: we found sufficiently strong evidence only when people familiar with these dilemmas were included in the sample, and only for the trolley (not the speedboat) dilemma.

Our general observation is that the size of the interaction was smaller on the speedboat dilemmas in every cultural cluster. It is yet unclear whether this effect is caused by some deep-seated (and unknown) differences between the two dilemmas (for example, participants experiencing smaller emotional engagement in the speedboat dilemmas that changes response patterns) or by some unintended experimental confound (for example, an effect of the order of presentation of the dilemmas).

Sunday, March 27, 2022

Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing

Dorison, C. A., & Heller, B. H. (2022). 
Journal of Experimental Psychology: 
General. Advance online publication.


A large interdisciplinary body of research on human judgment and decision making documents systematic deviations between prescriptive decision models (i.e., how individuals should behave) and descriptive decision models (i.e., how individuals actually behave). One canonical example is the loss–gain framing effect on risk preferences: the robust tendency for risk preferences to shift depending on whether outcomes are described as losses or gains. Traditionally, researchers argue that decision makers should always be immune to loss–gain framing effects. We present three preregistered experiments (N = 1,954) that qualify this prescription. We predict and find that while third-party observers penalize decision makers who make risk-averse (vs. risk-seeking) choices when choice outcomes are framed as losses, this result reverses when outcomes are framed as gains. This reversal holds across five social perceptions, three decision contexts, two sample populations of United States adults, and with financial stakes. This pattern is driven by the fact that observers themselves fall victim to framing effects and socially derogate (and financially punish) decision makers who disagree. Given that individuals often care deeply about their reputation, our results challenge the long-standing prescription that they should always be immune to framing effects. The results extend understanding not only for decision making under risk, but also for a range of behavioral tendencies long considered irrational biases. Such understanding may ultimately reveal not only why such biases are so persistent but also novel interventions: our results suggest a necessary focus on social and organizational norms.

From the General Discussion

But what makes an optimal belief or choice? Here, we argue that an expanded focus on the goals decision makers themselves hold (i.e., reputation management) questions whether such deviations from rational-agent models should always be considered suboptimal. We test this broader theorizing in the context of loss-gain framing effects on risk preferences not because we think the psychological dynamics at play are unique to this context, but rather because such framing effects have been uniquely influential for both academic discourse and applied interventions in policy and organizations. In fact, the results hold preliminary implications not only for decision making under risk, but also for extending understanding of a range of other behavioral tendencies long considered irrational biases in the research literature on judgment and decision making (e.g., sunk cost bias; see Dorison, Umphres, & Lerner, 2021).

An important clarification of our claims merits note. We are not claiming that it is always rational to be biased just because others are. For example, it would be quite odd to claim that someone is rational for believing that eating sand provides enough nutrients to survive, simply because others may like them for holding this belief or because others in their immediate social circle hold this belief. In this admittedly bizarre case, it would still be clearly irrational to attempt to subsist on sand, even if there are reputational advantages to doing so—that is, the costs substantially outweigh the reputational benefits. In fact, the vast majority of framing effect studies in the lab do not have an explicit reputational/strategic component at all. 

Saturday, October 23, 2021

Decision fatigue: Why it’s so hard to make up your mind these days, and how to make it easier

Stacy Colino
The Washington Post
Originally posted 22 Sept 21

Here is an excerpt:

Decision fatigue is more than just a feeling; it stems in part from changes in brain function. Research using functional magnetic resonance imaging has shown that there’s a sweet spot for brain function when it comes to making choices: When people were asked to choose from sets of six, 12 or 24 items, activity was highest in the striatum and the anterior cingulate cortex — both of which coordinate various aspects of cognition, including decision-making and impulse control — when the people faced 12 choices, which was perceived as “the right amount.”

Decision fatigue may make it harder to exercise self-control when it comes to eating, drinking, exercising or shopping. “Depleted people become more passive, which becomes bad for their decision-making,” says Roy Baumeister, a professor of psychology at the University of Queensland in Australia and author of  “Willpower: Rediscovering the Greatest Human Strength.” “They can be more impulsive. They may feel emotions more strongly. And they’re more susceptible to bias and more likely to postpone decision-making.”

In laboratory studies, researchers asked people to choose from an array of consumer goods or college course options or to simply think about the same options without making choices. They found that the choice-makers later experienced reduced self-control, including less physical stamina, greater procrastination and lower performance on tasks involving math calculations; the choice-contemplators didn’t experience these depletions.

Having insufficient information about the choices at hand may influence people’s susceptibility to decision fatigue. Experiencing high levels of stress and general fatigue can, too, Bufka says. And if you believe that the choices you make say something about who you are as a person, that can ratchet up the pressure, increasing your chances of being vulnerable to decision fatigue.

The suggestions include:

1. Sleep well
2. Make some choices automatic
3. Enlist a choice advisor
4. Give expectations a reality check
5. Pace yourself
6. Pay attention to feelings

Friday, July 2, 2021

Retrieval-constrained valuation: Toward prediction of open-ended decisions

Zhang, Z., Wang, S., et al.
PNAS May 2021, 118 (20) e2022685118
DOI: 10.1073/pnas.2022685118


Real-world decisions are often open ended, with goals, choice options, or evaluation criteria conceived by decision-makers themselves. Critically, the quality of decisions may heavily rely on the generation of options, as failure to generate promising options limits, or even eliminates, the opportunity for choosing them. This core aspect of problem structuring, however, is largely absent from classical models of decision-making, thereby restricting their predictive scope. Here, we take a step toward addressing this issue by developing a neurally inspired cognitive model of a class of ill-structured decisions in which choice options must be self-generated. Specifically, using a model in which semantic memory retrieval is assumed to constrain the set of options available during valuation, we generate highly accurate out-of-sample predictions of choices across multiple categories of goods. Our model significantly and substantially outperforms models that only account for valuation or retrieval in isolation or those that make alternative mechanistic assumptions regarding their interaction. Furthermore, using neuroimaging, we confirm our core assumption regarding the engagement of, and interaction between, semantic memory retrieval and valuation processes. Together, these results provide a neurally grounded and mechanistic account of decisions with self-generated options, representing a step toward unraveling cognitive mechanisms underlying adaptive decision-making in the real world.


Life is not a multiple-choice test: Many real-world decisions leave goals, choice options, or evaluation criteria to be determined by decision-makers themselves. However, a mechanistic understanding of how such problem structuring processes influence choice has largely eluded standard models of decision-making. By developing a neurally grounded cognitive model that integrates semantic knowledge retrieval and valuation processes, we offer a computational framework providing strikingly accurate out-of-sample predictions of choices with self-generated options. This framework generates psychological insights into the nature and force of memory retrieval’s substantial influence on choice behavior. Together, these findings represent a step toward predicting complex, ill-structured decisions in the real world, opening up new approaches that may broaden the scope of formal models of decision-making.
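The core idea, that retrieval constrains the option set before valuation ever operates on it, can be illustrated with a toy sketch. All cues, options, association weights, and values below are invented for illustration; this is not the authors' computational model, only a minimal rendering of the retrieval-then-valuation interaction they describe.

```python
import random
from collections import Counter

def choose(goal, memory, values, k=2, rng=None):
    """Toy retrieval-constrained valuation: retrieval samples a small
    consideration set from semantic memory (weighted by association
    strength with the goal cue), then valuation picks the highest-value
    retrieved option. Options that are never retrieved cannot be chosen,
    however valuable they are."""
    rng = rng or random.Random()
    options = list(memory[goal])
    weights = [memory[goal][o] for o in options]
    considered = set()
    while len(considered) < min(k, len(options)):
        considered.add(rng.choices(options, weights=weights)[0])
    return max(considered, key=lambda o: values[o])

# Illustrative numbers only: sushi has the highest value but the weakest
# association with the "snack" cue, so it is rarely retrieved, and
# therefore rarely chosen, despite being the best option on paper.
memory = {"snack": {"apple": 5.0, "chips": 3.0, "sushi": 0.2}}
values = {"apple": 0.6, "chips": 0.7, "sushi": 0.9}
rng = random.Random(0)
choices_made = [choose("snack", memory, values, rng=rng) for _ in range(1000)]
counts = Counter(choices_made)
```

The sketch reproduces the qualitative point from the abstract: failure to generate a promising option eliminates the opportunity to choose it, so choice frequencies track retrieval strength as well as value.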

Wednesday, May 26, 2021

Before You Answer, Consider the Opposite Possibility—How Productive Disagreements Lead to Better Outcomes

Ian Leslie
The Atlantic
Originally published 25 Apr 21

Here is an excerpt:

This raises the question of how a wise inner crowd can be cultivated. Psychologists have investigated various methods. One, following Stroop, is to harness the power of forgetting. Reassuringly for those of us who are prone to forgetting, people with poor working memories have been shown to have a wiser inner crowd; their guesses are more independent of one another, so they end up with a more diverse set of estimates and a more accurate average. The same effect has been achieved by spacing the guesses out in time.

More sophisticated methods harness the mind’s ability to inhabit different perspectives and look at a problem from more than one angle. People generate more diverse estimates when prompted to base their second or third guess on alternative assumptions; one effective technique is simply asking people to “consider the opposite” before giving a new answer. A fascinating recent study in this vein harnesses the power of disagreement itself. A pair of Dutch psychologists, Philippe Van de Calseyde and Emir Efendić, asked people a series of questions with numerical answers, such as the percentage of the world’s airports located in the U.S. Then they asked participants to think of someone in their life with whom they often disagreed, such as that uncle with whom they always argue about politics, and to imagine what that person would guess.

The respondents came up with second estimates that were strikingly different from their first estimate, producing a much more accurate inner crowd. The same didn’t apply when they were asked to imagine how someone they usually agree with would answer the question, which suggests that the secret is to incorporate the perspectives of people who think differently from us. That the respondents hadn’t discussed that particular question with their disagreeable uncle did not matter. Just the act of thinking about someone with whom they argued a lot was enough to jog them out of habitual assumptions.
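The statistical logic of the "inner crowd" can be sketched with a toy simulation. The error magnitudes and correlation values below are invented for illustration, not figures from the study: when a second guess shares most of the first guess's error (habitual perspective), averaging helps little; when it comes from a genuinely different perspective (low error correlation), the errors partly cancel.

```python
import random
import statistics

def crowd_error(n_people=5000, shared_sd=10.0, rho=0.9, seed=7):
    """Mean absolute error of the average of two guesses per person.
    rho controls how correlated the two guesses' errors are: a habitual
    second guess (high rho) adds little new information, while a second
    guess from a different perspective (low rho) averages out more error."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_people):
        e1 = rng.gauss(0, shared_sd)
        # The second guess shares a fraction rho of the first error.
        e2 = rho * e1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, shared_sd)
        errs.append(abs((e1 + e2) / 2))
    return statistics.mean(errs)

habitual = crowd_error(rho=0.9)     # "how would I answer again?"
disagreeing = crowd_error(rho=0.1)  # "how would my disagreeable uncle answer?"
```

Under these assumed parameters, the disagreeing-perspective average lands markedly closer to the truth than the habitual one, mirroring the study's finding that imagining a disagreeing other produces a more accurate inner crowd.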

Monday, January 11, 2021

'The robot made me do it': Robots encourage risk-taking behaviour in people

Press Release
University of Southampton
Originally released 11 Dec 20

New research has shown robots can encourage people to take greater risks in a simulated gambling scenario than they would if there was nothing to influence their behaviours. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.

Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton who led the study explained, "We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact."

This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player's "temporary money bank". The balloons can explode randomly, meaning the player loses any money they have won for that balloon and they have the option to "cash-in" before this happens and move on to the next balloon.
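The BART mechanics described here are straightforward to sketch in code. The explosion rule below, a burst point drawn uniformly at random for each balloon, is a common parameterization of the task assumed for illustration; it is not necessarily the exact setup used in this study.

```python
import random

def play_bart_balloon(pumps_intended, max_pumps=128, rng=None):
    """Simulate one BART balloon: each pump adds 1 penny to a temporary
    bank, but the balloon bursts at a randomly predetermined pump count
    (here uniform over 1..max_pumps, an assumed parameterization).
    Returns the pennies banked for this balloon."""
    rng = rng or random.Random()
    burst_point = rng.randint(1, max_pumps)
    pumps = 0
    while pumps < pumps_intended:
        pumps += 1
        if pumps >= burst_point:
            return 0  # balloon exploded; the temporary bank is lost
    return pumps      # player cashed in before the explosion

def average_earnings(pumps_intended, trials=10_000, seed=42):
    """Mean pennies per balloon for a fixed pumping strategy."""
    rng = random.Random(seed)
    total = sum(play_bart_balloon(pumps_intended, rng=rng)
                for _ in range(trials))
    return total / trials

cautious = average_earnings(16)  # pumps rarely, rarely bursts
risky = average_earnings(64)     # pumps more, bursts half the time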

One-third of the participants took the test in a room on their own (the control group), one third took the test alongside a robot that provided only the instructions and was silent the rest of the time, and the final third, the experimental group, took the test with the robot providing the instructions as well as speaking encouraging statements such as "why did you stop pumping?"

The results showed that the group who were encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.

Sunday, December 13, 2020

Polarization and extremism emerge from rational choice

Kvam, P. D., & Baldwin, M. 
(2020, October 21).


Polarization is often thought to be the product of biased information search, motivated reasoning, or other psychological biases. However, polarization and extremism can still occur in the absence of any bias or irrational thinking. In this paper, we show that polarization occurs among groups of decision makers who are implementing rational choice strategies that maximize decision efficiency. This occurs because extreme information enables decision makers to make up their minds and stop considering new information, whereas moderate information is unlikely to trigger a decision. Furthermore, groups of decision makers will generate extremists -- individuals who hold strong views despite being uninformed and impulsive. In re-analyses of seven previous empirical studies on both perceptual and preferential choice, we show that both polarization and extremism manifest across a wide variety of choice paradigms. We conclude by offering theoretically-motivated interventions that could reduce polarization and extremism by altering the incentives people have when gathering information.


In a decision scenario that incentivizes a trade-off between time and decision quality, a population of rational decision makers will become polarized. In this paper, we have shown this through simulations and a mathematical proof (supplementary materials), and demonstrated it empirically in seven studies. This leads us to an unfortunate but unavoidable conclusion: decision making is a bias-inducing process by which participants gather representative information from their environment and, through the decision rules they implement, distort it toward the extremes. Such a process also generates extremists, who hold extreme views and carry undue influence over cultural discourse (Navarro et al., 2018) despite being relatively uninformed and impulsive (low thresholds; Kim & Lee, 2011). We have suggested several avenues for interventions, foremost among them providing incentives favoring estimation or judgments as opposed to incentives for timely decision making. Our hope is that future work testing and implementing these interventions will reduce the prevalence of polarization and extremism across social domains currently occupied by decision makers.
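The mechanism the authors describe, an efficient stopping rule that commits once evidence looks sufficiently extreme, can be illustrated with a toy sequential-sampling simulation. All parameter values below are invented; this is a sketch of the general idea, not the paper's model.

```python
import random

def decide(true_mean=0.1, noise=1.0, threshold=3.0, max_samples=200, rng=None):
    """Toy sequential-sampling decision maker: accumulate noisy evidence
    until the running total crosses +threshold or -threshold, then stop
    and commit. Returns the evidence total at the moment of commitment."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(max_samples):
        total += rng.gauss(true_mean, noise)
        if abs(total) >= threshold:
            break  # enough evidence to decide; stop sampling
    return total

def population_beliefs(n=2000, seed=1, **kwargs):
    """Simulate a population of identical rational decision makers."""
    rng = random.Random(seed)
    return [decide(rng=rng, **kwargs) for _ in range(n)]

beliefs = population_beliefs()
# Each evidence sample is drawn from a moderate distribution (mean 0.1),
# yet nearly every decision maker ends up committed near +3 or -3:
# polarization from an efficient stopping rule, not from biased search.
polarized_share = sum(abs(b) >= 3.0 for b in beliefs) / len(beliefs)
positive_share = sum(b > 0 for b in beliefs) / len(beliefs)
```

Lowering `threshold` in this sketch produces agents who commit quickly on little evidence, echoing the paper's characterization of extremists as relatively uninformed and impulsive (low thresholds).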