Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Judgment. Show all posts

Saturday, February 3, 2024

How to Navigate the Pitfalls of AI Hype in Health Care

Suran M, Hswen Y.
JAMA. Published online January 3, 2024.

What is AI snake oil, and how might it hinder progress within the medical field? What are the inherent risks in AI-driven automation for patient care, and how can we ensure the protection of sensitive patient information while maximizing its benefits?

When it comes to using AI in medicine, progress is important—but so is proceeding with caution, says Arvind Narayanan, PhD, a professor of computer science at Princeton University, where he directs the Center for Information Technology Policy.

Here is my summary:
  • AI has the potential to revolutionize healthcare, but it is important to be aware of the hype and potential pitfalls.
  • One of the biggest concerns is bias. AI algorithms can be biased by the data they are trained on, which can lead to unfair or inaccurate results. For example, an AI algorithm trained on data from mostly white patients may be less accurate at diagnosing diseases in Black patients.
  • Another concern is privacy. AI algorithms require large amounts of data to work, and this data can be very sensitive. It is important to make sure that patient data is protected and that patients have control over how their data is used.
  • It is also important to remember that AI is not a magic bullet. AI can be a valuable tool, but it is not a replacement for human judgment. Doctors and other healthcare professionals need to be able to critically evaluate the output of AI algorithms and make sure that it is being used in a safe and ethical way.
Overall, the interview is a cautionary tale about the potential dangers of AI in healthcare. It is important to be aware of the risks and to take steps to mitigate them. But it is also important to remember that AI has the potential to do a lot of good in healthcare. If we develop and use AI responsibly, it can help us to improve the quality of care for everyone.

Here are some additional points that were made in the interview:
  • AI can be used to help with a variety of tasks in healthcare, such as diagnosing diseases, developing new treatments, and managing chronic conditions.
  • There are a number of different types of AI, each with its own strengths and weaknesses.
  • It is important to choose the right type of AI for the task at hand.
  • AI should always be used in conjunction with human judgment.

Monday, October 30, 2023

The Mental Health Crisis Among Doctors Is a Problem for Patients

Keren Landman
Originally posted 25 OCT 23

Here is an excerpt:

What’s causing such high levels of mental distress among doctors?

Physicians have high rates of mental distress — and they’re only getting higher. One 2023 survey found six out of 10 doctors often had feelings of burnout, compared to four out of 10 pre-pandemic. In a separate 2023 study, nearly a quarter of doctors said they were depressed.

Physicians die by suicide at rates higher than the general population, with women’s risk twice as high as men’s. In a 2022 survey, one in 10 doctors said they’d thought about or attempted suicide.

Not all doctors are at equal risk: Primary care providers — like emergency medicine, internal medicine, and pediatrics practitioners — are most likely to say they’re burned out, and female physicians experience burnout at higher rates than male physicians.

(It’s worth noting that other health care professionals — perhaps most prominently nurses — also face high levels of mental distress. But because nurses are more frequently unionized than doctors and because their professional culture isn’t the same as doctor culture, the causes and solutions are also somewhat different.)

Here is my summary:

The article discusses the mental health crisis among doctors and its implications for patients. It notes that doctors die by suicide at rates higher than the general population and that they also experience high rates of burnout and depression.

The mental health crisis among doctors is a problem for patients because it can lead to impaired judgment, medical errors, and reduced quality of care. Additionally, the stigma associated with mental illness can prevent doctors from seeking the help they need, which can further exacerbate the problem.

The article concludes by calling for more attention to the mental health of doctors and for more resources to be made available to help them.

I treat a number of physicians in my practice.

Thursday, July 6, 2023

An empirical perspective on moral expertise: Evidence from a global study of philosophers

Niv, Y., & Sulitzeanu‐Kenan, R. (2022).
Bioethics, 36(9), 926–935.


Considerable attention in bioethics has been devoted to moral expertise and its implications for handling applied moral problems. The existence and nature of moral expertise has been a contested topic, and particularly, whether philosophers are moral experts. In this study, we put the question of philosophers’ moral expertise in a wider context, utilizing a novel and global study among 4,087 philosophers from 96 countries. We find that despite the skepticism in recent literature, the vast majority of philosophers do believe in moral expertise and in the contribution of philosophical training and experience to its acquisition. Yet, they still differ on what philosophers’ moral expertise consists of. While they widely accept that philosophers possess superior analytic abilities regarding moral matters, they diverge on whether they also possess improved ability to judge moral problems. Nonetheless, most philosophers in our sample believe that philosophers possess an improved ability to both analyze and judge moral problems and that they commonly see these two capacities as going hand in hand. We also point at significant associations between personal and professional attributes and philosophers’ beliefs, such as age, working in the field of moral philosophy, public involvement, and association with the analytic tradition. We discuss the implications of these findings for the debate about moral expertise.

From the Discussion section

First, the distribution of philosophers’ beliefs regarding moral expertise highlights that despite the recent skepticism regarding philosophers’ moral expertise, as expressed in the bioethical literature, the vast majority of philosophers do believe in moral expertise and in the contribution of philosophical training and experience to its acquisition. The view, which holds that philosophers are not moral experts, that is, lack an advantage in both moral analysis and judgment capacities, is held by a relatively small minority (estimated at 10.7%). Yet, the findings suggest that philosophers still differ regarding what their moral expertise consists of and highlight that the crux of the debate is not whether philosophers are better moral analyzers, as a near consensus of 88.33% exists that they are. Rather, opinions diverge over whether philosophers are also better moral judgers. We estimated that 38.88% of respondents believe that philosophers are only narrow moral experts while 49.45% of them believe that they are broad moral experts.

These findings can primarily be of great sociological interest. They map the views of a global sample of philosophers regarding the ancient question of the merit of philosophy, reflect what philosophers think their profession enables them to do, and consequently, what they might contribute to society. As our findings indicate, for its practitioners, philosophy is not merely an abstract reflection, but also an endeavor that facilitates moral capabilities that can be of use to handle the moral problems we confront in our daily lives.

Furthermore, we may carefully consider the possibility that the distribution of philosophers’ beliefs can also play an evidential role in the dispute about moral expertise. On the one hand, philosophers, more than others, may be best suited to accurately evaluate the merits and limitations of their capabilities. They have gained extensive experience in reflecting on philosophical matters, thus they might better understand what philosophical inquiry requires and how well they have successfully handled such tasks in the past. Their beliefs might express collective wisdom that indicates what the right answers are. As an illustration, the fact that many physicians, with years of experience in medicine, similarly trust their ability to effectively diagnose and treat illness gives us good reason to believe that they are. We have good reasons to believe that physicians will better know their merits and limitations. Therefore, the finding that the majority of philosophers believe that their training and experience grant them better ability to both analyze and judge moral problems offers some evidence in favor of this view.

Thursday, May 11, 2023

Reputational Rationality Theory

Dorison, C. (2023, March 29). 


Traditionally, research on human judgment and decision making draws on cognitive psychology to identify deviations from normative standards of how decisions ought to be made. These deviations are commonly considered irrational errors and biases. However, this approach has serious limitations. Critically, even though most decisions are embedded within complex social networks of observers, this approach typically ignores how decisions are perceived by valued audiences. To address this limitation, this article proposes reputational rationality theory: a theoretical model of how observers evaluate targets who do (vs. do not) strictly adhere to normative standards of judgment and choice. Drawing on the dual pathways of homophily and social signaling, the theory generates testable predictions regarding when and why observers positively evaluate error-prone decision makers, termed the benefit of bias hypothesis. Given that individuals hold deep impression management goals, reputational rationality theory challenges the unqualified classification of response tendencies that deviate from normative standards as irrational. That is, apparent errors and biases can, under certain conditions, be reputationally rational. The reputational rewards associated with cognitive biases may in turn contribute to their persistence. Acknowledging the (sometimes beneficial) reputational consequences of cognitive biases can address long-standing puzzles in judgment and decision making as well as generate fruitful avenues for future research.


Reputational rationality theory inverts this relationship. It is primarily concerned with the observer rather than the target, and thus yields novel predictions regarding how observers evaluate targets (rather than how targets shift behavior due to pressures from observers). Reputational rationality theory is inherently a social cognition model, concerned with—for example—how the public evaluates the politician or how the CEO evaluates the employee. The theory suggests that several influential errors and biases—such as not taking the value-maximizing risk or not investing in the worthwhile venture—can serve functional goals once reputational consequences are considered.

As summarized above, prior cognitive and social approaches to judgment and decision making have traditionally omitted empirical investigation of how judgments and decisions are perceived by valued audiences—such as the public or coworkers in the examples above. How concerning is this omission? On the one hand, this omission may be tolerable—if not ignorable—if reputational incentives align with goals that are traditionally considered in this work (e.g., accuracy, optimization, adherence to logic and statistics). Simply put, researchers could safely ignore reputational consequences if such consequences already reinforce conventional wisdom and standard recommendations for what it means to make a “good” decision. If observers penalize targets who recklessly display overconfidence or who flippantly switch their risk preferences based on decision frames, then examining these reputational consequences becomes less necessary, and the omission thus less severe. On the other hand, this omission may be relatively more severe if reputational incentives regularly conflict with traditional measures or undermine standard recommendations.



The challenges currently facing society are daunting. The planet is heating at an alarming pace. A growing number of countries hold nuclear weapons capable of killing millions in mere minutes. Democratic institutions in many countries, including the United States, appear weaker than previously thought. Confronting such challenges requires global leaders and citizens alike to make sound judgments and decisions within complex environments: to effectively navigate risk under conditions of widespread uncertainty; to pivot from failing paths to new opportunities; to properly calibrate their confidence among multiple possible futures. But is human rationality up to the task?

Building on traditional cognitive and social approaches to human judgment and decision making, reputational rationality theory casts doubt on traditional normative classifications of errors and biases based on individual-level cognition, while simultaneously generating testable predictions for future research taking a broader social/institutional perspective. By examining both the reputational causes and consequences of human judgment and decision making, researchers can gain insight not only into how judgments and decisions are made, but also into how behavior can be changed—for good.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).


Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.


In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.
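The signal detection decomposition the authors use can be illustrated with a short numerical sketch. Assuming the standard equal-variance model, a "hit" is a true headline judged true and a "false alarm" is a false headline judged true; discrimination ability is d′ = z(HR) − z(FAR) and response bias is c = −(z(HR) + z(FAR))/2, where positive c indicates a conservative tendency to answer "false." The rates below are hypothetical, chosen only to mimic the reported pattern, not taken from the study:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection measures.

    A 'hit' is a true headline judged true; a 'false alarm' is a false
    headline judged true. Positive c reflects a conservative bias toward
    answering 'false' (the paper's negative response bias).
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)       # discrimination ability
    c = -(z(hit_rate) + z(fa_rate)) / 2      # response bias
    return d_prime, c

# Hypothetical rates mimicking the reported pattern: time pressure
# lowers d' while leaving the conservative bias roughly unchanged.
no_pressure = sdt_measures(hit_rate=0.60, fa_rate=0.25)
pressure = sdt_measures(hit_rate=0.52, fa_rate=0.33)
```

On these illustrative numbers, d′ falls under time pressure while c stays positive and essentially constant, which is the dissociation the study reports.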

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off, and indicate that fast-paced news consumption on social media is likely leading to people misjudging the veracity of not only false news, as seen in the study by Bago and colleagues, but also true news. Like in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect in the context of news items that have been subject to interventions such as debunking.

Our results for the response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., avoiding accepting false news as true more than rejecting true news as false) and/or an overall expectation that false news are more prevalent than true news in our experiment. Note that the ratio of true versus false news we used (1:1) is different from the real world, which typically is thought to contain a much smaller fraction of false news. A more ecologically valid experiment with a more representative sample could yield a different response bias. It will, thus, be important for future studies to assess whether participants hold such a bias in the real world, are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.

Saturday, November 12, 2022

Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information

Litovsky, Y., Loewenstein, G., et al.
PNAS, Vol. 119, No. 34
August 23, 2022


We often talk about interacting with information as we would with a physical good (e.g., “consuming content”) and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., “holding on to” or “letting go of” our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory—loss aversion and different risk preferences for gains versus losses—also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more, simply by virtue of “owning” it. Study 3 provides a conceptual replication of the classic “Asian Disease” gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.


We build on Abelson and Prentice’s conjecture that beliefs are not merely valued as guides to interacting with the world, but as cherished possessions. Extending this idea to information, we show that three key phenomena which characterize the valuation of money and material goods—loss aversion, the endowment effect, and the gain-loss framing effect—also apply to noninstrumental information. We discuss, more generally, how the analogy between noninstrumental information and material goods can help make sense of the complex ways in which people deal with the huge expansion of available information in the digital age.

From the Discussion

Economists have traditionally treated the value of information as derivative of its consequences for decision-making. While prior research on noninstrumental information has shown that this narrow view of information may be incomplete, only a few accounts have attempted to explain intrinsic preferences for information. One such account argues that people seek (or avoid) information inasmuch as doing so helps them maintain their cherished beliefs. Another proposes that people choose which information to seek or avoid by considering how it will impact their actions, affect, and cognition. Yet, outside of the curiosity literature, no existing account of information valuation considers preferences for information that has neither instrumental nor (concrete) hedonic value. By showing that key features of Prospect Theory’s value function also apply to individuals’ valuation of (even noninstrumental) information, the current paper suggests that we may also value information in some of the same fundamental ways that we value physical goods.
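The Prospect Theory value function the authors extend to information is commonly written as v(x) = x^α for gains and v(x) = −λ(−x)^β for losses, with λ > 1 capturing loss aversion. A minimal sketch using Tversky and Kahneman's (1992) median parameter estimates (α = β = 0.88, λ = 2.25); the dollar amounts are purely illustrative:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value function with Tversky & Kahneman's (1992)
    median parameter estimates; lam > 1 encodes loss aversion."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A loss looms larger than an objectively equal gain:
gain = value(100)    # subjective value of gaining $100
loss = value(-100)   # subjective value of losing $100
```

Because |v(−x)| = λ · v(x), losing an item (or, on the paper's account, a piece of information one "owns") hurts about 2.25 times as much as gaining it pleases, which is the asymmetry behind both loss aversion and the endowment effect.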

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition, Volume 223, June 2022, 105021


Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Sunday, March 27, 2022

Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing

Dorison, C. A., & Heller, B. H. (2022).
Journal of Experimental Psychology: General. Advance online publication.


A large interdisciplinary body of research on human judgment and decision making documents systematic deviations between prescriptive decision models (i.e., how individuals should behave) and descriptive decision models (i.e., how individuals actually behave). One canonical example is the loss–gain framing effect on risk preferences: the robust tendency for risk preferences to shift depending on whether outcomes are described as losses or gains. Traditionally, researchers argue that decision makers should always be immune to loss–gain framing effects. We present three preregistered experiments (N = 1,954) that qualify this prescription. We predict and find that while third-party observers penalize decision makers who make risk-averse (vs. risk-seeking) choices when choice outcomes are framed as losses, this result reverses when outcomes are framed as gains. This reversal holds across five social perceptions, three decision contexts, two sample populations of United States adults, and with financial stakes. This pattern is driven by the fact that observers themselves fall victim to framing effects and socially derogate (and financially punish) decision makers who disagree. Given that individuals often care deeply about their reputation, our results challenge the long-standing prescription that they should always be immune to framing effects. The results extend understanding not only for decision making under risk, but also for a range of behavioral tendencies long considered irrational biases. Such understanding may ultimately reveal not only why such biases are so persistent but also novel interventions: our results suggest a necessary focus on social and organizational norms.

From the General Discussion

But what makes an optimal belief or choice? Here, we argue that an expanded focus on the goals decision makers themselves hold (i.e., reputation management) questions whether such deviations from rational-agent models should always be considered suboptimal. We test this broader theorizing in the context of loss-gain framing effects on risk preferences not because we think the psychological dynamics at play are unique to this context, but rather because such framing effects have been uniquely influential for both academic discourse and applied interventions in policy and organizations. In fact, the results hold preliminary implications not only for decision making under risk, but also for extending understanding of a range of other behavioral tendencies long considered irrational biases in the research literature on judgment and decision making (e.g., sunk cost bias; see Dorison, Umphres, & Lerner, 2021).

An important clarification of our claims merits note. We are not claiming that it is always rational to be biased just because others are. For example, it would be quite odd to claim that someone is rational for believing that eating sand provides enough nutrients to survive, simply because others may like them for holding this belief or because others in their immediate social circle hold this belief. In this admittedly bizarre case, it would still be clearly irrational to attempt to subsist on sand, even if there are reputational advantages to doing so—that is, the costs substantially outweigh the reputational benefits. In fact, the vast majority of framing effect studies in the lab do not have an explicit reputational/strategic component at all. 
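The loss–gain framing effect at issue follows the canonical "Asian Disease" structure, in which formally equivalent options elicit opposite risk preferences depending on how outcomes are described. A minimal sketch of that equivalence (the 600/200/400 figures are the classic textbook illustration, not this paper's stimuli):

```python
# Classic framing problem: 600 people at risk.
# Gain frame:  A) 200 saved for sure    B) 1/3 chance all 600 saved
# Loss frame:  C) 400 die for sure      D) 2/3 chance all 600 die
TOTAL = 600

expected_survivors = {
    "sure_gain":  200,                       # option A
    "risky_gain": (1 / 3) * TOTAL,           # option B
    "sure_loss":  TOTAL - 400,               # option C
    "risky_loss": TOTAL - (2 / 3) * TOTAL,   # option D
}

# All four options have the same expected outcome, yet people typically
# choose the sure option in the gain frame (A over B) and the gamble in
# the loss frame (D over C).
```

Because the options are mathematically identical across frames, the traditional prescription is that choices should not shift; Dorison and Heller's finding is that observers nevertheless reward decision makers whose choices do shift with the frame.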

Saturday, January 22, 2022

Social threat indirectly increases moral condemnation via thwarting fundamental social needs

Henderson, R.K., Schnall, S. 
Sci Rep 11, 21709 (2021).


Individuals who experience threats to their social needs may attempt to avert further harm by condemning wrongdoers more severely. Three pre-registered studies tested whether threatened social esteem is associated with increased moral condemnation. In Study 1 (N = 381) participants played a game in which they were socially included or excluded and then evaluated the actions of moral wrongdoers. We observed an indirect effect: Exclusion increased social needs-threat, which in turn increased moral condemnation. Study 2 (N = 428) was a direct replication, and also showed this indirect effect. Both studies demonstrated the effect across five moral foundations, and the effect was most pronounced for harm violations. Study 3 (N = 102) examined dispositional concerns about social needs threat, namely social anxiety, and showed a positive correlation between this trait and moral judgments. Overall, results suggest threatened social standing is linked to moral condemnation, presumably because moral wrongdoers pose a further threat when one’s ability to cope is already compromised.

From the General Discussion

These findings indicating that social threat is associated with harsher moral judgments suggest that various threats to survival can influence assessments of moral wrongdoing. Indeed, it has been proposed that the reason social exclusion reliably results in negative emotions is that social disconnectedness has been detrimental throughout human societies. As we found in Studies 1 and 2, and consistent with prior research, even brief exclusion via a simulated computer game can thwart fundamental social needs. Taken together, these experimental and correlational findings suggest that an elevated sense of danger appears to fortify moral judgment, because when safety is compromised, wrongdoers represent yet another source of potential danger. As a consequence, vulnerable individuals may be motivated to condemn moral violations more harshly. Interestingly, the null finding for loneliness suggests that amplified moral condemnation is associated not with having no social connections in the first place, but rather with the existence or prospect of social threat. Relatedly, prior research has shown that greater cortisol release is associated with social anxiety but not with loneliness, indicating that the body’s stress response does not react to loneliness in the same way as it does to social threat.

Thursday, December 16, 2021

The hidden ‘replication crisis’ of finance

Robin Wigglesworth 
Financial Times
Originally published 15 NOV 2021

Here is an excerpt:

Is investing suffering from something similar?

That is the incendiary argument of Campbell Harvey, professor of finance at Duke University. He reckons that at least half of the 400 supposedly market-beating strategies identified in top financial journals over the years are bogus. Worse, he worries that many fellow academics are in denial about this.

“It’s a huge issue,” he says. “Step one in dealing with the replication crisis in finance is to accept that there is a crisis. And right now, many of my colleagues are not there yet.”

Harvey is not some obscure outsider or performative contrarian attempting to gain attention through needless controversy. He is the former editor of the Journal of Finance, a former president of the American Finance Association, and an adviser to investment firms like Research Affiliates and Man Group.


Obviously, the stakes of the replication crisis are much higher in medicine, where lives can be in play. But it is not something that remains confined to the ivory towers of business schools, as investment groups often smell an opportunity to sell products based on apparently market-beating factors, Harvey argues. “It filters into the real world,” he says. “It definitely makes it into people’s portfolios.”
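Harvey’s core statistical point is a multiple-testing problem: run enough backtests on the same data and some strategies with no real edge will clear conventional significance thresholds by chance. The sketch below illustrates that mechanism only — the return distribution, sample size, and naive t > 2 cutoff are assumptions for the demonstration, not Harvey’s actual analysis:

```python
import random
import statistics

random.seed(42)

def simulate_strategy(n_months=120):
    """Monthly returns of a strategy with zero true edge (mean 0)."""
    return [random.gauss(0, 0.04) for _ in range(n_months)]

def t_statistic(returns):
    """One-sample t-statistic of the mean return against zero."""
    n = len(returns)
    return statistics.mean(returns) / (statistics.stdev(returns) / n ** 0.5)

# Roughly the number of published "market-beating" factors Harvey cites.
n_strategies = 400

# Count zero-edge strategies that clear a naive t > 2 significance bar.
discoveries = sum(1 for _ in range(n_strategies)
                  if t_statistic(simulate_strategy()) > 2.0)

print(f"{discoveries} of {n_strategies} zero-edge strategies look 'significant'")
```

Even with zero true skill everywhere, a handful of strategies will typically clear the bar, which is why Harvey and others argue for much stricter evidence thresholds when hundreds of factors have been tried.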

Saturday, August 7, 2021

Character-Infused Ethical Decision Making

Nguyen, B., Crossan, M. 
J Bus Ethics (2021). 


Despite a growing body of research by management scholars to understand and explain failures in ethical decision making (EDM), misconduct prevails. Scholars have identified character, founded in virtue ethics, as an important perspective that can help to address the gap in organizational misconduct. While character has been offered as a valid perspective in EDM, current theorizing on how it applies to EDM has not been well developed. We thus integrate character, founded in virtue ethics, into Rest’s (1986) EDM model to reveal how shifting attention to the nature of the moral agent provides critical insights into decision making more broadly and EDM specifically. Virtue ethics provides a perspective on EDM that acknowledges and anticipates uncertainties, considers its contextual constraints, and contemplates the development of the moral agent. We thus answer the call by many scholars to integrate character in EDM in order to advance the understanding of the field and suggest propositions for how to move forward. We conclude with implications of a character-infused approach to EDM for future research.

From the Conclusion

As described at the outset, misconduct occurs in every facet of organizational life from the individual to the collective, at a localized and global scale, and covers inappropriate action in the private (2008 financial crisis), public/government (Panama Papers), academic institutions (Varsity Blues Scandal), and even touches not-for-profit organizations (IOC doping scandal). Character highlights the fact that misconduct often arises from “too much of a good thing” (Antonakis et al., 2017); when one or a set of character dimensions are privileged over the others, leading to deficiencies in those undervalued dimensions. For example, those who contributed to the financial crisis and the Panama Paper scandal were likely high on drive but deficient in justice and humanity. Thus, future research could insert character into the equation to better understand the nature of misconduct.

Saturday, April 24, 2021

Bias Blind Spot: Structure, Measurement, and Consequences

Irene Scopelliti et al.
Management Science, 61(10)


People exhibit a bias blind spot: they are less likely to detect bias in themselves than in others. We report the development and validation of an instrument to measure individual differences in the propensity to exhibit the bias blind spot that is unidimensional, internally consistent, has high test-retest reliability, and is discriminated from measures of intelligence, decision-making ability, and personality traits related to self-esteem, self-enhancement, and self-presentation. The scale is predictive of the extent to which people judge their abilities to be better than average for easy tasks and worse than average for difficult tasks, ignore the advice of others, and are responsive to an intervention designed to mitigate a different judgmental bias. These results suggest that the bias blind spot is a distinct metabias resulting from naïve realism rather than other forms of egocentric cognition, and has unique effects on judgment and behavior.


We find that bias blind spot is a latent factor in self-assessments of relative vulnerability to bias. This meta-bias affected the majority of participants in our samples but exhibited considerable variance across participants. We present a concise, reliable, and valid measure of individual differences in bias blind spot that has the ability to predict related biases in self-assessment, advice taking, and responsiveness to bias-reduction training. Given the influence of bias blind spot on consequential judgments and decisions, as well as receptivity to training, this measure may prove useful across a broad range of domains such as personnel assessment, information analysis, negotiation, consumer decision making, and education.

Monday, January 18, 2021

We Decoded The Symbols From The Storming Of The Capitol

We looked through hours of footage from the Capitol riot to decode the symbols that Trump supporters brought with them, revealing some ongoing threats to US democracy.

Tuesday, September 29, 2020

Do We Listen to Advice Just Because We Paid for It? The Impact of Cost of Advice on Its Use

Gino, F. (2008). 
Organizational Behavior and Human Decision Processes, 107(2), 234–245.


When facing a decision, people often rely on advice received from others. Previous studies have shown that people tend to discount others' opinions. Yet, such discounting varies according to several factors. This paper isolates one of these factors: the cost of advice. Specifically, three experiments investigate whether the cost of advice, independent of its quality, affects how people use advice. The studies use the Judge-Advisor System (JAS) to investigate whether people value advice from others more when it costs money than when it is free, and examine the psychological processes that could account for this effect. The results show that people use paid advice significantly more than free advice and suggest that this effect is due to the same forces that have been documented in the literature to explain the sunk costs fallacy. Implications for circumstances under which people value others' opinions are discussed.

From the Discussion

Many of the decisions people make on a daily basis result from weighing their own opinions with advice from other sources. The present work explored one factor that might affect the use of advice: advice cost. In particular, the initial hypothesis was that, independent of its quality, people would weigh advice significantly more when it costs money than when it is free. This hypothesis was tested in three experiments requiring participants to answer questions about US history with or without advice from others.  The results of the studies show that participants relied more heavily on advice when it cost money than when it was free. The results also suggest that this paid-advice effect is due to the same forces that have been documented in the literature to explain prior instances of the sunk costs fallacy.  

The cost of advice affected the degree to which participants used advice but did not affect the value gained by following it. In the studies, advice came from another participant who was randomly chosen on a question-by-question basis. On average, advisors were as informed and knowledgeable as judges; in fact, individuals who were history experts could not participate in the studies. Moreover, participants had no opportunity to assess the accuracy of advisors’ estimates, nor to assess the accuracy of their own, as no performance feedback was provided. When advice cost money, participants weighed their personal opinions less than others’; when advice was free, they instead weighed their personal opinions more than others’.
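Advice use in Judge-Advisor System studies is commonly quantified with a weight-of-advice (WOA) score: 0 means the judge ignored the advice, 1 means the advice was fully adopted. A minimal sketch using the standard formula from the advice-taking literature (the numbers and variable names are illustrative; the paper’s exact operationalization may differ):

```python
def weight_of_advice(initial, advice, final):
    """Fraction of the distance from the judge's initial estimate toward
    the advisor's estimate that the final estimate covers.
    0 = advice ignored, 1 = advice fully adopted."""
    if advice == initial:
        return None  # undefined when the advice matches the initial estimate
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))  # clip to [0, 1], as is common in this literature

# Hypothetical judge answering a US-history date question:
print(weight_of_advice(initial=1850, advice=1870, final=1862))  # prints 0.6
print(weight_of_advice(initial=1850, advice=1870, final=1852))  # prints 0.1
```

On this measure, the paper’s finding amounts to paid advice producing systematically higher WOA scores than free advice of identical quality.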

Monday, July 13, 2020

Our Minds Aren’t Equipped for This Kind of Reopening

The Atlantic
Originally published 6 July 20

Here is the conclusion:

At the least, government agencies must promulgate clear, explicit norms and rules to facilitate cooperative choices. Most people congregating in tight spaces are telling themselves a story about why what they are doing is okay. Such stories flourish under confusing or ambivalent norms. People are not irrevocably chaotic decision makers; the level of clarity in human thinking depends on how hard a problem is. I know with certainty whether I’m staying home, but the confidence interval around “I am being careful” is really wide. Concrete guidance makes challenges easier to resolve. If masks work, states and communities should require them unequivocally. Cognitive biases are the reason to mark off six-foot spaces on the supermarket floor or circles in the grass at a park.

For social-distancing shaming to be a valuable public-health tool, average citizens should reserve it for overt defiance of clear official directives—failure to wear a mask when one is required—rather than mere cases of flawed judgment. In the meantime, money and power are located in public and private institutions that have access to public-health experts and the ability to propose specific behavioral norms. The bad judgments that really deserve shaming include the failure to facilitate testing, failure to protect essential workers, failure to release larger numbers of prisoners from facilities that have become COVID-19 hot spots, and failure to create the material conditions that permit strict isolation. America’s half-hearted reopening is a psychological morass, a setup for defeat that will be easy to blame on irresponsible individuals while culpable institutions evade scrutiny.

The info is here.

Friday, June 19, 2020

Better Minds, Better Morals: A Procedural Guide to Better Judgment

Schaefer GO, Savulescu J.
J Posthum Stud. 2017;1(1):26‐43.


Making more moral decisions - an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we're employing are sound and reliable. We identify four broad categories of judgment to be targeted - cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A pdf is here.

Thursday, May 7, 2020

What Is 'Decision Fatigue' and How Does It Affect You?

Rachel Fairbank
Originally published 14 April 20

Here is an excerpt:

Too many decisions result in emotional and mental strain

“These are legitimately difficult decisions,” Fischhoff says, adding that people shouldn’t feel bad about struggling with them. “Feeling bad is adding insult to injury,” he says.

This added complexity to our decisions is leading to decision fatigue, which is the emotional and mental strain that comes when we are forced to make too many choices. Decision fatigue is the reason why thinking through a decision is harder when we are stressed or tired.

“These are difficult decisions because the stakes are often really high, while we are required to master unfamiliar information,” Fischhoff says.

But if all of this sounds like too much, there are actions we can take to reduce decision fatigue. For starters, it’s best to minimize the number of small decisions you make in a day, such as what to eat for dinner or what to wear. The fewer small decisions you have to make, the more bandwidth you’ll have for the bigger ones.

For this particular crisis, there are a few more steps you can take, in order to reduce your decision fatigue.

The info is here.

Tuesday, May 5, 2020

How stress influences our morality

Lucius Caviola and Nadira Faulmüller
Oxford Martin School


Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than nonstressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

From the Implications Section

The conclusions drawn from these studies seem to raise an important question: if our moral judgments are so dependent on stress, which of our judgments should we rely on—the ones elicited by stress or the ones we come to after careful consideration? Most people would probably not regard a physiological reaction, such as stress, as a relevant normative factor that should have a qualified influence on our moral values. Instead, our reflective moral judgments seem to represent better what we really care about. This should make us suspicious of the normative validity of emotional intuitions in general. Thus, in order to identify our moral values, we should not blindly follow our gut reactions, but try to think more deliberately about what we care about.

For example, as stated, we might be more prone to help a poor beggar on the street when we are stressed. Here, even after careful reflection we might come to the conclusion that this emotional reaction elicited by stress is the morally right thing to do after all. However, in other situations this might not be the case. As we have seen, we are less prone to donate money to charity when stressed (cf. Vinkers et al., 2013). But is this reaction really in line with what we consider to be the morally right thing to do after careful reflection? After all, if we care about the well-being of the single beggar, why then should the many more people’s lives, potentially benefiting from our donation, count less?

The research is here.

Tuesday, March 10, 2020

The Perils of “Survivorship Bias”

Katy Milkman
Scientific American
Originally posted 11 Feb 20

Here is an excerpt:

My colleagues and I, we’ve been spending a lot of time looking at medical decision-making. Say you walk into an emergency room, and you might or might not be having a heart attack. If I test you, I learn whether I’m making a good decision or not. But if I say, “It’s unlikely, so I’ll just send her home,” it’s almost the opposite of survivorship bias. I never get to learn if I made a good decision. And this is supercommon, not just in medicine but in every profession.

Similarly, there was work done showing that people who had car accidents were also more likely to have cancer. It was kind of a puzzle until you think, “Wait, who do we measure cancer in?” We don’t measure cancer in everybody. We measure cancer in people who have been tested. And who do we test? We test people who are in hospitals. So someone goes to the hospital for a car accident, and then I do an MRI and find a tumor. And now that leads to car accidents appearing to elevate the level of tumors. So anything that gets you into hospitals raises your “cancer rate,” but that’s not your real cancer rate.

That’s one of my favorite examples, because it really illustrates how even with something like cancer, we’re not actually measuring it without selection bias, because we only measure it in a subset of the population.
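The car-accident example is a classic selection-bias pattern, and a tiny simulation (all rates invented for illustration) makes it concrete: cancer and accidents are generated independently, yet because scans happen mostly in hospitals, the measured cancer rate in the accident group comes out several times higher.

```python
import random

random.seed(0)

N = 100_000
TRUE_CANCER_RATE = 0.05    # invented for illustration
ACCIDENT_RATE = 0.02       # drawn independently of cancer by construction
BASELINE_SCAN_RATE = 0.10  # chance a non-accident person gets imaged anyway

# group -> [detected cancers, people in group]
detected = {"accident": [0, 0], "no_accident": [0, 0]}

for _ in range(N):
    has_cancer = random.random() < TRUE_CANCER_RATE
    had_accident = random.random() < ACCIDENT_RATE
    # Accident victims end up in hospital and get imaged; others rarely do.
    scanned = had_accident or random.random() < BASELINE_SCAN_RATE
    group = "accident" if had_accident else "no_accident"
    detected[group][1] += 1
    if has_cancer and scanned:
        detected[group][0] += 1

for group, (cases, people) in detected.items():
    print(f"{group}: measured cancer rate = {cases / people:.3%}")
```

The true cancer rate is identical in both groups; only the probability of being measured differs, which is exactly the “who do we measure cancer in?” point from the interview.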

How can people avoid falling prey to these kinds of biases?

Look at your life and where you get feedback and ask, “Is that feedback selected, or am I getting unvarnished feedback?”

Whatever the claim—it could be “I’m good at blank” or “Wow, we have a high hit rate” or any sort of assessment—then you think about where the data comes from. Maybe it’s your past successes. And this is the key: Think about what the process that generated the data is. What are all the other things that could have happened that might have led me to not measure it? In other words, if I say, “I’m great at interviewing,” you say, “Okay. Well, what data are you basing that on?” “Well, my hires are great.” You can counter with, “Have you considered the people who you have not hired?”

The info is here.

Monday, March 2, 2020

Folk standards of sound judgment: Rationality vs. Reasonableness

Igor Grossmann et al.
PsyArXiv Preprints
Last edited on 10 Jan 20


Normative theories of judgment either focus on rationality – decontextualized preference maximization, or reasonableness – the pragmatic balance of preferences and socially-conscious norms. Despite centuries of work on such concepts, a critical question appears overlooked: How do people’s intuitions and behavior align with the concepts of rationality from game theory and reasonableness from legal scholarship? We show that laypeople view rationality as abstract and preference-maximizing, simultaneously viewing reasonableness as social-context-sensitive and socially-conscious, as evidenced in spontaneous descriptions, social perceptions, and linguistic analyses of the terms in cultural products (news, soap operas, legal opinions, and Google books). Further, experiments among North Americans and Pakistani bankers, street merchants, and samples engaging in exchange (vs. market-) economy show that rationality and reasonableness lead people to different conclusions about what constitutes good judgment in Dictator Games, Commons Dilemma and Prisoner’s Dilemma: Lay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.

The research is here.