Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, May 31, 2023

Can AI language models replace human participants?

Dillon, D., Tandon, N., Gu, Y., & Gray, K.
Trends in Cognitive Sciences
May 10, 2023

Abstract

Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.

(cut)

Does GPT make human-like judgments?

We initially doubted the ability of LLMs to capture human judgments but, as we detail in Box 1, the moral judgments of GPT-3.5 were extremely well aligned with human moral judgments in our analysis (r = 0.95; full details at https://nikett.github.io/gpt-as-participant). Human morality is often argued to be especially difficult for language models to capture, and yet we found powerful alignment between GPT-3.5 and human judgments.
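
For readers who want to run this kind of alignment check themselves, here is a minimal sketch of the analysis described above. It is written in Python with invented ratings; the scenario values are placeholders, not the authors' data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical mean moral-acceptability ratings for the same set of
# scenarios, one value per scenario: one vector from human participants,
# one from a language model. These numbers are made up for illustration.
human_ratings = np.array([1.2, 2.8, 4.5, 3.9, 1.7, 4.8, 2.1, 3.3])
model_ratings = np.array([1.0, 3.1, 4.4, 4.2, 1.5, 4.9, 2.4, 3.0])

r, p = pearsonr(human_ratings, model_ratings)
print(f"human-model alignment: r = {r:.2f} (p = {p:.3f})")
```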

We emphasize that this finding is just one anecdote, and we do not make any strong claims about the extent to which LLMs make human-like judgments, moral or otherwise. Language models might also be especially good at predicting moral judgments because moral judgments heavily hinge on the structural features of scenarios, including the presence of an intentional agent, the causation of damage, and a vulnerable victim, features that language models may have an easy time detecting.  However, the results are intriguing.

Other researchers have empirically demonstrated GPT-3's ability to simulate human participants in domains beyond moral judgments, including predicting voting choices, replicating behavior in economic games, and displaying human-like problem solving and heuristic judgments on scenarios from cognitive psychology. LLM studies have also replicated classic social science findings, including the Ultimatum Game and the Milgram experiment. One company (http://syntheticusers.com) is expanding on these findings, building infrastructure to replace human participants and offering ‘synthetic AI participants’ for studies.

(cut)

From Caveats and looking ahead

Language models may be far from human, but they are trained on a tremendous corpus of human expression and thus they could help us learn about human judgments. We encourage scientists to compare simulated language model data with human data to see how aligned they are across different domains and populations.  Just as language models like GPT may help to give insight into human judgments, comparing LLMs with human judgments can teach us about the machine minds of LLMs; for example, shedding light on their ethical decision making.

Lurking under the specific concerns about the usefulness of AI language models as participants is an age-old question: can AI ever be human enough to replace humans? On the one hand, critics might argue that AI participants lack the rationality of humans, making judgments that are odd, unreliable, or biased. On the other hand, humans are odd, unreliable, and biased – and other critics might argue that AI is just too sensible, reliable, and impartial.  What is the right mix of rational and irrational to best capture a human participant?  Perhaps we should ask a big sample of human participants to answer that question. We could also ask GPT.

Monday, December 27, 2021

An interaction effect of norm violations on causal judgment

Gill, M., Kominsky, J. F., Icard, T., & Knobe, J.
(2021, October 19).

Abstract

Existing research has shown that norm violations influence causal judgments, and a number of different models have been developed to explain these effects. One such model, the necessity/sufficiency model, predicts an interaction pattern in people's judgments. Specifically, it predicts that when people are judging the degree to which a particular factor is a cause, there should be an interaction between (a) the degree to which that factor violates a norm and (b) the degree to which another factor in the situation violates norms. A study of moral norms (N = 1000) and norms of proper functioning (N = 3000) revealed robust evidence for the predicted interaction effect. The implications of these patterns for existing theories of causal judgments are discussed.

General discussion

Two experiments revealed a novel interaction effect of norm violations on causal judgment. First, the experiments replicated two basic phenomena: a focal event is rated as more causal when it is bad (“inflation”) and a focal event is rated less causal when the alternative event is bad (“supersession”). Critically, the experiments showed that (1) the difference in causal ratings of the focal event when it is good vs. bad increases when the alternative event is bad (“inflation increase”) and (2) the difference in causal ratings of the focal event when the alternative event is bad vs. good decreases when the focal event is bad (“supersession decrease”).
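
A small simulation can make the predicted pattern concrete. The sketch below (Python, simulated ratings, effect sizes chosen arbitrarily rather than taken from the paper) sets up the 2 x 2 design, focal event good/bad crossed with alternative event good/bad, and tests the interaction term:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_cell = 200  # simulated participants per condition

rows = []
for focal_bad in (0, 1):
    for alt_bad in (0, 1):
        # Toy generative model: inflation (focal_bad raises ratings),
        # supersession (alt_bad lowers them), plus the predicted
        # interaction between the two norm violations.
        mean = 4.0 + 1.0 * focal_bad - 1.0 * alt_bad + 0.8 * focal_bad * alt_bad
        for rating in rng.normal(mean, 1.0, n_per_cell):
            rows.append({"focal_bad": focal_bad, "alt_bad": alt_bad,
                         "rating": rating})

df = pd.DataFrame(rows)
fit = smf.ols("rating ~ focal_bad * alt_bad", data=df).fit()
print(fit.summary().tables[1])  # the focal_bad:alt_bad row is the interaction
```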

Experiment 1 yielded this novel interaction effect in the context of moral norm violations (e.g., stealing a book from the library). Experiment 2 showed that the effect generalized to violations of norms of proper functioning (e.g., a part of a machine working incorrectly).

This interaction pattern is predicted by the necessity/sufficiency model (Icard et al.,2017). The success of this prediction is especially striking, in that the necessity/sufficiency model was not created with this interaction in mind. Rather, the model was originally created to explain inflation and supersession, and it was only noticed later that this model predicts an interaction in cases of this type.

Saturday, December 4, 2021

Virtuous Victims

Jordan, Jillian J., and Maryam Kouchaki
Science Advances 7, no. 42 (October 15, 2021).

Abstract

How do people perceive the moral character of victims? We find, across a range of transgressions, that people frequently see victims of wrongdoing as more moral than nonvictims who have behaved identically. Across 17 experiments (total n = 9676), we document this Virtuous Victim effect and explore the mechanisms underlying it. We also find support for the Justice Restoration Hypothesis, which proposes that people see victims as moral because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these “justice-restorative” actions. Our results validate predictions of this hypothesis and suggest that the Virtuous Victim effect does not merely reflect (i) that victims look good in contrast to perpetrators, (ii) that people are generally inclined to positively evaluate those who have suffered, or (iii) that people hold a genuine belief that victims tend to be people who behave morally.

Discussion

Across 17 experiments (total n = 9676), we have documented and explored the Virtuous Victim effect. We find that victims are frequently seen as more virtuous than nonvictims—not because of their own behavior, but because others have mistreated them. We observe this effect across a range of moral transgressions and find evidence that it is not moderated by the victim’s (white versus black) race or gender. Humans ubiquitously—and perhaps increasingly (1, 2)—encounter narratives about immoral acts and their victims. By demonstrating that these narratives have the power to confer moral status, our results shed new light on the ways that victims are perceived by society.

We have also explored the boundaries of the Virtuous Victim effect and illuminated the mechanisms that underlie it. For example, we find that the Virtuous Victim effect may be especially likely to flow from victim narratives that describe a transgression’s perpetrator and are presented by a third-person narrator (or perhaps, more generally, a narrator who is unlikely to be doubted). We also find that the effect is specific to victims of immorality (i.e., it does not extend to accident victims) and to moral virtue (i.e., it does not extend equally to positive but nonmoral traits). Furthermore, the effect shapes perceptions of moral character but not predictions about moral behavior.

We have also evaluated several potential explanations for the Virtuous Victim effect. Ultimately, our results provide evidence for the Justice Restoration Hypothesis, which proposes that people see victims as virtuous because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these justice-restorative actions.

Sunday, November 21, 2021

Moral labels increase cooperation and costly punishment in a Prisoner’s Dilemma game with punishment option

Mieth, L., Buchner, A. & Bell, R.
Sci Rep 11, 10221 (2021). 
https://doi.org/10.1038/s41598-021-89675-6

Abstract

To determine the role of moral norms in cooperation and punishment, we examined the effects of a moral-framing manipulation in a Prisoner’s Dilemma game with a costly punishment option. In each round of the game, participants decided whether to cooperate or to defect. The Prisoner’s Dilemma game was identical for all participants with the exception that the behavioral options were paired with moral labels (“I cooperate” and “I cheat”) in the moral-framing condition and with neutral labels (“A” and “B”) in the neutral-framing condition. After each round of the Prisoner’s Dilemma game, participants had the opportunity to invest some of their money to punish their partners. In two experiments, moral framing increased moral and hypocritical punishment: participants were more likely to punish partners for defection when moral labels were used than when neutral labels were used. When the participants’ cooperation was enforced by their partners’ moral punishment, moral framing increased not only moral and hypocritical punishment but also cooperation. The results suggest that moral framing activates a cooperative norm that specifically increases moral and hypocritical punishment. Furthermore, the experience of moral punishment by the partners may increase the importance of social norms for cooperation, which may explain why moral framing effects on cooperation were found only when participants were subject to moral punishment.
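
To make the game structure easier to follow, here is a rough sketch of one round's payoff logic in Python. The payoff matrix, punishment cost, and fine are placeholder values, not the parameters used by Mieth, Buchner, and Bell:

```python
# Placeholder Prisoner's Dilemma payoffs: (my payoff, partner's payoff).
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def round_payoffs(move_a, move_b, a_punishes=False, b_punishes=False,
                  cost=1, fine=3):
    """One round: PD stage, then an optional costly punishment stage.
    A punisher pays `cost` to deduct `fine` from the partner."""
    pay_a, pay_b = PD[(move_a, move_b)]
    if a_punishes:
        pay_a -= cost
        pay_b -= fine
    if b_punishes:
        pay_b -= cost
        pay_a -= fine
    return pay_a, pay_b

# "Moral punishment": a cooperator punishes a defecting partner.
print(round_payoffs("C", "D", a_punishes=True))  # (-1, 2)
# "Hypocritical punishment": a defector punishes a defecting partner.
print(round_payoffs("D", "D", a_punishes=True))  # (0, -2)
```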

General discussion

In human social life, a large variety of behaviors are regulated by social norms that set standards on how individuals should behave. One of these norms is the norm of cooperation. In many situations, people are expected to set aside their egoistic interests to achieve the collective best outcome. Within economic research, cooperation is often studied in social dilemma games. In these games, the complexities of human social interactions are reduced to their incentive structures. However, human behavior is not only determined by monetary incentives. There are many other important determinants of behavior among which social norms are especially powerful. The participants’ decisions in social dilemma situations are thus affected by their interpretation of whether a certain behavior is socially appropriate or inappropriate. Moral labels can help to reduce the ambiguity of the social dilemma game by creating associations to real-life cooperation norms. Thereby, the moral framing may support a moral interpretation of the social dilemma situation, resulting in the moral rejection of egoistic behaviors. Often, social norms are enforced by punishment. It has been argued “that the maintenance of social norms typically requires a punishment threat, as there are almost always some individuals whose self-interest tempts them to violate the norm” [p. 185]. 

Thursday, October 21, 2021

How Disgust Affects Social Judgments

Inbar, Y., & Pizarro, D.
(2021, September 7). 

Abstract

The emotion of disgust has been claimed to affect a diverse array of social judgments, including moral condemnation, inter-group prejudice, political ideology, and much more. We attempt to make sense of this large and varied literature by reviewing the theory and research on how and why disgust influences these judgments. We first describe two very different perspectives adopted by researchers on why disgust should affect social judgment. The first is the pathogen-avoidance account, which sees the relationship between disgust and judgment as resulting from disgust’s evolved function as a pathogen-avoidance mechanism. The second is the extended disgust account, which posits that disgust functions much more broadly to address a range of other threats and challenges. We then review the empirical evidence to assess how well it supports each of these perspectives, arguing that there is more support for the pathogen-avoidance account than the extended account. We conclude with some testable empirical predictions that can better distinguish between these two perspectives.

Conclusion

We have described two very different perspectives on disgust that posit very different explanations for its role in social judgments. In our view, the evidence currently supports the pathogen-avoidance account over the extended-disgust alternative, but the question is best settled by future research explicitly designed to differentiate the two perspectives.

Wednesday, October 6, 2021

Immoral actors’ meta-perceptions are accurate but overly positive

Lees, J. M., Young, L., & Waytz, A.
(2021, August 16).
https://doi.org/10.31234/osf.io/j24tn

Abstract

We examine how actors think others perceive their immoral behavior (moral meta-perception) across a diverse set of real-world moral violations. Utilizing a novel methodology, we solicit written instances of actors’ immoral behavior (N_total=135), measure motives and meta-perceptions, then provide these accounts to separate samples of third-party observers (N_total=933), using US convenience and representative samples (N_actor-observer pairs=4,615). We find that immoral actors can accurately predict how they are perceived, how they are uniquely perceived relative to the average immoral actor, and how they are misperceived. Actors who are better at judging the motives of other immoral actors also have more accurate meta-perceptions. Yet accuracy is accompanied by two distinct biases: overestimating the positive perceptions others hold, and believing one’s motives are more clearly perceived than they are. These results contribute to a detailed account of the multiple components underlying both accuracy and bias in moral meta-perception.

From the General Discussion

These results collectively suggest that individuals who have engaged in immoral behavior can accurately forecast how others will react to their moral violations.  

Studies 1-4 also found similar evidence for accuracy in observers’ judgments of the unique motives of immoral actors, suggesting that individuals are able to successfully perspective-take with those who have committed moral violations. Observers higher in cognitive ability (Studies 2-3) and empathic concern (Studies 2-4) were consistently more accurate in these judgments, while observers higher in Machiavellianism (Studies 2-4) and the propensity to engage in unethical workplace behaviors (Studies 3-4) were consistently less accurate. This latter result suggests that more frequently engaging in immoral behavior does not grant one insight into the moral minds of others, and in fact is associated with less ability to understand the motives behind others’ immoral behavior.

Despite strong evidence for meta-accuracy (and observer accuracy) across studies, actors’ accuracy in judging how they would be perceived was accompanied by two judgment biases.  Studies 1-4 found evidence for a transparency bias among immoral actors (Gilovich et al., 1998), meaning that actors overestimated how accurately observers would perceive their self-reported moral motives. Similarly, in Study 4 an examination of actors’ meta-perception point estimates found evidence for a positivity bias. Actors systematically overestimate the positive attributions, and underestimate the negative attributions, made of them and their motives. In fact, the single meta-perception found to be the most inaccurate in its average point estimate was the meta-perception of harm caused, which was significantly underestimated.
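
As an illustration of how tracking accuracy and positivity bias can come apart, here is a minimal sketch with invented numbers (not the study's data): accuracy is the correlation between actors' meta-perceptions and observers' actual ratings, while the signed mean error captures the bias.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical negativity ratings (higher = judged more harshly).
# meta: how negatively each actor expects to be rated;
# observed: how negatively observers actually rated them.
meta = np.array([3.0, 4.5, 2.0, 5.0, 3.5, 4.0, 2.5, 4.8])
observed = np.array([3.8, 5.0, 2.9, 5.5, 4.1, 4.6, 3.4, 5.2])

r, _ = pearsonr(meta, observed)
signed_error = (meta - observed).mean()
print(f"meta-accuracy (tracking): r = {r:.2f}")
print(f"mean signed error: {signed_error:+.2f} "
      "(negative = actors underestimate how negatively they are seen, "
      "i.e., a positivity bias)")
```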

Monday, June 14, 2021

Bias Is a Big Problem. But So Is ‘Noise.’

Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
The New York Times
Originally posted 15 May 21

Here is an excerpt:

There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). 

Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. 

We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). 
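
A quick simulation shows why this works; the numbers below are toy values, not drawn from the article. Averaging independent judgments cancels noise, so the error of the average shrinks roughly as one over the square root of the number of judges:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0       # the quantity being judged
noise_sd = 20.0     # assumed spread of individual judgments
n_trials = 10_000

for n_judges in (1, 4, 16, 64):
    # Each judge's estimate = truth + independent noise (no shared bias).
    judgments = truth + rng.normal(0, noise_sd, size=(n_trials, n_judges))
    rmse = np.sqrt(((judgments.mean(axis=1) - truth) ** 2).mean())
    print(f"{n_judges:3d} judges: RMSE of the averaged judgment = {rmse:5.1f}")
# Caveat: averaging removes noise, but not a bias that all judges share.
```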

Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. 

As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. 

Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. 

Organizations and institutions, public and private, will make better decisions if they take noise seriously.

Wednesday, May 26, 2021

Before You Answer, Consider the Opposite Possibility—How Productive Disagreements Lead to Better Outcomes

Ian Leslie
The Atlantic
Originally published 25 Apr 21

Here is an excerpt:

This raises the question of how a wise inner crowd can be cultivated. Psychologists have investigated various methods. One, following Stroop, is to harness the power of forgetting. Reassuringly for those of us who are prone to forgetting, people with poor working memories have been shown to have a wiser inner crowd; their guesses are more independent of one another, so they end up with a more diverse set of estimates and a more accurate average. The same effect has been achieved by spacing the guesses out in time.

More sophisticated methods harness the mind’s ability to inhabit different perspectives and look at a problem from more than one angle. People generate more diverse estimates when prompted to base their second or third guess on alternative assumptions; one effective technique is simply asking people to “consider the opposite” before giving a new answer. A fascinating recent study in this vein harnesses the power of disagreement itself. A pair of Dutch psychologists, Philippe Van de Calseyde and Emir Efendić, asked people a series of questions with numerical answers, such as the percentage of the world’s airports located in the U.S. Then they asked participants to think of someone in their life with whom they often disagreed—that uncle with whom they always argue about politics—and to imagine what that person would guess.

The respondents came up with second estimates that were strikingly different from their first estimate, producing a much more accurate inner crowd. The same didn’t apply when they were asked to imagine how someone they usually agree with would answer the question, which suggests that the secret is to incorporate the perspectives of people who think differently from us. That the respondents hadn’t discussed that particular question with their disagreeable uncle did not matter. Just the act of thinking about someone with whom they argued a lot was enough to jog them out of habitual assumptions.
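
A toy model of this "inner crowd" logic (invented numbers, not the Dutch study's data): the benefit of averaging two of your own guesses depends on how correlated their errors are, and imagining a disagreeing other is one way to lower that correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
sd = 10.0          # assumed error spread of a single guess
n = 100_000        # simulated guessers

for rho in (0.9, 0.5, 0.1):  # error correlation between guess 1 and guess 2
    cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
    errors = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    rmse_avg = np.sqrt((errors.mean(axis=1) ** 2).mean())
    print(f"error correlation {rho:.1f}: single-guess RMSE = {sd:.1f}, "
          f"averaged RMSE = {rmse_avg:.1f}")
# The less correlated the two guesses' errors, the more averaging helps.
```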

Sunday, November 8, 2020

Where loneliness can lead

Samantha Rose Hill
aeon.co
Originally published 16 Oct 20

Here is an excerpt:

Why is loneliness not obvious?

Arendt’s answer was: because loneliness radically cuts people off from human connection. She defined loneliness as a kind of wilderness where a person feels deserted by all worldliness and human companionship, even when surrounded by others. The word she used in her mother tongue for loneliness was Verlassenheit – a state of being abandoned, or abandon-ness. Loneliness, she argued, is ‘among the most radical and desperate experiences of man’, because in loneliness we are unable to realise our full capacity for action as human beings. When we experience loneliness, we lose the ability to experience anything else; and, in loneliness, we are unable to make new beginnings.

In order to illustrate why loneliness is the essence of totalitarianism and the common ground of terror, Arendt distinguished isolation from loneliness, and loneliness from solitude. Isolation, she argued, is sometimes necessary for creative activity. Even the mere reading of a book, she says, requires some degree of isolation. One must intentionally turn away from the world to make space for the experience of solitude but, once alone, one is always able to turn back.

Totalitarianism uses isolation to deprive people of human companionship, making action in the world impossible, while destroying the space of solitude. The iron band of totalitarianism, as Arendt calls it, destroys man’s ability to move, to act, and to think, while turning each individual in his lonely isolation against all others, and himself. The world becomes a wilderness, where neither experience nor thinking is possible.

Thursday, October 29, 2020

Probabilistic Biases Meet the Bayesian Brain.

Chater, N., et al. (2020).
Current Directions in Psychological Science, 29(5), 506–512.
https://doi.org/10.1177/0963721420954801

Abstract

In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
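
Here is a minimal sketch of the sampling hypothesis in Python (illustrative assumptions only, not the authors' model): if each judgment rests on a handful of mental samples, individual probability estimates come out extreme and variable even though their long-run average is right.

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.05       # true probability of some event
n_people = 10_000   # simulated judges

for k in (3, 10, 100):  # mental samples each judge draws
    estimates = rng.binomial(k, p_true, n_people) / k
    share_zero = (estimates == 0).mean()
    print(f"k = {k:3d}: mean estimate = {estimates.mean():.3f}, "
          f"SD = {estimates.std():.3f}, "
          f"judged impossible by {share_zero:.0%} of judges")
# With few samples the population mean is still ~0.05, but individual
# judgments are coarse and extreme, one sampling-based route to classic
# heuristics-and-biases findings.
```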

Introduction

Human probabilistic reasoning gets bad press. Decades of brilliant experiments, most notably by Daniel Kahneman and Amos Tversky (e.g., Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982), have shown a plethora of ways in which people get into a terrible muddle when wondering how probable things are. Every psychologist has learned about anchoring, conservatism, the representativeness heuristic, and many other ways that people reveal their probabilistic incompetence. Creating probability theory in the first place was incredibly challenging, exercising great mathematical minds over several centuries (Hacking, 1990). Probabilistic reasoning is hard, and perhaps it should not be surprising that people often do it badly. This view is the starting point for the whole field of judgment and decision-making (JDM) and its cousin, behavioral economics.

Oddly, though, human probabilistic reasoning equally often gets good press. Indeed, many psychologists, neuroscientists, and artificial-intelligence researchers believe that probabilistic reasoning is, in fact, the secret of human intelligence.

Thursday, April 23, 2020

We Tend To See Acts We Disapprove Of As Deliberate

Jesse Singal
BPS
Research Digest
Originally published 14 April 20

One of the most important and durable findings in moral and political psychology is that there is a tail-wags-the-dog aspect to human morality. Most of us like to think we have carefully thought-through, coherent moral systems that guide our behaviour and judgments. In reality our behaviour and judgments often stem from gut-level impulses, and only after the fact do we build elaborate moral rationales to justify what we believe and do.

A new paper in the Journal of Personality and Social Psychology examines this issue through a fascinating lens: free will. Or, more specifically, via people’s judgments about how much free will others had when committing various transgressions. The team, led by Jim A. C. Everett of the University of Kent and Cory J. Clark of Durham University, ran 14 studies aimed at evaluating the possibility that at least some of the time the moral tail wags the dog: first people decide whether someone is blameworthy, and then judge how much free will they have, in a way that allows them to justify blaming those they want to blame and excusing those they want to excuse.

The researchers examined this hypothesis, for which there is already some evidence, through the lens of American partisan politics. In the paper they note that previous research has shown that conservatives have a greater belief in free will than liberals, and are more moralising in general (that is, they categorise a larger number of acts as morally problematic, and rely on a greater number of principles — or moral foundations — in making these judgments). The first two of the new studies replicated these findings — this is consistent with the idea, put simply, that conservatives believe in free will more because it allows them to level more moral judgments.

The info is here.

Sunday, March 15, 2020

Will Past Criminals Reoffend? (Humans Are Terrible at Predicting; Algorithms Do Better)

Sophie Bushwick
Scientific American
Originally published 14 Feb 2020

Here is an excerpt:

Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism. “It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says. But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly. “There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”
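
As a sketch of what such a head-to-head comparison looks like in code (invented outcomes and risk scores, emphatically not COMPAS or LSI-R data), one can score human and algorithmic predictions on the same cases with AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_cases = 1_000
reoffended = rng.binomial(1, 0.4, n_cases)  # hypothetical outcomes

# Invented risk scores: the "algorithm" is assumed to track outcomes a
# bit more strongly than the "human" guesses, mirroring the study's result.
algo_scores = reoffended + rng.normal(0, 1.0, n_cases)
human_scores = reoffended + rng.normal(0, 1.4, n_cases)

# AUC: probability that a random reoffender outscores a random non-reoffender.
print(f"algorithm AUC: {roc_auc_score(reoffended, algo_scores):.2f}")
print(f"human AUC:     {roc_auc_score(reoffended, human_scores):.2f}")
```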

Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases. For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense. “Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.” Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper. “Part of the problem with this idea that you're going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.

The info is here.

Thursday, March 14, 2019

Actions speak louder than outcomes in judgments of prosocial behavior.

Yudkin, D. A., Prosser, A. M. B., & Crockett, M. J. (2018).
Emotion. Advance online publication.
http://dx.doi.org/10.1037/emo0000514

Abstract

Recently proposed models of moral cognition suggest that people’s judgments of harmful acts are influenced by their consideration both of those acts’ consequences (“outcome value”), and of the feeling associated with their enactment (“action value”). Here we apply this framework to judgments of prosocial behavior, suggesting that people’s judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.
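
The action-value/outcome-value framework can be read as a simple regression. Here is a hedged sketch with simulated data, where the weights are assumptions chosen to echo the paper's claim (greater sensitivity to action than to outcome value), not estimates from it:

```python
import numpy as np

rng = np.random.default_rng(5)
n_deeds = 500

# Hypothetical standardized predictors for each good deed:
action = rng.normal(0, 1, n_deeds)    # how good the deed feels to perform
outcome = rng.normal(0, 1, n_deeds)   # how much benefit it confers
# Toy generative model: praise weights action value more than outcome value.
praise = 0.6 * action + 0.2 * outcome + rng.normal(0, 1, n_deeds)

X = np.column_stack([np.ones(n_deeds), action, outcome])
coefs, *_ = np.linalg.lstsq(X, praise, rcond=None)
print(f"intercept = {coefs[0]:+.2f}, action weight = {coefs[1]:+.2f}, "
      f"outcome weight = {coefs[2]:+.2f}")
```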

Sunday, March 3, 2019

When and why people think beliefs are “debunked” by scientific explanations for their origins

Dillon Plunkett, Lara Buchak, and Tania Lombrozo

Abstract

How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain-based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: People are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes towards scientific explanations.

The research is here.

Friday, November 9, 2018

Believing without evidence is always morally wrong

Francisco Mejia Uribe
aeon.co
Originally posted November 5, 2018

Here are two excerpts:

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

(cut)

Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news peddlers, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. In Clifford’s time, the way in which our beliefs were woven into the ‘precious deposit’ of common knowledge was primarily through speech and writing. Because of this capacity to communicate, ‘our words, our phrases, our forms and processes and modes of thought’ become ‘common property’. Subverting this ‘heirloom’, as he called it, by adding false beliefs is immoral because everyone’s lives ultimately rely on this vital, shared resource.

The info is here.

Monday, September 10, 2018

Cognitive Biases Tricking Your Brain

Ben Yagoda
The Atlantic
September 2018 Issue

Here is an excerpt:

Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves. Instead, it has been devoted to changing behavior, in the form of incentives or “nudges.” For example, while present bias has so far proved intractable, employers have been able to nudge employees into contributing to retirement plans by making saving the default option; you have to actively take steps in order to not participate. That is, laziness or inertia can be more powerful than bias. Procedures can also be organized in a way that dissuades or prevents people from acting on biased thoughts. A well-known example: the checklists for doctors and nurses put forward by Atul Gawande in his book The Checklist Manifesto.

Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative. These experiments are based on the reactions and responses of randomly chosen subjects, many of them college undergraduates: people, that is, who care about the $20 they are being paid to participate, not about modifying or even learning about their behavior and thinking. But what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?

The info is here.

Wednesday, July 25, 2018

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Monday, May 14, 2018

No Luck for Moral Luck

Markus Kneer, University of Zurich; Edouard Machery, University of Pittsburgh
Draft, March 2018

Abstract

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.
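
A rough sketch of the mediation logic behind the third finding, using simulated data and a simple two-regression check (an illustration of the idea, not the authors' analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400

# Toy generative model: a bad outcome raises negligence ascriptions,
# which in turn drive the moral judgment (an indirect, mediated path).
bad_outcome = rng.binomial(1, 0.5, n)
negligence = 0.7 * bad_outcome + rng.normal(0, 1, n)
judgment = 0.5 * negligence + 0.05 * bad_outcome + rng.normal(0, 1, n)
df = pd.DataFrame({"bad_outcome": bad_outcome,
                   "negligence": negligence, "judgment": judgment})

total = smf.ols("judgment ~ bad_outcome", df).fit().params["bad_outcome"]
direct = (smf.ols("judgment ~ bad_outcome + negligence", df)
          .fit().params["bad_outcome"])
print(f"total effect of outcome: {total:.2f}")
print(f"direct effect (controlling for negligence): {direct:.2f}")
# If the direct effect shrinks toward zero once negligence is controlled,
# the outcome effect runs (statistically) through negligence ascriptions.
```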

The manuscript is here.

Monday, April 9, 2018

Do Evaluations Rise With Experience?

Kieran O’Connor and Amar Cheema
Psychological Science 
First Published March 1, 2018

Abstract

Sequential evaluation is the hallmark of fair review: The same raters assess the merits of applicants, athletes, art, and more using standard criteria. We investigated one important potential contaminant in such ubiquitous decisions: Evaluations become more positive when conducted later in a sequence. In four studies, (a) judges’ ratings of professional dance competitors rose across 20 seasons of a popular television series, (b) university professors gave higher grades when the same course was offered multiple times, and (c) in an experimental test of our hypotheses, evaluations of randomly ordered short stories became more positive over a 2-week sequence. As judges completed repeated evaluations, they experienced more fluent decision making, producing more positive judgments (Study 4 mediation). This seemingly simple bias has widespread and impactful consequences for evaluations of all kinds. We also report four supplementary studies to bolster our findings and address alternative explanations.
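
For readers who want to see the basic trend test, here is a minimal sketch with simulated ratings (the drift size and noise level are arbitrary assumptions, not the study's estimates): regress evaluations on their position in the sequence and inspect the slope.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
positions = np.arange(1, 101)  # order of each evaluation in the sequence
# Toy model of the reported drift: a small upward trend plus rater noise.
ratings = 6.0 + 0.01 * positions + rng.normal(0, 0.5, positions.size)

fit = linregress(positions, ratings)
print(f"drift per evaluation: {fit.slope:+.4f} points (p = {fit.pvalue:.3g})")
```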

The article is here.

Sunday, March 11, 2018

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Zapf, P. A., Kukucka, J., Kassin, S. M., & Dror, I. E.
Psychology, Public Policy, and Law

Abstract

Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Closing statements:

What is clear is that forensic evaluators appear to be aware of the issue of bias in general. However, the diminishing rates of perceived susceptibility to bias in one’s own judgments, and the perception of higher rates of bias in others’ judgments than in one’s own, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can proactively mitigate the intrusion of irrelevant information into forensic decision making. This is especially important given the courts’ heavy reliance on evaluators’ opinions (see Zapf, Hubbard, Cooper, Wheeles, & Ronan, 2004), the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias (see Kassin et al., 2013), and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony (see Dror, Morgan, Rando, & Nakhaeizadeh, 2017). More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears ripe for further discussion and development of policies and guidelines to acknowledge and reduce the potential impact of bias in forensic evaluation.

The article is here.