Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Naive Realism.

Wednesday, October 4, 2023

Humans’ Bias Blind Spot and Its Societal Significance

Pronin, E., & Hazel, L. (2023).
Current Directions in Psychological Science, 0(0).

Abstract

Human beings have a bias blind spot. We see bias all around us but sometimes not in ourselves. This asymmetry hinders self-knowledge and fuels interpersonal misunderstanding and conflict. It is rooted in cognitive mechanics differentiating self- and social perception as well as in self-esteem motives. It generalizes across social, cognitive, and behavioral biases; begins in childhood; and appears across cultures. People show a bias blind spot in high-stakes contexts, including investing, medicine, human resources, and law. Strategies for addressing the problem are described.

(cut)

Bias-limiting procedures

When it comes to eliminating bias, attempts to overcome it via conscious effort and educational training are not ideal. A different strategy is worth considering, when possible: preventing people’s biases from having a chance to operate in the first place, by limiting their access to biasing information. Examples include conducting auditions behind a screen (discussed earlier) and blind review of journal submissions. If fully blocking access to potentially biasing information is not possible or carries more costs than benefits, another less stringent option is worth considering, that is, controlling when the information is presented so that potentially biasing information comes late, ideally after a tentative judgment is made (e.g., “sequential unmasking”; Dror, 2018; “temporary cloaking”; Kang, 2021).

Because of the BBS, people can be resistant to procedures like this that limit their access to biasing information (see Fig. 3). For example, forensics experts prefer consciously trying to avoid bias over being shielded from even irrelevant biasing information (Kukucka et al., 2017). When high school teachers and ensemble singers were asked to assess blinding procedures (in auditioning and grading), they opposed them more for their own group than for the other group and even more for themselves personally (Pronin et al., 2022). This opposition is consistent with experiments showing that people are unconcerned about the effects of biasing decision processes when it comes to their own decisions (Hansen et al., 2014). In those experiments, participants made judgments using a biasing decision procedure (e.g., judging the quality of paintings only after looking to see if someone famous painted them). They readily acknowledged that the procedure was biased, nonetheless made decisions that were biased by that procedure, and then insisted that their conclusions were objective. This unwarranted confidence is a barrier to the self-imposition of bias-reducing procedures. It suggests the need for adopting procedures like this at the policy level rather than counting on individuals or their organizations to do so.

A different bias-limiting procedure that may induce resistance for these same reasons, and that therefore may also benefit from institutional or policy-level implementation, involves precommitting to decision criteria (e.g., Norton et al., 2004; Uhlmann & Cohen, 2005). For example, the human resources officer who precommits to judging job applicants more on the basis of industry experience versus educational background cannot then change that emphasis after seeing that their favorite candidate has unusually impressive academic credentials. This logic is incorporated, for example, into the system of allocating donor organs in the United States, which has explicit and predetermined criteria for making those allocations in order to avoid the possibility of bias in this high-stakes arena. When decision makers are instructed to provide objective criteria for their decision not before making that decision but rather when providing it—that is, the more typical request made of them—this not only makes bias more likely but also, because of the BBS, may even leave decision makers more confident in their objectivity than if they had not been asked to provide those criteria at all.

Here's my brief summary:

The article discusses the concept of the bias blind spot, which refers to people's tendency to recognize bias in others more readily than in themselves. Studies have consistently shown that people rate themselves as less susceptible to various biases than the average person. The bias blind spot occurs even for well-known biases that people readily accept exist. This blind spot has important societal implications, as it impedes recognition of one's own biases. It also leads to assuming others are more biased than oneself, resulting in decreased trust. Overcoming the bias blind spot is challenging but important for issues from prejudice to politics. It requires actively considering one's own potential biases when making evaluations about oneself or others.

Sunday, May 30, 2021

Win–Win Denial: The Psychological Underpinnings of Zero-Sum Thinking

Johnson, S. G. B., Zhang, J., & Keil, F. 
(2020, April 30).
https://doi.org/10.31234/osf.io/efs5y

Abstract

A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across 4 studies (and 8 further studies in the Supplementary Materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterwards. These studies revealed that win–win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win–win denial are considered, with the most important influences being mercantilist theories of value (confusing wealth for money) and theory of mind limits (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society.

(cut)

From the Discussion

Is Win–Win Denial Rational?

The conclusion that voluntary transactions benefit both parties rests on assumptions and can therefore admit exceptions when those assumptions do not hold. Voluntary trades are mutually beneficial when the parties are performing rational, selfish cost–benefit calculations and when there are no critical asymmetries in information (e.g., fraud). There are several ways that violations of these assumptions could lead a transaction not to be win–win. Consumers could have inconsistent preferences over time, such that something believed to be beneficial at one time proves non-beneficial later on (e.g., liking a shirt when one buys it in the store, but growing weary of it after a couple of months). Consumers could have self-control failures, making an impulse purchase that proves unwise in the longer term. Consumers could have other-regarding preferences, buying something that benefits someone else but not oneself. Finally, the consumer could be deceived by a seller who knows that the product will not satisfy their preferences (e.g., a crooked used-car salesman).

These are of course more than theoretical possibilities—many instances of human irrationality have been demonstrated in lab and field studies (Frederick et al., 2009; Loewenstein & Prelec, 1992; Malmendier & Tate, 2005, among many others). The key question is whether the real-world prevalence of irrationality and fraud is sufficient to justify the conclusion that ordinary consumer transactions—like those tested here—are so riddled with incompetence that our participants were right to deny that transactions are typically win–win. We respond to this challenge with four points.

First, an empirical point. It is not just the magnitude of win–win denial that is of interest here, but how this magnitude responds to our experimental manipulations. It is hard to see how the effects of time-framing or of cueing participants to buyers’ reasons would produce the effects that they do, independent of the mechanisms we have proposed for win–win denial (namely mercantilism and theory of mind). It is especially difficult to see why people would claim that barters make neither party better off if the issue is exploitation. Thus, even if the magnitude of the effects is reasonable in some conditions of some of our experiments because people’s intuitions are attuned to the (allegedly) large extent of market failures, some of the patterns we see and the differences in these patterns across conditions seem to necessitate the mechanisms we propose.

Second, a sanity check. We tested intuitions about a range of typical consumer transactions in our items, finding consistent effects across items (see Part A of the Supplementary Materials). Is it really that plausible that people are impulsively hiring plumbers or that their hair stylists are routinely fraudsters? If such ordinary transactions are actually making consumers worse off, it is very difficult to see how the rise of market economies has brought prosperity to much of the world—indeed, if win–win denial correctly describes most consumer transactions, one should predict a negative relationship between well-being and economic activity (contradicting the large association between subjective well-being and per capita income across countries; Stevenson & Wolfers, 2013). In our view, one can acknowledge occasional consumer irrationalities while not thereby concluding that all or most market activity is irrational, which, we submit, would fly in the face of both economic science and common sense. Indeed, to claim that consumers are consistently irrational threatens paradox: The more one thinks that consumers are irrational in general, the more one must believe that participants in the current experiments are (rationally) attuned to their own irrationality.