Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, January 17, 2023

Deeply Rational Reasons for Irrational Beliefs

Barlev, M., & Neuberg, S. L. (2022, December 7).
PsyArXiv
https://doi.org/10.31234/osf.io/avcq2

Abstract

Why do people hold irrational beliefs? Two accounts predominate. The first spotlights the information ecosystem and how people process this information; this account either casts those who hold irrational beliefs as cognitively deficient or focuses on the reasoning and decision-making heuristics all people use. The second account spotlights an inwardly-oriented and proximate motivation people have to enhance how they think and feel about themselves. Here, we advance a complementary, outwardly-oriented, and more ultimate account—that people often hold irrational beliefs for evolutionarily rational reasons. Under this view, irrational beliefs may serve as rare and valued information with which to rise in prestige, as signals of group commitment and loyalty tests, as ammunition with which to derogate rivals in the eyes of third-parties, or as outrages with which to mobilize the group toward shared goals. Thus, although many beliefs may be epistemically irrational, they may also be evolutionarily rational from the perspective of the functions they are adapted to serve. We discuss the implications of this view for puzzling theoretical phenomena and for changing problematic irrational beliefs.

Conclusions

Why do we hold irrational beliefs that often are not only improbable, but impossible? According to some, the information ecosystem is to blame, paired with deficiencies in how people process information or with heuristic modes of processing. According to others, it is because certain beliefs—regardless of their veracity—can enhance how we think and feel about ourselves. We suggest that such accounts are promising but incomplete: many irrational beliefs exist because they serve crucial interpersonal (and more ultimate rather than proximal) functions.

We have argued that many irrational beliefs are generated, entertained, and propagated by psychological mechanisms specialized for rising in prestige, signaling group commitment and testing group loyalty, derogating disliked competitors in the eyes of third-parties, or spreading common knowledge and coordination toward shared goals. Thus, although many beliefs are epistemically irrational, they can be evolutionarily rational from the perspective of the functions they are adapted to serve.

Is it not costly to individuals to hold epistemically irrational beliefs? Sometimes. Jehovah's Witnesses reject life-saving blood transfusions, a belief most consider to be very costly, which is why courts sometimes compel transfusions, as in cases involving children. Yet even here, the benefits to individuals of carrying such costly beliefs may outweigh their costs, at least for some. For example, if such beliefs are designed to signal group commitment, they might emerge among particularly devout members of groups or among groups in which the need to signal commitment is particularly strong; the costlier the belief, the more honest a signal of group commitment it is (Petersen et al., 2021). However, such cases are the exception—most of the irrational beliefs people hold tend to be inferentially isolated and behaviorally inert. For example, the belief that God the Father, the Son, and the Holy Spirit are one may function for a Christian as a signal of group affiliation and commitment, without carrying for the individual many costly inferences or behavioral implications (Petersen et al., 2021; Mercier, 2020).

Sunday, October 18, 2020

Beliefs have a social purpose. Does this explain delusions?

Anna Greenburgh
psyche.co
Originally published 

Here is an excerpt:

Of course, just because a delusion has logical roots doesn’t mean it’s helpful for the person once it takes hold. Indeed, this is why delusions are an important clinical issue. Delusions are often conceptualised as sitting at the extreme end of a continuum of belief, but how can they be distinguished from other beliefs? If not irrationality, then what demarcates a delusion?

Delusions are fixed, unchanging in the face of contrary evidence, and not shared by the person’s peers. In light of the social function of beliefs, these preconditions have added significance. The coalitional model underlines that beliefs arising from adaptive cognitive processes should show some sensitivity to social context and enable successful social coordination. Delusions lack this social function and adaptability. Clinical psychologists have documented the fixity of delusional beliefs: they are more resistant to change than other types of belief, and are intensely preoccupying, regardless of the social context or interpersonal consequences. In both ‘The Yellow Wallpaper’ and the novel Don Quixote (1605-15) by Miguel de Cervantes, the protagonists’ beliefs about their surroundings are unchangeable and, if anything, become increasingly intense and disruptive. It is this inflexibility to social context, once they take hold, that sets delusions apart from other beliefs.

Across the field of mental health, research showing the importance of the social environment has spurred a great shift in the way that clinicians interact with patients. For example, research exposing the link between trauma and psychosis has resulted in more compassionate, person-centred approaches. The coalitional model of delusions can now contribute to this movement. It opens up promising new avenues of research, which integrate our fundamental social nature and the social function of belief formation. It can also deepen how people experiencing delusions are understood – instead of contributing to stigma by dismissing delusions as irrational, it considers the social conditions that gave rise to such intensely distressing beliefs.

Wednesday, February 13, 2019

The Art of Decision-Making

Joshua Rothman
The New Yorker
Originally published January 21, 2019

Here is an excerpt:

For centuries, philosophers have tried to understand how we make decisions and, by extension, what makes any given decision sound or unsound, rational or irrational. “Decision theory,” the destination on which they’ve converged, has tended to hold that sound decisions flow from values. Faced with a choice—should we major in economics or in art history?—we first ask ourselves what we value, then seek to maximize that value.

From this perspective, a decision is essentially a value-maximizing equation. If you’re going out and can’t decide whether to take an umbrella, you could come to a decision by following a formula that assigns weights to the probability of rain, the pleasure you’ll feel in strolling unencumbered, and the displeasure you’ll feel if you get wet. Most decisions are more complex than this, but the promise of decision theory is that there’s a formula for everything, from launching a raid in Abbottabad to digging an oil well in the North Sea. Plug in your values, and the right choice pops out.
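To make the umbrella example concrete, here is a minimal sketch of the expected-utility arithmetic that decision theory prescribes. The probability and utility values are hypothetical, invented purely for illustration:

```python
# A minimal sketch of the value-maximizing calculation described above.
# All numbers are assumptions chosen only to illustrate the formula.

P_RAIN = 0.3  # assumed probability of rain

# Utilities (arbitrary units) for each outcome of each action.
utilities = {
    "take umbrella": {"rain": 0.0, "dry": -1.0},   # carrying it is a mild nuisance
    "go without":    {"rain": -10.0, "dry": 2.0},  # soaked is bad; unencumbered is pleasant
}

def expected_utility(action: str) -> float:
    """Weight each outcome's utility by its probability and sum."""
    outcomes = utilities[action]
    return P_RAIN * outcomes["rain"] + (1 - P_RAIN) * outcomes["dry"]

best = max(utilities, key=expected_utility)
for action in utilities:
    print(f"{action}: EU = {expected_utility(action):+.2f}")
print(f"Decision theory's answer: {best}")
```

With these particular numbers, carrying the umbrella wins; change the weights and the "right" choice changes with them, which is precisely the promise (and the limitation) of plugging values into a formula.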

In recent decades, some philosophers have grown dissatisfied with decision theory. They point out that it becomes less useful when we’re unsure what we care about, or when we anticipate that what we care about might shift.

The info is here.

Thursday, May 24, 2018

Is there a universal morality?

Massimo Pigliucci
The Evolution Institute
Originally posted March 2018

Here is the conclusion:

The first bit means that we are all deeply inter-dependent on other people. Despite the fashionable nonsense, especially in the United States, about “self-made men” (they are usually men), there actually is no such thing. Without social bonds and support our lives would be, as Thomas Hobbes famously put it, solitary, poor, nasty, brutish, and short. The second bit, the one about intelligence, does not mean that we always, or even often, act rationally. Only that we have the capability to do so. Ethics, then, especially (but not only) for the Stoics becomes a matter of “living according to nature,” meaning not to endorse whatever is natural (that’s an elementary logical fallacy), but rather to take seriously the two pillars of human nature: sociality and reason. As Marcus Aurelius put it, “Do what is necessary, and whatever the reason of a social animal naturally requires, and as it requires.” (Meditations, IV.24)

There is something, of course, the ancients did get wrong: they, especially Aristotle, thought that human nature was the result of a teleological process, that everything has a proper function, determined by the very nature of the cosmos. We don’t believe that anymore, not after Copernicus and especially Darwin. But we do know that human beings are indeed a particular product of complex and ongoing evolutionary processes. These processes do not determine a human essence, but they do shape a statistical cluster of characters that define what it means to be human. That cluster, in turn, constrains — without determining — what sort of behaviors are pro-social and lead to human flourishing, and what sort of behaviors don’t. And ethics is the empirically informed philosophical enterprise that attempts to understand and articulate that distinction.

The information is here.

Friday, November 17, 2017

Going with your gut may mean harsher moral judgments

Jeff Sossamon
www.futurity.org
Originally posted November 2, 2017

Going with your intuition could make you judge others’ moral transgressions more harshly and keep you from changing your mind, even after considering all the facts, a new study suggests.

The findings show that people who strongly rely on intuition automatically condemn actions they perceive to be morally wrong, even if there is no actual harm.

In psychology, intuition, or “gut instinct,” is defined as the ability to understand something immediately, without the need for reasoning.

“It is now widely acknowledged that intuitive processing influences moral judgment,” says Sarah Ward, a doctoral candidate in social and personality psychology at the University of Missouri.

“We thought people who were more likely to trust their intuition would be more likely to condemn things that are shocking, whereas people who don’t rely on gut feelings would not condemn these same actions as strongly,” Ward says.

Ward and Laura King, professor of psychological sciences, had study participants read through a series of scenarios and judge whether the action described was wrong, such as an individual giving a partner a gift that had previously been purchased for an ex.

The article is here.

Tuesday, March 28, 2017

Why We Believe Obvious Untruths

Philip Fernbach & Steven Sloman
The New York Times
Originally published March 3, 2017

How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism.

The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.”

Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.

What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats.

Wednesday, December 7, 2016

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Tomas Ståhl, Maarten P. Zaal, Linda J. Skitka
PLOS One
Published: November 16, 2016

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.
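For readers unfamiliar with the psychometric terms in the abstract, here is a minimal sketch, using simulated data rather than the authors' actual dataset, of how internal consistency (Cronbach's alpha) and test-retest reliability are typically computed:

```python
# A minimal sketch (not the authors' code) of the two reliability figures
# mentioned in the abstract. All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents answering a 9-item Likert scale (1-7),
# with a shared latent trait so the items correlate.
n, k = 200, 9
trait = rng.normal(0, 1, size=(n, 1))
items = np.clip(np.round(4 + 1.2 * trait + rng.normal(0, 1, size=(n, k))), 1, 7)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Test-retest reliability: correlate total scores from two administrations
# (the second occasion is simulated as the first plus noise).
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(0, 2, size=n)
retest_r = np.corrcoef(time1, time2)[0, 1]

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Test-retest r:    {retest_r:.2f}")
```

By convention, alpha values around .80 or higher are read as good internal consistency, while the test-retest correlation indexes the scale's stability over time.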

Friday, October 7, 2016

The Difference Between Rationality and Intelligence

David Hambrick and Alexander Burgoyne
The New York Times
Originally published September 16, 2016

Here is an excerpt:

Professor Morewedge and colleagues found that the computer training led to statistically large and enduring decreases in decision-making bias. In other words, the subjects were considerably less biased after training, even after two months. The decreases were larger for the subjects who received the computer training than for those who received the video training (though decreases were also sizable for the latter group). While there is scant evidence that any sort of “brain training” has any real-world impact on intelligence, it may well be possible to train people to be more rational in their decision making.

The article is here.