Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, August 12, 2023

Teleological thinking is driven by aberrant associations

Corlett, P. R. (2023, June 17).
PsyArXiv preprints
https://doi.org/10.31234/osf.io/wgyqs

Abstract

Teleological thought — the tendency to ascribe purpose to objects and events — is useful in some cases (encouraging explanation-seeking), but harmful in others (fueling delusions and conspiracy theories). What drives maladaptive teleological thinking? A fundamental distinction in how we learn causal relationships between events is whether it can be best explained via associations versus via propositional thought. Here, we propose that directly contrasting the contributions of these two pathways can elucidate where teleological thinking goes wrong. We modified a causal learning task such that we could encourage one pathway over another in different instances. Across experiments (total N=600), teleological tendencies were correlated with delusion-like ideas and uniquely explained by aberrant associative learning, but not by learning via propositional rules. Computational modeling suggested that the relationship between associative learning and teleological thinking can be explained by spurious prediction errors that imbue random events with more significance — providing a new understanding for how humans make meaning of lived events.

From the Discussion section

Teleological thinking has, in previous work, been defined in terms of “beliefs” and “social-cognitive biases”, and indeed carries “reasoning” in its very name (it is used interchangeably with teleological or ‘purpose-based’ reasoning), which is why it might be surprising to learn of its relationship with low-level associative learning rather than with learning via propositional reasoning. The key result across experiments can be summarized as follows: aberrant prediction errors augured weaker non-additive blocking, which predicted tendencies to engage in teleological thinking, which in turn was consistently correlated with distress from delusional thinking. This pattern of results appeared in both behavioral and computational modeling data, and withstood more conservative regression models that accounted for the variance explained by other variables. In other words, the same people who learn more from irrelevant cues or overpredict relationships in the non-additive blocking task (by predicting that cues that should have been “blocked” might also cause allergic reactions) tend to ascribe more purpose to random events, and to experience more distress from delusional beliefs (and thus hold those beliefs in a more patient-like way).
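The blocking effect at the heart of this result is easier to see with a toy simulation. The sketch below uses the classic Rescorla-Wagner delta rule and the standard (additive) blocking design, not the authors' actual non-additive task or their computational model; the function name, learning rate, and trial counts are all illustrative assumptions. A pre-trained cue A already predicts the outcome, so the prediction error on A+B compound trials is near zero and the redundant cue B acquires almost no associative strength. "Aberrant" prediction errors, by contrast, would leave residual error on those trials and let B (the cue that should be blocked) acquire spurious significance.

```python
# Minimal Rescorla-Wagner sketch of cue blocking (illustrative, not the paper's model).

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Delta-rule learning over a list of (cues_present, outcome_occurred) trials.

    alpha: learning rate; lam: asymptotic strength supported by the outcome.
    Returns a dict of associative strengths V for each cue.
    """
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)   # summed prediction from all present cues
        delta = (lam if outcome else 0.0) - prediction  # prediction error
        for c in cues:                                  # shared error updates every present cue
            V[c] = V.get(c, 0.0) + alpha * delta
    return V

# Phase 1: cue A alone is paired with the outcome.
# Phase 2: the A+B compound is paired with the same outcome.
trials = [(["A"], True)] * 20 + [(["A", "B"], True)] * 20
V = rescorla_wagner(trials)
print(f"V(A) = {V['A']:.3f}, V(B) = {V['B']:.3f}")
```

Because A alone already drives the prediction to near `lam` by the end of Phase 1, `V["B"]` stays close to zero: learning about B is "blocked" by the absence of prediction error. A learner with inflated or noisy prediction errors would instead assign B real associative strength, which is the mechanism the paper links to imbuing random events with significance.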


Some thoughts:

The saying "Life is a projective test" suggests that we all see the world through our own unique lens, shaped by our experiences, beliefs, and values. This lens (read: our biases) can lead us to make aberrant associations, seeing patterns and connections that are not actually there.

The authors of the paper found that people who are more likely to engage in teleological thinking are also more likely to make aberrant associations. This suggests that our tendency to see the world in a teleological way may be driven by our own biases and assumptions.

In other words, the way we see the world is not always accurate or objective. It is shaped by our own personal experiences and perspectives. This can lead us to make mistakes, or to see things that are not really there.

The next time you are trying to make sense of something, be aware of your own biases and assumptions; that awareness may help you make better choices.

Thursday, November 14, 2019

Cooperation and Learning in Unfamiliar Situations

McAuliffe, W. H. B., Burton-Chellew, M. N., &
McCullough, M. E. (2019).
Current Directions in Psychological Science, 
28(5), 436–440. https://doi.org/10.1177/0963721419848673

Abstract

Human social life is rife with uncertainty. In any given encounter, one can wonder whether cooperation will generate future benefits. Many people appear to resolve this dilemma by initially cooperating, perhaps because (a) encounters in everyday life often have future consequences, and (b) the costs of alienating oneself from long-term social partners often outweighed the short-term benefits of acting selfishly over our evolutionary history. However, because cooperating with other people does not always advance self-interest, people might also learn to withhold cooperation in certain situations. Here, we review evidence for two ideas: that people (a) initially cooperate or not depending on the incentives that are typically available in their daily lives and (b) also learn through experience to adjust their cooperation on the basis of the incentives of unfamiliar situations. We compare these claims with the widespread view that anonymously helping strangers in laboratory settings is motivated by altruistic desires. We conclude that the evidence is more consistent with the idea that people stop cooperating in unfamiliar situations because they learn that it does not help them, either financially or through social approval.

Conclusion

Experimental economists have long emphasized the role of learning in social decision-making (e.g., Binmore, 1999). However, cooperation researchers have only recently considered how people's past social interactions shape their expectations in novel social situations. An important lesson from the research reviewed here is that people's behavior in any single situation is not necessarily a direct read-out of how selfish or altruistic they are, especially if the situation's incentives differ from what they normally encounter in everyday life.