Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, November 21, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Here is my summary:

The authors argue that many different biases, such as the bias blind spot, hostile media bias, egocentric/ethnocentric bias, and outcome bias, can be traced back to the combination of a fundamental prior belief and humans' tendency toward belief-consistent information processing.

Belief-consistent information processing is the process of attending to, interpreting, and remembering information in a way that is consistent with one's existing beliefs. This process can lead to biases when it results in people ignoring or downplaying information that is inconsistent with their beliefs, and giving undue weight to information that is consistent with their beliefs.

The authors propose that different biases can be distinguished by the specific belief that guides information processing. For example, the bias blind spot is characterized by the belief that one is less biased than others, while hostile media bias is characterized by the belief that the media is biased against one's own group. However, the authors also argue that different biases may share the same underlying belief, and differ only in the specific outcome of information processing that is assessed. For example, both the bias blind spot and hostile media bias may involve the belief that one is more objective than others, but the bias blind spot is assessed in the context of self-evaluations, while hostile media bias is assessed in the context of evaluations of others.

The authors' framework has several advantages over existing theoretical explanations of biases. First, it provides a more parsimonious explanation for a wide range of biases. Second, it generates novel hypotheses that can be tested empirically. For example, the authors hypothesize that people who exhibit one bias will also tend to exhibit other biases that rest on the same underlying belief. Third, the framework has implications for interventions to reduce biases. For example, the authors suggest that interventions could focus on helping people become more aware of their own biases and develop strategies for resisting the tendency toward belief-consistent information processing.

Friday, September 15, 2023

Older Americans are more vulnerable to prior exposure effects in news evaluation.

Lyons, B. A. (2023). 
Harvard Kennedy School Misinformation Review.

Abstract

Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.

Essay Summary
  • I used three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of actual mainstream, hyperpartisan, or false political headlines. Respondents saw a sample of headlines in the first wave and all headlines in the second wave, allowing me to determine if prior exposure increases perceived accuracy differentially across age.  
  • I found that the effect of prior exposure to headlines on perceived accuracy increases with age. The effect increases linearly with age, with the strongest effect for those in the oldest age cohort (60+). These age differences were most pronounced for false news.
  • These findings suggest that repeated exposure can help account for the positive relationship between age and sharing false information online. However, the size of this effect also underscores that other factors (e.g., greater motivation to derogate the out-party) may play a larger role. 
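The design described above boils down to testing an exposure × age interaction: does seeing a headline in wave 1 raise its perceived accuracy in wave 2 more for older respondents? The toy simulation below illustrates that kind of analysis with made-up coefficients and simulated data; it is a hypothetical sketch, not the paper's actual model, variables, or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation of the design: respondents rate headlines in
# wave 2, some of which they already saw in wave 1 ("prior exposure").
# Perceived accuracy rises with exposure, and the exposure effect grows
# with age (the interaction the study tests). All coefficients here are
# illustrative, not estimates from the paper.
n = 20_000
age = rng.uniform(18, 80, n)        # respondent age in years
exposed = rng.integers(0, 2, n)     # 1 = headline already seen in wave 1
noise = rng.normal(0, 0.5, n)

b_exposure, b_age, b_interact = 0.10, 0.002, 0.004
accuracy = (2.0 + b_exposure * exposed + b_age * age
            + b_interact * exposed * (age - 18) + noise)

# Recover the coefficients with ordinary least squares; a positive
# interaction term means the prior-exposure effect increases with age.
X = np.column_stack([np.ones(n), exposed, age, exposed * (age - 18)])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

print(f"estimated exposure x age interaction: {coef[3]:.4f}")
```

With enough observations the regression recovers the simulated interaction, which is the qualitative pattern the essay reports: the strongest exposure effect in the oldest cohort.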
From the beginning of the Implications section:

Web-tracking and social media trace data paint a concerning portrait of older news users. Older American adults were much more likely to visit dubious news sites in 2016 and 2020 (Guess, Nyhan, et al., 2020; Moore et al., 2023), and were also more likely to be classified as false news “supersharers” on Twitter, a group who shares the vast majority of dubious news on the platform (Grinberg et al., 2019). Likewise, this age group shares about seven times more links to these domains on Facebook than younger news consumers (Guess et al., 2019; Guess et al., 2021). 

Interestingly, however, older adults appear to be no worse, if not better, at identifying false news stories than younger cohorts when asked in surveys (Brashier & Schacter, 2020). Why might older adults identify false news in surveys but fall for it “in the wild?” There are likely multiple factors at play, ranging from social changes across the lifespan (Brashier & Schacter, 2020) to changing orientations to politics (Lyons et al., 2023) to cognitive declines (e.g., in memory) (Brashier & Schacter, 2020). In this paper, I focus on one potential contributor. Specifically, I tested the notion that differential effects of prior exposure to false news helps account for the disjuncture between older Americans’ performance in survey tasks and their behavior in the wild.

A large body of literature has been dedicated to exploring the magnitude and potential boundary conditions of the illusory truth effect (Hassan & Barber, 2021; Henderson et al., 2021; Pillai & Fazio, 2021)—a phenomenon in which false statements or news headlines (De keersmaecker et al., 2020; Pennycook et al., 2018) come to be believed over multiple exposures. Might this effect increase with age? As detailed by Brashier and Schacter (2020), cognitive deficits are often blamed for older news users’ behaviors. This may be because cognitive abilities are strongest in young adulthood and slowly decline beyond that point (Salthouse, 2009), resulting in increasingly effortful cognition (Hess et al., 2016). As this process unfolds, older adults may be more likely to fall back on heuristics when judging the veracity of news items (Brashier & Marsh, 2020). Repetition, the source of the illusory truth effect, is one heuristic that may be relied upon in such a scenario. This is because repeated messages feel easier to process and thus are seen as truer than unfamiliar ones (Unkelbach et al., 2019).

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Sunday, February 5, 2023

I’m a psychology expert in Finland, the No. 1 happiest country in the world—here are 3 things we never do

Frank Martela
CNBC.com
Originally posted 5 Jan 23

For five years in a row, Finland has ranked No. 1 as the happiest country in the world, according to the World Happiness Report. 

In the 2022 report, people in 156 countries were asked to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0.” The report also considers factors such as social support, life expectancy, generosity, and absence of corruption.

As a Finnish philosopher and psychology researcher who studies the fundamentals of happiness, I’m often asked: What exactly makes people in Finland so exceptionally satisfied with their lives?

To maintain a high quality of life, here are three things we never do:

1. We don’t compare ourselves to our neighbors.

Focus more on what makes you happy and less on looking successful. The first step to true happiness is to set your own standards, instead of comparing yourself to others.

2. We don’t overlook the benefits of nature.

Spending time in nature increases our vitality and well-being and gives us a sense of personal growth. Find ways to add some greenery to your life, even if it’s just buying a few plants for your home.

3. We don’t break the community circle of trust.

Think about how you can show up for your community. How can you create more trust? How can you support policies that build upon that trust? Small acts, like opening doors for strangers or giving up a seat on the train, make a difference, too.

Thursday, January 19, 2023

Things could be better

Mastroianni, A., & Ludwin-Peery, E. 
(2022, November 14). 
https://doi.org/10.31234/osf.io/2uxwk

Abstract

Eight studies document what may be a fundamental and universal bias in human imagination: people think things could be better. When we ask people how things could be different, they imagine how things could be better (Study 1). The bias doesn't depend on the wording of the question (Studies 2 and 3). It arises in people's everyday thoughts (Study 4). It is unrelated to people's anxiety, depression, and neuroticism (Study 5). A sample of Polish people responding in English show the same bias (Study 6), as do a sample of Chinese people responding in Mandarin (Study 7). People imagine how things could be better even though it's easier to come up with ways things could be worse (Study 8). Overall, it seems, human imagination has a bias: when people imagine how things could be, they imagine how things could be better.

(cut)

Why Does Human Imagination Work Like This?

Honestly, who knows. Brains are weird, man.

When all else fails, we can always turn to natural selection: maybe this bias helped our ancestors survive. Hungry, rain-soaked hunter-gatherers imagined food in their bellies and roofs over their heads and invented agriculture and architecture. Once warm and full, they out-reproduced their brethren who were busy imagining how much hungrier and wetter they could be.

But really, this is a mystery. We may have uncovered something fundamental about how human imagination works, but it might be a long time before we understand it.

Perhaps This is Why You Can Never Be Happy

Everybody knows about the hedonic treadmill: once you’re moderately happy, it’s hard to get happier. But nobody has ever really explained why this happens. People say things like, “oh, you get used to good things,” but that’s just a description, not an explanation. Why do people get used to good things?

Now we might have an answer: people get used to good things because they’re always imagining how things could be better. So even if things get better, you might not feel better. When you live in a cramped apartment, you dream of getting a house. When you get a house, you dream of a second house. Or you dream of lower property taxes. Or a hot tub. Or two hot tubs. And so on, forever.

Saturday, November 12, 2022

Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information

Litovsky, Y., Loewenstein, G., et al.
PNAS, Vol. 119 | No. 34
August 23, 2022

Abstract

We often talk about interacting with information as we would with a physical good (e.g., “consuming content”) and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., “holding on to” or “letting go of” our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory—loss aversion and different risk preferences for gains versus losses—also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more, simply by virtue of “owning” it. Study 3 provides a conceptual replication of the classic “Asian Disease” gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.

Significance

We build on Abelson and Prentice’s conjecture that beliefs are not merely valued as guides to interacting with the world, but as cherished possessions. Extending this idea to information, we show that three key phenomena which characterize the valuation of money and material goods—loss aversion, the endowment effect, and the gain-loss framing effect—also apply to noninstrumental information. We discuss, more generally, how the analogy between noninstrumental information and material goods can help make sense of the complex ways in which people deal with the huge expansion of available information in the digital age.

From the Discussion

Economists have traditionally treated the value of information as derivative of its consequences for decision-making. While prior research on noninstrumental information has shown that this narrow view of information may be incomplete, only a few accounts have attempted to explain intrinsic preferences for information. One such account argues that people seek (or avoid) information inasmuch as doing so helps them maintain their cherished beliefs. Another proposes that people choose which information to seek or avoid by considering how it will impact their actions, affect, and cognition. Yet, outside of the curiosity literature, no existing account of information valuation considers preferences for information that has neither instrumental nor (concrete) hedonic value. By showing that key features of Prospect Theory’s value function also apply to individuals’ valuation of (even noninstrumental) information, the current paper suggests that we may also value information in some of the same fundamental ways that we value physical goods.
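The Prospect Theory value function the paper builds on can be written out explicitly. The sketch below uses the standard Kahneman–Tversky functional form with the common textbook parameter estimates (α = β = 0.88, λ = 2.25); these values are illustrative defaults, not parameters fitted to the information-valuation data in this study.

```python
# The canonical Prospect Theory value function, shown only to make
# "loss aversion" concrete: value is concave over gains, convex and
# steeper (by the factor lam > 1) over losses. Parameter values are
# standard textbook estimates, not fits to this paper's data.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha           # concave over gains
    return -lam * (-x) ** beta      # convex, steeper over losses

# Loss aversion: losing 10 units hurts more than gaining 10 units helps.
print(pt_value(10))    # ~7.59
print(pt_value(-10))   # ~-17.07
```

On the paper's account, the same asymmetry applies when x is a unit of noninstrumental information rather than money: framing the identical outcome as a loss of information makes it loom larger than framing it as a forgone gain.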

Friday, May 13, 2022

How Other- and Self-Compassion Reduce Burnout through Resource Replenishment

Kira Schabram and Yu Tse Heng
Academy of Management Journal, Vol. 65, No. 2

Abstract

The average employee feels burnt out, a multidimensional state of depletion likely to persist without intervention. In this paper, we consider compassion as an agentic action by which employees may replenish their own depleted resources and thereby recover. We draw on conservation of resources theory to examine the resource-generating power of two distinct expressions of compassion (self- and other-directed) on three dimensions of burnout (exhaustion, cynicism, inefficacy). Utilizing two complementary designs—a longitudinal field survey of 130 social service providers and an experience sampling methodology with 100 business students across 10 days—we find a complex pattern of results indicating that both compassion expressions have the potential to generate salutogenic resources (self-control, belonging, self-esteem) that replenish different dimensions of burnout. Specifically, self-compassion remedies exhaustion and other-compassion remedies cynicism—directly or indirectly through resources—while the effects of self- and other-compassion on inefficacy vary. Our key takeaway is that compassion can indeed contribute to human sustainability in organizations, but only when the type of compassion provided generates resources that fit the idiosyncratic experience of burnout.

From the Discussion Section

Our work suggests a more immediate benefit, namely that giving compassion can serve an important resource generative function for the self. Indeed, in neither of our studies did we find either compassion expression to ever have a deleterious effect. While this is in line with the broader literature on self-compassion (Neff, 2011), it is somewhat surprising when it comes to other-compassion. Hobfoll (1989) speculated that when people find themselves depleted, giving support to others should sap them further and such personal costs have been identified in previously cited research on prosocial gestures (Bolino & Grant, 2016; Lanaj et al., 2016; Uy et al., 2017). Why then did other-compassion serve a singularly restorative function? As we noted in our literature review, compassion is distinguished among the family of prosocial behaviors by its principal attendance to human needs (Tsui, 2013) rather than organizational effectiveness, and this may offer an explanation. Perhaps, there is something fundamentally more beneficial for actors about engaging in acts of kindness and care (e.g. taking someone who is having a hard time out for coffee) than in providing instrumental support (e.g. exerting oneself to provide a friendly review). We further note that our study also did not find any evidence of ‘compassion fatigue’ (Figley, 2013), identified frequently by practitioners among the social service employees that comprised our first sample. In line with the ‘desperation corollary’ of COR (Hobfoll et al., 2018), which suggests that individuals can reach a state of extreme depletion characterized by maladaptive coping, it may be that there exists a tipping point after which compassion ceases to offer benefits. If there is, however, it must be quite high to not have registered in either the longitudinal or diary designs. 

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity have proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 

Sunday, January 23, 2022

Free will beliefs are better predicted by dualism than determinism beliefs across different cultures

Wisniewski D, Deutschländer R, Haynes J-D 
(2019) PLoS ONE 14(9): e0221617. 
https://doi.org/10.1371/journal.pone.0221617

Abstract

Most people believe in free will. Whether this belief is warranted or not, free will beliefs (FWB) are foundational for many legal systems and reducing FWB has effects on behavior from the motor to the social level. This raises the important question as to which specific FWB people hold. There are many different ways to conceptualize free will, and some might see physical determinism as a threat that might reduce FWB, while others might not. Here, we investigate lay FWB in a large, representative, replicated online survey study in the US and Singapore (n = 1800), assessing differences in FWB with unprecedented depth within and between cultures. Specifically, we assess the relation of FWB, as measured using the Free Will Inventory, to determinism, dualism and related concepts like libertarianism and compatibilism. We find that libertarian, compatibilist, and dualist intuitions were related to FWB, but that these intuitions were often logically inconsistent. Importantly, direct comparisons suggest that dualism was more predictive of FWB than other intuitions. Thus, believing in free will goes hand-in-hand with a belief in a non-physical mind. Highlighting the importance of dualism for FWB impacts academic debates on free will, which currently largely focus on its relation to determinism. Our findings also shed light on how recent (neuro)scientific findings might impact FWB. Demonstrating physical determinism in the brain need not have a strong impact on FWB, due to a widespread belief in dualism.

Conclusion

We have shown that free will beliefs in the general public are most closely related to a strong belief in dualism. This was true across different cultures, age groups, and levels of education. As noted in the beginning, recent neuroscientific findings have been taken to suggest that our choices might originate from unconscious brain activity, which has led some to predict an erosion of free will beliefs, with potentially serious consequences for our sense of responsibility and even the criminal justice system. However, even if neuroscience were to fully describe and explain the causal chain of processes in the physical brain, this need not lead to an erosion of free will beliefs in the general public. Although some might indeed see such findings as a threat to free will (e.g., US citizens with low dualism beliefs), most likely will not, because of a widespread belief in dualism (see also [21]). Our findings also highlight the need for cross-cultural examinations of free will beliefs and related constructs, as previous findings from (mostly undergraduate) US samples do not fully generalize to other cultures.

Wednesday, January 19, 2022

On the Harm of Imposing Risk of Harm.

Maheshwari, K. (2021)
Ethic Theory Moral Prac 24, 965–980

Abstract

What is wrong with imposing pure risks, that is, risks that don't materialize into harm? According to a popular response, imposing pure risks is pro tanto wrong, when and because risk itself is harmful. Call this the Harm View. Defenders of this view make one of the following two claims. On the Constitutive Claim, pure risk imposition is pro tanto wrong when and because risk constitutes a diminishment of one's well-being, viz. preference-frustration or a setting-back of one's legitimate interest in autonomy. On the Contingent Claim, pure risk imposition is pro tanto wrong when and because risk has harmful consequences for the risk-bearers, such as psychological distress. This paper argues that the Harm View is plausible only on the Contingent Claim but fails on the Constitutive Claim. In discussing the latter, I argue that both the preference and autonomy accounts fail to show that risk itself is constitutively harmful and thereby wrong. In discussing the former, I argue that risk itself is contingently harmful and thereby wrong, but only in a narrow range of cases. I conclude that while the Harm View can sometimes explain the wrong of imposing risk when (and because) risk itself is contingently harmful, it is unsuccessful as a general, exhaustive account of what makes pure risk imposition wrong.

Conclusions

In this paper, I have engaged in a detailed discussion of a prominent view in the ethics of risk imposition, namely the Harm View. I've argued that the Harm View is plausible only on the Contingent Claim but fails on the Constitutive Claim. In discussing the Constitutive Claim, I've argued that the preference and autonomy accounts, as construed by Finkelstein (2003) and Oberdiek (2017) respectively, fail to show that risk itself is constitutively harmful and thereby wrong. In vindicating the idea that risk itself is constitutively harmful, both accounts either trivialize or undermine the moral significance of risk, or carry counter-intuitive implications in cases where risks materialize. In discussing the Contingent Claim, I've argued that risk itself is contingently harmful and thereby wrong only in a narrow range of cases. This makes the Harm View explanatorily limited in scope, thereby undermining its plausibility as a general, exhaustive account of what makes pure risk imposition wrong.

Sunday, December 5, 2021

The psychological foundations of reputation-based cooperation

Manrique, H., et al. (2021, June 2).
https://doi.org/10.1098/rstb.2020.0287

Abstract

Humans care about having a positive reputation, which may prompt them to help in scenarios where the return benefits are not obvious. Various game-theoretical models support the hypothesis that concern for reputation may stabilize cooperation beyond kin, pairs or small groups. However, such models are not explicit about the underlying psychological mechanisms that support reputation-based cooperation. These models therefore cannot account for the apparent rarity of reputation-based cooperation in other species. Here we identify the cognitive mechanisms that may support reputation-based cooperation in the absence of language. We argue that a large working memory enhances the ability to delay gratification, to understand others' mental states (which allows for perspective-taking and attribution of intentions), and to create and follow norms, which are key building blocks for increasingly complex reputation-based cooperation. We review the existing evidence for the appearance of these processes during human ontogeny as well as their presence in non-human apes and other vertebrates. Based on this review, we predict that most non-human species are cognitively constrained to show only simple forms of reputation-based cooperation.

Discussion

We have presented four basic psychological building blocks that we consider important facilitators for complex reputation-based cooperation: working memory, delay of gratification, theory of mind, and social norms. Working memory allows for parallel processing of diverse information, to properly assess others' actions and update their reputation scores. Delay of gratification is useful for many types of cooperation, but may be particularly relevant for reputation-based cooperation, where the returns come from a future interaction with an observer rather than an immediate reciprocation by one's current partner. Theory of mind makes it easier to properly assess others' actions and reduces the risk that spreading errors will undermine cooperation. Finally, norms support theory of mind by giving individuals a benchmark of what is right or wrong. The more developed each of these building blocks is, the more complex the interaction structure can become. We are aware that by picking these four socio-cognitive mechanisms we leave out other processes that might be involved, e.g. long-term memory, yet we think the ones we picked are more critical and better allow for comparison across species.

Wednesday, November 17, 2021

False Polarization: Cognitive Mechanisms and Potential Solutions

Fernbach PM, Van Boven L
Current Opinion in Psychology
https://doi.org/10.1016/j.copsyc.2021.06.005

Abstract

While political polarization in the United States is real, intense and increasing, partisans consistently overestimate its magnitude. This “false polarization” is insidious because it reinforces actual polarization and inhibits compromise. We review empirical research on false polarization and the related phenomenon of negative meta-perceptions, and we propose three cognitive and affective processes that likely contribute to these phenomena: categorical thinking, oversimplification and emotional amplification. Finally, we review several interventions that have shown promise in mitigating these biases. 

From the Solutions Section

Another idea is to encourage citizens to engage in deeper discourse about the issues than is the norm. One way to do this is through a "consensus conference," where people on opposing sides of issues are brought together along with topic experts to learn and discuss over the course of hours or days, with the goal of coming to an agreement. The depth of analysis cuts against the tendency to oversimplify, and the face-to-face nature diminishes categorical thinking by highlighting individuality. The challenge of consensus conferences is scalability: they are resource-intensive. However, a recent study showed that simply telling people about the outcome of a consensus conference can yield some of the beneficial effects.

The amplifying effects of anger can be targeted by emotional reappraisal through the lens of sadness: people who were induced to states of sadness rather than anger exhibited lower polarization and false polarization in the context of Hurricane Katrina and a mass shooting. In another study, induced sadness increased people's willingness to negotiate and their openness to opponents' perspectives. Sadness reappraisals are feasible in many challenging contexts involving threats to health and security, such as the COVID-19 pandemic, that are readily interpreted as saddening or angering.

Monday, November 8, 2021

What the mind is

B. F. Malle
Nature - Human Behaviour
Originally published 26 Aug 21

Humans have a conception of what the mind is. This conception takes mind to be a set of capacities, such as the ability to be proud or feel sad, to remember or to plan. Such a multifaceted conception allows people to ascribe mind in varying degrees to humans, animals, robots or any other entity [1,2]. However, systematic research on this conception of mind has so far been limited to Western populations. A study by Weisman and colleagues [3] published in Nature Human Behaviour now provides compelling evidence for some cross-cultural universals in the human understanding of what the mind is, as well as revealing intercultural variation.

(cut)

As with all new findings, readers must be alert and cautious in the conclusions they draw. We may not conclude with certainty that these are the three definitive dimensions of human mind perception, because the 23 mental capacities featured in the study were not exhaustive; in particular, they did not encompass two important domains — morality and social cognition. Moral capacities are central to social relations, person perception and identity; likewise, people care deeply about the capacity to empathize and understand others’ thoughts and feelings. Yet the present study lacked items to capture these domains. When items for moral and social–cognitive capacities have been included in past US studies, they formed a strong separate dimension, while emotions shifted toward the Experience dimension. 

Incorporating moral–social capacities in future studies may strengthen the authors' findings. Morality and social cognition are credible candidates for cultural universals, so their inclusion could make cross-cultural stability of mind perception even more decisive. Moreover, inclusion of these important mental capacities might clarify one noteworthy cultural divergence in the data: the fact that adults in Ghana and Vanuatu combined the emotional and perceptual-cognitive dimensions. Without the contrast to social–moral capacities, emotion and cognition might have been similar enough to move toward each other. Including social–moral capacities in future studies could provide a contrasting and dividing line, which would pull emotion and cognition apart. The results might then be even more consistent across cultures.

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021

Abstract

Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people's judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Sunday, October 17, 2021

The Cognitive Science of Technology

D. Stout
Trends in Cognitive Sciences
Available online 4 August 2021

Abstract

Technology is central to human life but hard to define and study. This review synthesizes advances in fields from anthropology to evolutionary biology and neuroscience to propose an interdisciplinary cognitive science of technology. The foundation of this effort is an evolutionarily motivated definition of technology that highlights three key features: material production, social collaboration, and cultural reproduction. This broad scope respects the complexity of the subject but poses a challenge for theoretical unification. Addressing this challenge requires a comparative approach to reduce the diversity of real-world technological cognition to a smaller number of recurring processes and relationships. To this end, a synthetic perceptual-motor hypothesis (PMH) for the evolutionary–developmental–cultural construction of technological cognition is advanced as an initial target for investigation.

Highlights
  • Evolutionary theory and paleoanthropological/archaeological evidence motivate a theoretical definition of technology as socially reproduced and elaborated behavior involving the manipulation and modification of objects to enact changes in the physical environment.
  • This definition helps to resolve or obviate ongoing controversies in the anthropological, neuroscientific, and psychological literature relevant to technology.
  • A review of evidence from across these disciplines reveals that real-world technologies are diverse in detail but unified by the underlying demands and dynamics of material production. This creates opportunities for meaningful synthesis using a comparative method.
  • A ‘perceptual‐motor hypothesis’ proposes that technological cognition is constructed on biocultural evolutionary and developmental time scales from ancient primate systems for sensorimotor prediction and control.

Wednesday, September 15, 2021

Why Is It So Hard to Be Rational?

Joshua Rothman
The New Yorker
Originally published 16 Aug 21

Here is an excerpt:

Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge. Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps. The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes, an eighteenth-century mathematician and minister. So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.

There are many ways to explain Bayesian reasoning—doctors learn it one way and statisticians another—but the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preexisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.

Consider the example of a patient who has tested positive for breast cancer—a textbook case used by Pinker and many other rationalists. The stipulated facts are simple. The prevalence of breast cancer in the population of women—the “base rate”—is one per cent. When breast cancer is present, the test detects it ninety per cent of the time. The test also has a false-positive rate of nine per cent: that is, nine per cent of the time it delivers a positive result when it shouldn’t. Now, suppose that a woman tests positive. What are the chances that she has cancer?

When actual doctors answer this question, Pinker reports, many say that the woman has a ninety-per-cent chance of having it. In fact, she has about a nine-per-cent chance. The doctors have the answer wrong because they are putting too much weight on the new information (the test results) and not enough on what they knew before the results came in—the fact that breast cancer is a fairly infrequent occurrence. To see this intuitively, it helps to shuffle the order of your facts, so that the new information doesn’t have pride of place. Start by imagining that we’ve tested a group of a thousand women: ten will have breast cancer, and nine will receive positive test results. Of the nine hundred and ninety women who are cancer-free, eighty-nine will receive false positives. Now you can allow yourself to focus on the one woman who has tested positive. To calculate her chances of getting a true positive, we divide the number of positive tests that actually indicate cancer (nine) by the total number of positive tests (ninety-eight). That gives us about nine per cent.
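The arithmetic above is Bayes' theorem in disguise. As a check on the excerpt's numbers, here is a minimal sketch in Python (the function name and structure are mine, not from the article):

```python
# Bayes' theorem on the breast-cancer example from the excerpt:
# base rate 1%, sensitivity 90%, false-positive rate 9%.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test)."""
    true_pos = prior * sensitivity                 # truly ill AND positive
    false_pos = (1 - prior) * false_positive_rate  # healthy AND positive
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.01, 0.90, 0.09), 3))  # -> 0.092, about nine per cent
```

Scaled to the excerpt's thousand women, the numerator corresponds to the nine true positives and the denominator to the ninety-eight total positives, giving the same roughly nine-per-cent answer.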

Tuesday, August 10, 2021

The irrationality of transhumanists

Susan B. Levin
iai.tv Issue 9
Originally posted 11 Jan 21

Bioenhancement is among the hottest topics in bioethics today. The most contentious area of debate here is advocacy of “radical” enhancement (aka transhumanism). Because transhumanists urge us to categorically heighten select capacities, above all, rationality, it would be incorrect to say that the possessors of these abilities were human beings: to signal, unmistakably, the transcendent status of these beings, transhumanists call them “posthuman,” “godlike,” and “divine.” For many, the idea of humanity’s technological self-transcendence has a strong initial appeal; that appeal, intensified by transhumanists’ relentless confidence that radical bioenhancement will occur if only we commit adequate resources to the endeavor, yields a viscerally potent combination. On this of all topics, however, we should not let ourselves be ruled by viscera. 

Transhumanists present themselves as the sole rational parties to the debate over radical bioenhancement: merely questioning a dedication to skyrocketing rational capacity or lifespan testifies to one’s irrationality. Scientifically, for this charge of irrationality not to be intellectually perverse, the evidence on transhumanists’ side would have to be overwhelming.

(cut)

Transhumanists are committed to extreme rational essentialism: they treasure the limitless augmentation of rational capacity, treating affect as irrelevant or targeting it (at minimum, the so-called negative variety) for elimination. Further disrupting transhumanists’ fixation with radical cognitive bioenhancement, therefore, is the finding that pharmacological boosts, such as they are, may not be entirely or even mainly cognitive. Motivation may be strengthened, with resulting boosts to subjects’ informational facility. What’s more, being in a “positive” (i.e., happy) mood can impair cognitive performance, while being in a “negative” (i.e., sad) one can strengthen it by, for instance, making subjects more disposed to reject stereotypes. 

Wednesday, June 30, 2021

Extortion, intuition, and the dark side of reciprocity

Bernhard, R., & Cushman, F. A. 
(2021, April 22). 
https://doi.org/10.31234/osf.io/kycwa

Abstract

Extortion occurs when one person uses some combination of threats and promises to extract an unfair share of benefits from another. Although extortion is a pervasive feature of human interaction, it has received relatively little attention in psychological research. To this end, we begin by observing that extortion is structured quite similarly to far better-studied “reciprocal” social behaviors, such as conditional cooperation and retributive punishment. All of these strategies are designed to elicit some desirable behavior from a social partner, and do so by constructing conditional incentives; the main difference is that the desired behavioral response is an unfair or unjust allocation of resources during extortion, whereas it is often a fair or just distribution of resources for reciprocal cooperation and punishment. Thus, we conjecture, a common set of psychological mechanisms may render these strategies successful. We know from prior work that prosocial forms of reciprocity often work best when implemented inflexibly and intuitively, rather than deliberatively. This both affords long-term commitment to the reciprocal strategy, and also signals this commitment to social partners. We argue that, for the same reasons, extortion is likely to depend largely upon inflexible, intuitive psychological processes. Several existing lines of circumstantial evidence support this conjecture.

From the Conclusion

An essential part of our analysis is to characterize strategies, rather than individual behaviors, as "prosocial" or "antisocial". Extortionate strategies can be implemented by behaviors that "help" (as in the case of a manager who gives promotions to those who work uncompensated hours), while prosocial strategies can be implemented by behaviors that harm (as in the case of the CEO who finds out and reprimands this manager). This manner of thinking at the level of strategies, rather than behavior, invites a broader realignment of our perspective on the relationship between intuition and social behavior. If our focus were on individual behaviors, we might have posed the question, "Does intuition support cooperation or defection?". Framed this way, the recent literature could be taken to suggest the answer is "cooperation"—and, therefore, that intuition promotes prosociality. Surely this is often true, but we suggest that intuitive cooperation can also serve antisocial ends. Meanwhile, as we have emphasized, a prosocial strategy such as tit-for-tat (TFT) may benefit from intuitive (reciprocal) defection. Quickly, the question, "Does intuition support cooperation or defection?"—and any implied relationship to the question "Does intuition support prosocial or antisocial behavior?"—begins to look ill-posed.
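The conditional-incentive structure of extortion that the authors describe has a well-known game-theoretic counterpart: Press and Dyson's "extortionate" zero-determinant strategies for the iterated prisoner's dilemma, in which one player enforces an unfair split of the surplus purely through conditional cooperation. The sketch below illustrates that formal result; it is not part of the authors' psychological argument, and the payoff values and parameters (chi, phi) are illustrative choices of mine.

```python
# Press-Dyson "extortionate" zero-determinant strategy for the iterated
# prisoner's dilemma (conventional payoffs T=5, R=3, P=1, S=0). Such a
# strategy enforces s_X - P = chi * (s_Y - P): the extorter's surplus
# over mutual defection is chi times the victim's, extracted purely
# through conditional incentives.
T, R, P, S = 5.0, 3.0, 1.0, 0.0
chi = 3.0          # extortion factor: X claims 3x the surplus
phi = 1.0 / 26.0   # scaling chosen so all probabilities lie in [0, 1]

# X's probability of cooperating given the previous round's outcome,
# indexed from X's perspective: (CC, CD, DC, DD).
p = [1 - phi * (chi - 1) * (R - P),
     1 - phi * ((P - S) + chi * (T - P)),
     phi * ((T - P) + chi * (P - S)),
     0.0]

# Victim Y cooperates unconditionally, so each round ends in CC or DC;
# iterate the resulting Markov chain to its stationary distribution.
dist = [0.25, 0.25, 0.25, 0.25]  # over outcomes (CC, CD, DC, DD)
for _ in range(1000):
    coop = sum(d * px for d, px in zip(dist, p))  # P(X cooperates next)
    dist = [coop, 0.0, 1.0 - coop, 0.0]

payoff_X = dist[0] * R + dist[1] * S + dist[2] * T + dist[3] * P
payoff_Y = dist[0] * R + dist[1] * T + dist[2] * S + dist[3] * P
print(round((payoff_X - P) / (payoff_Y - P), 6))  # -> 3.0, i.e. chi
```

Even against a fully cooperative partner, X here averages about 3.7 per round to Y's 1.9, and the surplus ratio over mutual defection is pinned at chi regardless of Y's strategy: a formal instance of extracting an unfair share via conditional incentives alone.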