Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Theory of Mind.

Saturday, June 3, 2023

The illusion of the mind–body divide is attenuated in males.

Berent, I. 
Sci Rep 13, 6653 (2023).
https://doi.org/10.1038/s41598-023-33079-1

Abstract

A large literature suggests that people are intuitive Dualists—they tend to perceive the mind as ethereal, distinct from the body. Here, we ask whether Dualism emanates from within the human psyche, guided, in part, by theory of mind (ToM). Past research has shown that males are poorer mind-readers than females. If ToM begets Dualism, then males should exhibit weaker Dualism, and instead, lean towards Physicalism (i.e., they should view bodies and minds alike). Experiments 1–2 show that males indeed perceive the psyche as more embodied—as more likely to emerge in a replica of one’s body, and less likely to persist in its absence (after life). Experiment 3 further shows that males are less inclined towards Empiricism—a putative byproduct of Dualism. A final analysis confirms that males’ ToM scores are lower, and ToM scores further correlate with embodiment intuitions (in Experiments 1–2). These observations (from Western participants) cannot establish universality, but the association of Dualism with ToM suggests its roots are psychological. Thus, the illusory mind–body divide may arise from the very workings of the human mind.

Discussion

People tend to consider the mind as ethereal, distinct from the body. This intuitive Dualist stance has been demonstrated in adults and children, in Western and non-Western participants, and its consequences for reasoning are widespread.

Why people are putative Dualists, however, is unclear. In particular, one wonders whether Dualism arises only by cultural transmission, or whether the illusion of the mind–body divide can also emerge naturally, from ToM.

To address this question, here, we investigated whether individual differences in ToM capacities, occurring within the neurotypical population—between males and females—are linked to Dualism. Experiments 1–2 show that this is indeed the case.

Males, in this sample, considered the psyche as more strongly embodied than females: they believed that epistemic states are more likely to emerge in a replica of one’s body (in Experiment 1) and that psychological traits are less likely to persist upon the body’s demise, in the afterlife (in Experiment 2). Experiment 3 further showed that males are also more likely to consider psychological traits as innate—as expected from past findings suggesting that Dualism begets Empiricism.

A follow-up analysis confirmed that these differences in reasoning about bodies and minds are linked to ToM. Not only did males in this sample score lower than females on ToM, but their ToM scores correlated with their Dualist intuitions.

As noted, these results ought to be interpreted with caution: the gender differences observed here may not hold universally, and they certainly do not speak to the reasoning of any individual person. Indeed, ToM abilities demonstrably depend on multiple factors, including linguistic experience and culture. But inasmuch as females show superior ToM, they ought to lean towards Dualism and Empiricism. Dualism, then, is linked to ToM.

Wednesday, July 20, 2022

Knowledge before belief

Phillips, J., Buckwalter, W. et al. (2021)
Behavioral and Brain Sciences, 44, E140.
doi:10.1017/S0140525X20000618

Abstract

Research on the capacity to understand others' minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one does not even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that nonhuman primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge (but not belief) attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibits a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind – one that is focused on understanding others' minds in relation to the actual world, rather than independent from it.

From the last section

Learning from others, cultural evolution, and what is special about humans

A capacity for reliably learning from others is critically important not only within a single lifespan, but also across them—at the level of human societies. Indeed, this capacity to reliably learn from others has been argued to be essential for humans’ unique success in the accumulation and transmission of cultural knowledge (e.g., Henrich, 2015; Heyes, 2018). Perhaps unsurprisingly, the argument we’ve made about the primary role of knowledge representations in cognition fits nicely with this broad view of why humans have been so successful: this capacity for reliable learning is likely supported by our comparatively basic theory of mind representations.

At the same time, this suggestion cuts against another common proposal for which ability underwrites the wide array of ways in which humans have been uniquely successful, namely their ability to represent others’ beliefs (Baron-Cohen, 1999; Call & Tomasello, 2008; Pagel, 2012; Povinelli & Preuss, 1995; Tomasello, 1999; Tomasello et al., 1993). While the ability to represent others’ beliefs may indeed turn out to be unique to humans and critically important for some purposes, it does not seem to underwrite humans’ capacity for the accumulation of cultural knowledge. After all, precisely at the time in human development when the vast majority of critical learning occurs (infancy and early childhood), we find robust evidence for a capacity for knowledge rather than belief representation (§4.2).

Saturday, January 29, 2022

Are some cultures more mind-minded in their moral judgements than others?

Barrett, H. C., & Saxe, R. R. (2021).
Phil. Trans. R. Soc. B, 376: 20200288.

Abstract

Cross-cultural research on moral reasoning has brought to the fore the question of whether moral judgements always turn on inferences about the mental states of others. Formal legal systems for assigning blame and punishment typically make fine-grained distinctions about mental states, as illustrated by the concept of mens rea, and experimental studies in the USA and elsewhere suggest everyday moral judgements also make use of such distinctions. On the other hand, anthropologists have suggested that some societies have a morality that is disregarding of mental states, and have marshalled ethnographic and experimental evidence in support of this claim. Here, we argue against the claim that some societies are simply less ‘mind-minded’ than others about morality. In place of this cultural main effects hypothesis about the role of mindreading in morality, we propose a contextual variability view in which the role of mental states in moral judgement depends on the context and the reasons for judgement. On this view, which mental states are or are not relevant for a judgement is context-specific, and what appear to be cultural main effects are better explained by culture-by-context interactions.

(cut)

Summing up: Mind-mindedness in context

Our critique of cultural main effects theories, we think, is likely to apply to many domains, not just moral judgement. Dimensions of cultural difference such as the ‘collectivist/individualist’ dimension [50] may capture some small main effects of cultural difference, but we suspect that collectivism/individualism is a parameter that can be flipped contextually within societies to a much greater degree than it varies as a main effect across societies. We may be collectivists within families, for example, but individualists at work. Similarly, we suggest that everywhere there are contexts in which one’s mental states may be deemed morally irrelevant, and others where they are not. Such judgements vary not just across contexts, but across individuals and time. What we argue against, then, is thinking of mindreading as a resource that is scarce in some places and plentiful in others. Instead, we should think about it as a resource that is available everywhere, and whose use in moral judgement depends on a multiplicity of factors, including social norms but also, importantly, the reasons for which people are making judgements. Cognitive resources such as theory of mind might best be seen as ingredients that can be combined in different ways across people, places, and situations. On this view, the space of moral judgements represents a mosaic of variously combined ingredients.


Friday, January 7, 2022

Moral Appraisals Guide Intuitive Legal Determinations

B. Flanagan, G.F.C.F. de Almeida, et al.
researchgate.net

Abstract 

Socialization demands the capacity to observe a plethora of private, legal, and institutional rules. To accomplish this, individuals must grasp rules’ meaning and infer the class of conduct each proscribes. Yet this basic account neglects important nuance in the way we reason about complex cases in which a rule’s literal or textualist interpretation conflicts with deeper values. In six studies (total N = 2541), we examined legal determinations through the lens of these cases. We found that moral appraisals—of the rule’s value (Study 1) and the agent’s character (Studies 2-3)—shaped people’s application of rules, driving counter-literal legal determinations. These effects were stronger under time pressure and were weakened by the opportunity to reflect (Study 4). Our final studies explored the role of theory of mind: Textualist judgments arose when agents were described as cognizant of the rule’s text yet ignorant of its deeper purpose (Study 5). Meanwhile, the intuitive tendency toward counter-literal determinations was strongest when the rule’s purpose could be inferred from its text—pointing toward an influence of spontaneous mental state ascriptions (Studies 6a-6b). Together, our results elucidate the cognitive basis of legal reasoning: Intuitive legal determinations build on core competencies in moral cognition, including mental state and character inferences. In turn, cognitive control dampens these effects, promoting a broadly textualist response pattern.

General Discussion 

Our present studies suggest that moral appraisals shape people’s determinations of whether various rules have been violated. Counter-literal judgments emerge when agents violate a rule’s morally laudable purpose, but not when they violate a rule’s evil purpose (Study 1). An impact of moral appraisals is observed even when manipulating the transgressor’s broader moral character—such that blameworthy agents are deemed to violate rules to a greater extent than praiseworthy agents, even when both behaviors fall within the literal scope of the rule (Study 2). These effects persist when applying two further robustness checks: (i) when encouraging participants to concurrently and independently evaluate the morality as well as the legality of the target behaviors, and (ii) when explicitly denying any constitutional constraints on the moral propriety of legal or private rules (Study 3). Turning our attention to the underlying cognitive mechanisms, we found that applying time pressure promoted counter-literal judgments (Study 4), suggesting that such decisions are driven by automatic cognitive processes. We then examined how representations of the agent’s knowledge impacted rule application: Stipulating the agent’s ignorance of the rule’s underlying purpose helped to explain the default tendency toward textualist determinations (Study 5). Finally, we uncovered an effect of spontaneous mental state inferences on judgments of whether rules had been violated: Participants appeared to automatically represent the likelihood of inferring the rule’s true purpose from its text, and the inferability of a rule’s purpose yielded greater counter-literal tendencies (Studies 6a-6b)—regardless of the agent’s actual knowledge status.


In essence, an individual's moral judgments affect their interpretation of laws and bias the decision-making process.

Sunday, December 5, 2021

The psychological foundations of reputation-based cooperation

Manrique, H., et al. (2021, June 2).
https://doi.org/10.1098/rstb.2020.0287

Abstract

Humans care about having a positive reputation, which may prompt them to help in scenarios where the return benefits are not obvious. Various game-theoretical models support the hypothesis that concern for reputation may stabilize cooperation beyond kin, pairs or small groups. However, such models are not explicit about the underlying psychological mechanisms that support reputation-based cooperation. These models therefore cannot account for the apparent rarity of reputation-based cooperation in other species. Here we identify the cognitive mechanisms that may support reputation-based cooperation in the absence of language. We argue that a large working memory enhances the ability to delay gratification, to understand others' mental states (which allows for perspective-taking and attribution of intentions), and to create and follow norms, which are key building blocks for increasingly complex reputation-based cooperation. We review the existing evidence for the appearance of these processes during human ontogeny as well as their presence in non-human apes and other vertebrates. Based on this review, we predict that most non-human species are cognitively constrained to show only simple forms of reputation-based cooperation.

Discussion

We have presented four basic psychological building blocks that we consider important facilitators for complex reputation-based cooperation: working memory, delay of gratification, theory of mind, and social norms. Working memory allows for parallel processing of diverse information, to properly assess others’ actions and update their reputation scores. Delay of gratification is useful for many types of cooperation, but may be particularly relevant for reputation-based cooperation, where the returns come from a future interaction with an observer rather than an immediate reciprocation by one’s current partner. Theory of mind makes it easier to properly assess others’ actions, and reduces the risk that spreading errors will undermine cooperation. Finally, norms support theory of mind by giving individuals a benchmark of what is right or wrong. The more developed each of these building blocks is, the more complex the interaction structure can become. We are aware that by picking these four socio-cognitive mechanisms we leave out other processes that might be involved, e.g. long-term memory, yet we think the ones we picked are more critical and better allow for comparison across species.
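The game-theoretical models of reputation-based cooperation that the abstract alludes to can be illustrated with a minimal image-scoring simulation, in the spirit of classic indirect-reciprocity models. This is a toy sketch rather than the authors' model: the strategy set, score bounds, and payoff parameters below are all illustrative assumptions.

```python
import random

def simulate_image_scoring(n_agents=50, defector_frac=0.2, rounds=2000,
                           cost=1.0, benefit=3.0, seed=0):
    """Toy indirect-reciprocity simulation: donors playing the
    'discriminator' strategy help recipients whose public image score
    is non-negative; any refusal lowers the donor's own score (the
    simple image-scoring rule, which ignores justified refusals)."""
    rng = random.Random(seed)
    # First defector_frac of agents never help; the rest discriminate.
    is_defector = [i < int(n_agents * defector_frac) for i in range(n_agents)]
    scores = [0] * n_agents       # public reputation (image) scores
    payoffs = [0.0] * n_agents
    helps = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        if not is_defector[donor] and scores[recipient] >= 0:
            payoffs[donor] -= cost        # helping is costly to the donor...
            payoffs[recipient] += benefit  # ...and beneficial to the recipient
            scores[donor] = min(scores[donor] + 1, 5)
            helps += 1
        else:
            scores[donor] = max(scores[donor] - 1, -5)
    return scores, payoffs, helps
```

Because benefit exceeds cost, every act of helping adds net welfare, so total payoff across the group equals the number of helping acts times (benefit - cost); reputation is what allows helpers to be preferentially helped in later rounds.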

Thursday, August 19, 2021

A simple definition of ‘intentionally’

Quillien, T., & German, T. C.
Cognition
Volume 214, September 2021, 104806

Abstract

Cognitive scientists have been debating how the folk concept of intentional action works. We suggest a simple account: people consider that an agent did X intentionally to the extent that X was causally dependent on how much the agent wanted X to happen (or not to happen). Combined with recent models of human causal cognition, this definition provides a good account of the way people use the concept of intentional action, and offers natural explanations for puzzling phenomena such as the side-effect effect. We provide empirical support for our theory, in studies where we show that people's causation and intentionality judgments track each other closely, in everyday situations as well as in scenarios with unusual causal structures. Study 5 additionally shows that the effect of norm violations on intentionality judgments depends on the causal structure of the situation, in a way uniquely predicted by our theory. Taken together, these results suggest that the folk concept of intentional action has been difficult to define because it is made of cognitive building blocks, such as our intuitive concept of causation, whose logic cognitive scientists are just starting to understand.

From the end

People can use the word “intentionally” in very strange ways. Our intuitions about whether something is intentional are swayed by moral considerations, are pulled one way or another depending on the amount of control an agent exerts, and are influenced by how circuitous the causal chain between the agent and the outcome is. Intentionality requires a relevant belief, but the latter can be present in very small doses. Norm-violating actions are judged as more intentional than norm-conforming actions – except when they are judged as less intentional.

These seemingly erratic intuitions can be anxiety-inducing. One might conclude that our commonsense psychology is fundamentally moralistic; that linguistic meaning is hopelessly entangled in its context; or that motivational and pragmatic factors constantly warp our intuitions about the proper extension of words.

We think such anxiety might be misplaced. Instead, we view the strangeness of “intentionally” as emerging naturally from the core structure of the concept. The way people use the concept of intentional action offers a fascinating window on some of the building blocks that make up human thought: it lets us glimpse into our implicit causal model of the mind, and the algorithms with which we assign causes to events.

Sunday, May 30, 2021

Win–Win Denial: The Psychological Underpinnings of Zero-Sum Thinking

Johnson, S. G. B., Zhang, J., & Keil, F. 
(2020, April 30).
https://doi.org/10.31234/osf.io/efs5y

Abstract

A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across 4 studies (and 8 further studies in the Supplementary Materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterwards. These studies revealed that win–win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win–win denial are considered, with the most important influences being mercantilist theories of value (confusing wealth for money) and theory of mind limits (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society.

(cut)

From the Discussion

Is Win–Win Denial Rational?

The conclusion that voluntary transactions benefit both parties rests on assumptions, and can therefore admit exceptions when these assumptions do not hold. Voluntary trades are mutually beneficial when the parties are performing rational, selfish cost–benefit calculations and when there are no critical asymmetries in information (e.g., fraud). There are several ways that violations of these assumptions could lead a transaction not to be win–win. Consumers could have inconsistent preferences over time, such that something believed to be beneficial at one time proves non-beneficial later on (e.g., liking a shirt when one buys it in the store, but growing weary of it after a couple of months). Consumers could have self-control failures, making an impulse purchase that proves unwise in the longer term. Consumers could have other-regarding preferences, buying something that benefits someone else but not oneself. Finally, the consumer could be deceived by a seller who knows that the product will not satisfy their preferences (e.g., a crooked used-car salesman).

These are of course more than theoretical possibilities—many instances of human irrationality have been demonstrated in lab and field studies (Frederick et al., 2009; Loewenstein & Prelec, 1992; Malmendier & Tate, 2005, among many others). The key question is whether the real-world prevalence of irrationality and fraud is sufficient to justify the conclusion that ordinary consumer transactions—like those tested here—are so riddled with incompetence that our participants were right to deny that transactions are typically win–win. We respond to this challenge with four points.

First, an empirical point. It is not just the magnitude of win–win denial that is of interest here, but how this magnitude responds to our experimental manipulations. It is hard to see how the effects of time-framing or cueing participants to buyers’ reasons would produce the effects that they do, independent of the mechanisms we have proposed for win–win denial (namely, mercantilism and theory of mind). It is especially difficult to see why people would claim that barters make neither party better off if the issue is exploitation. Thus, even if the magnitude of the effects is reasonable in some conditions of some of our experiments because people’s intuitions are attuned to the (allegedly) large extent of market failures, some of the patterns we see and the differences in these patterns across conditions seem to necessitate the mechanisms we propose.

Second, a sanity check. We tested intuitions about a range of typical consumer transactions in our items, finding consistent effects across items (see Part A of the Supplementary Materials). Is it really that plausible that people are impulsively hiring plumbers or that their hair stylists are routinely fraudsters? If such ordinary transactions were actually making consumers worse off, it would be very difficult to see how the rise of market economies has brought prosperity to much of the world—indeed, if win–win denial correctly describes most consumer transactions, one should predict a negative relationship between well-being and economic activity (contradicting the large association between subjective well-being and per capita income across countries; Stevenson & Wolfers, 2013). In our view, one can acknowledge occasional consumer irrationalities without thereby concluding that all or most market activity is irrational, which, we submit, would fly in the face of both economic science and common sense. Indeed, to claim that consumers are consistently irrational threatens paradox: the more one thinks that consumers are irrational in general, the more one must believe that participants in the current experiments are (rationally) attuned to their own irrationality.
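The baseline economic claim the authors defend, that a voluntary exchange benefits both parties whenever the buyer values the good above the price and the price exceeds the seller's cost, reduces to a two-line surplus calculation. The numbers below are purely illustrative.

```python
def surpluses(buyer_value, seller_cost, price):
    """Gains from trade in a voluntary exchange: the buyer gains
    (value - price), the seller gains (price - cost). Whenever
    cost < price < value, both surpluses are positive: a win-win."""
    return buyer_value - price, price - seller_cost

# A haircut the customer values at $40, costing the stylist $15 to provide, priced at $25:
buyer_gain, seller_gain = surpluses(40, 15, 25)
print(buyer_gain, seller_gain)  # 15 10 -- both parties are better off
```

Win–win denial, on this framing, amounts to judging one of these two surpluses (typically the buyer's) to be zero or negative even when nothing in the scenario suggests fraud or irrationality.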

Thursday, April 29, 2021

Why evolutionary psychology should abandon modularity

Pietraszewski, D., & Wertz, A. E. 
(2021, March 29).
https://doi.org/10.1177/1745691621997113

Abstract

A debate surrounding modularity—the notion that the mind may be exclusively composed of distinct systems or modules—has held philosophers and psychologists captive for nearly forty years. Concern about this thesis—which has come to be known as the massive modularity debate—serves as the primary grounds for skepticism of evolutionary psychology’s claims about the mind. Here we will suggest that the entirety of this debate, and the very notion of massive modularity itself, is ill-posed and confused. In particular, it is based on a confusion about the level of analysis (or reduction) at which one is approaching the mind. Here, we will provide a framework for clarifying at what level of analysis one is approaching the mind, and explain how a systemic failure to distinguish between different levels of analysis has led to profound misunderstandings of not only evolutionary psychology, but also of the entire cognitivist enterprise of approaching the mind at the level of mechanism. We will furthermore suggest that confusions between different levels of analysis are endemic throughout the psychological sciences—extending well beyond issues of modularity and evolutionary psychology. Therefore, researchers in all areas should take preventative measures to avoid this confusion in the future.

Conclusion

What has seemed to be an important but interminable debate about the nature of (massive) modularity is better conceptualized as the modularity mistake. Clarifying the level of analysis at which one is operating will not only resolve the debate, but render it moot. In its stead, researchers will be free to pursue much simpler, clearer, and more profound questions about how the mind works. If we proceed as usual, we will end up back in the same confused place where we started in another 40 years—arguing once again about who’s on first. Confusing or collapsing across different levels of analysis is not just a problem for modularity and evolutionary psychology. Rather, it is the greatest problem facing early-21st-century psychology, dwarfing even the current replication crisis. Since at least the days of the neobehaviorists (e.g., Tolman, 1964), the ontology of the intentional level has become mingled with the functional level in all areas of the cognitive sciences (see Stich, 1986). Constructs such as thinking, reasoning, effort, intuition, deliberation, automaticity, and consciousness have become misunderstood and misused as functional-level descriptions of how the mind works. Appeals to a central agency who uses “their” memory, attention, reasoning, and so on have become commonplace and unremarkable. Even the concept of cognition itself has fallen into the same levels-of-analysis confusion seen in the modularity mistake. In the process, a shared notion of what it means to provide a coherent functional-level (or mechanistic) description of the mind has been lost.

We do not bring up these broader issues to resolve them here. Rather, we wish to emphasize what is at stake when it comes to being clear about levels of analysis. If we do not respect the distinctions between levels, no amount of hard work, nor any mountain of data we will ever collect, will resolve the problems created by conflating them. The only question is whether we are willing to begin the slow, difficult — but ultimately clarifying and redeeming — process of unconfounding the intentional and functional levels of analysis. The modularity mistake is as good a place as any to start.

Saturday, November 28, 2020

Toward a Hierarchical Model of Social Cognition: A Neuroimaging Meta-Analysis and Integrative Review of Empathy and Theory of Mind

Schurz, M. et al.
Psychological Bulletin. 
Advance online publication. 

Abstract

Along with the increased interest in and volume of social cognition research, there has been higher awareness of a lack of agreement on the concepts and taxonomy used to study social processes. Two central concepts in the field, empathy and Theory of Mind (ToM), have been identified as overlapping umbrella terms for different processes of limited convergence. Here, we review and integrate evidence of brain activation, brain organization, and behavior into a coherent model of social-cognitive processes. We start with a meta-analytic clustering of neuroimaging data across different social-cognitive tasks. Results show that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research. A higher level describes more broad and abstract classes of functioning, whereas a lower one explains how functions are applied to concrete contexts given by particular stimulus and task formats. Specifically, the higher level of our model suggests 3 groups of neurocognitive processes: (a) predominantly cognitive processes, which are engaged when mentalizing requires self-generated cognition decoupled from the physical world; (b) more affective processes, which are engaged when we witness emotions in others based on shared emotional, motor, and somatosensory representations; (c) combined processes, which engage cognitive and affective functions in parallel. We discuss how these processes are explained by an underlying principal gradient of structural brain organization. Finally, we validate the model by a review of empathy and ToM task interrelations found in behavioral studies.

Public Significance Statement

Empathy and Theory of Mind are important human capacities for understanding others. Here, we present a meta-analysis of neuroimaging data from 4,207 participants, which shows that these abilities can be deconstructed into specific and partially shared neurocognitive subprocesses. Our findings provide systematic, large-scale support for the hypothesis that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research.

Monday, October 5, 2020

Kinship intensity and the use of mental states in moral judgment across societies

C. M. Curtin and others
Evolution and Human Behavior
Volume 41, Issue 5, September 2020, Pages 415-429

Abstract

Decades of research conducted in Western, Educated, Industrialized, Rich, & Democratic (WEIRD) societies have led many scholars to conclude that the use of mental states in moral judgment is a human cognitive universal, perhaps an adaptive strategy for selecting optimal social partners from a large pool of candidates. However, recent work from a more diverse array of societies suggests there may be important variation in how much people rely on mental states, with people in some societies judging accidental harms just as harshly as intentional ones. To explain this variation, we develop and test a novel cultural evolutionary theory proposing that the intensity of kin-based institutions will favor less attention to mental states when judging moral violations. First, to better illuminate the historical distribution of the use of intentions in moral judgment, we code and analyze anthropological observations from the Human Relations Area Files. This analysis shows that notions of strict liability—wherein the role for mental states is reduced—were common across diverse societies around the globe. Then, by expanding an existing vignette-based experimental dataset containing observations from 321 people in a diverse sample of 10 societies, we show that the intensity of a society's kin-based institutions can explain a substantial portion of the population-level variation in people's reliance on intentions in three different kinds of moral judgments. Together, these lines of evidence suggest that people's use of mental states has coevolved culturally to fit their local kin-based institutions. We suggest that although reliance on mental states has likely been a feature of moral judgment in human communities over historical and evolutionary time, the relational fluidity and weak kin ties of today's WEIRD societies position these populations' psychology at the extreme end of the global and historical spectrum.

General Discussion

We have argued that some of the variation in the use of mental states in moral judgment can be explained as a psychological calibration to the social incentives, informational constraints, and cognitive demands of kin-based institutions, which we have assessed using our construct of kinship intensity. Our examination of ethnographic accounts of norms that diminish the importance of mental states reveals that these are likely common across the ethnographic record, while our analysis of data on moral judgments of hypothetical violations from a diverse sample of ten societies indicates that kinship intensity is associated with a reduced tendency to rely on intentions in moral judgment. Together, these lines of ethnographic and psychological inquiry provide evidence that (i) the heavy reliance of contemporary, WEIRD populations on intentions is likely neither globally nor historically representative, and (ii) kinship intensity may explain some of the population-level variation in the use of mental-state reasoning in moral judgment.

The research is here.

Saturday, June 13, 2020

Rationalization is rational

Fiery Cushman
Behavioral and Brain Sciences, 43, E28.
(2020)
doi:10.1017/S0140525X19001730

Abstract

Rationalization occurs when a person has performed an action and then concocts the beliefs and desires that would have made it rational. Then, people often adjust their own beliefs and desires to match the concocted ones. While many studies demonstrate rationalization, and a few theories describe its underlying cognitive mechanisms, we have little understanding of its function. Why is the mind designed to construct post hoc rationalizations of its behavior, and then to adopt them? This may accomplish an important task: transferring information between the different kinds of processes and representations that influence our behavior. Human decision making does not rely on a single process; it is influenced by reason, habit, instinct, norms, and so on. Several of these influences are not organized according to rational choice (i.e., computing and maximizing expected value). Rationalization extracts implicit information – true beliefs and useful desires – from the influence of these non-rational systems on behavior. This is a useful fiction – fiction, because it imputes reason to non-rational psychological processes; useful, because it can improve subsequent reasoning. More generally, rationalization belongs to the broader class of representational exchange mechanisms, which transfer information between many different kinds of psychological representations that guide our behavior. Representational exchange enables us to represent any information in the manner best suited to the particular tasks that require it, balancing accuracy, efficiency, and flexibility in thought. The theory of representational exchange reveals connections between rationalization and theory of mind, inverse reinforcement learning, thought experiments, and reflective equilibrium.

From the Conclusion

But human action is also shaped by non-rational forces. In these cases, any answer to the question Why did I do that? that invokes belief, desire, and reason is at best a useful fiction.  Whether or not we realize it, the question we are actually answering is: What facts would have made that worth doing? Like an amnesic government agent, we are trying to divine our programmer’s intent – to understand the nature of the world we inhabit and our purpose in it. In these cases, rationalization implements a kind of rational inference. Specifically, we infer an adaptive set of representations that guide subsequent reasoning, based on the behavioral prescriptions of non-rational systems. This inference is valid because reasoning, like non-rational processes, is ultimately designed to maximize biological fitness. It is akin to IRL as well as to Bayesian models of theory of mind, and thus it offers a new interpretation of the function of these processes.
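Cushman's analogy to inverse reinforcement learning can be made concrete. The sketch below is purely illustrative (none of the names or numbers come from the paper): given an observed choice, it infers which candidate "desire" would have made that choice most probable for a softmax-rational chooser.

```python
import math

def rationalize(choice, options, candidate_desires):
    """Infer which candidate desire (a named weight vector over option
    features) makes the observed choice look most rational, assuming a
    softmax-rational chooser. Illustrative only, not the paper's formalism."""
    def utility(option, weights):
        return sum(w * f for w, f in zip(weights, option))

    def prob_of_choice(weights):
        exps = [math.exp(utility(o, weights)) for o in options]
        return math.exp(utility(choice, weights)) / sum(exps)

    # "What facts would have made that worth doing?": pick the desire
    # under which the observed action was most probable.
    return max(candidate_desires, key=lambda d: prob_of_choice(candidate_desires[d]))

# An agent chose the (tasty=1, healthy=0) snack over the (tasty=0, healthy=1) one:
options = [(1.0, 0.0), (0.0, 1.0)]
desires = {"values taste": (2.0, 0.0), "values health": (0.0, 2.0)}
inferred = rationalize(options[0], options, desires)
```

Behaviour goes in, an imputed desire comes out; adopting that imputed desire for future reasoning is the "useful fiction" the abstract describes.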

The target article is here, along with expert commentary.

Wednesday, May 20, 2020

People judge others to have more control over beliefs than they themselves do.

Cusimano, C., & Goodwin, G. (2020, April 3).
https://doi.org/10.1037/pspa0000198

Abstract

People attribute considerable control to others over what those individuals believe. However, no work to date has investigated how people judge their own belief control, nor whether such judgments diverge from their judgments of others. We addressed this gap in seven studies and found that people judge others to be more able to voluntarily change what they believe than they themselves are. This occurs when people judge others who disagree with them (Study 1) as well as others who agree with them (Studies 2-5, 7), and it occurs when people judge strangers (Studies 1-2, 4-5) as well as close others (Studies 3, 7). It appears not to be explained by impression management or self-enhancement motives (Study 3). Rather, there is a discrepancy between the evidentiary constraints on belief change that people access via introspection, and their default assumptions about the ease of voluntary belief revision. That is, people spontaneously tend to think about the evidence that supports their beliefs, which leads them to judge their beliefs as outside their control. But they apparently fail to generalize this feeling of constraint to others, and similarly fail to incorporate it into their generic model of beliefs (Studies 4-7). We discuss the implications of our findings for theories of ideology-based conflict, actor-observer biases, naïve realism, and ongoing debates regarding people's actual capacity to voluntarily change what they believe.

Conclusion

The present paper uncovers an important discrepancy in how people think about their own and others' beliefs; namely, that people judge that others have a greater capacity to voluntarily change their beliefs than they themselves do. Put succinctly, when someone says, "You can choose to believe in God, or you can choose not to believe in God," they may often mean that you can choose but they cannot. We have argued that this discrepancy derives from two distinct ways people reason about belief control: either by consulting their default theory of belief, or by introspecting and reporting what they feel when they consider voluntarily changing a belief. When people apply their default theory of belief, they judge that they and others have considerable control over what they believe. But, when people consider the possibility of trying to change a particular belief, they tend to report that they have less control. Because people do not have access to the experiences of others, they rely on their generic theory of beliefs when judging others' control. Discrepant attributions of control for self and other emerge as a result. This may in turn have important downstream effects on people's behavior during disagreements. More work is needed to explore these downstream effects, as well as to understand how much control people actually have over what they believe. Predictably, we find the results from these studies compelling, but admit that readers may believe whatever they please.

The research is here.

Sunday, October 27, 2019

Language Is the Scaffold of the Mind

Anna Ivanova
nautil.us
Originally posted September 26, 2019

Can you imagine a mind without language? More specifically, can you imagine your mind without language? Can you think, plan, or relate to other people if you lack words to help structure your experiences?

Many great thinkers have drawn a strong connection between language and the mind. Oscar Wilde called language “the parent, and not the child, of thought”; Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world”; and Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.”

After all, language is what makes us human, what lies at the root of our awareness, our intellect, our sense of self. Without it, we cannot plan, cannot communicate, cannot think. Or can we?

Imagine growing up without words. You live in a typical industrialized household, but you are somehow unable to learn the language of your parents. That means that you do not have access to education; you cannot properly communicate with your family other than through a set of idiosyncratic gestures; you never get properly exposed to abstract ideas such as “justice” or “global warming.” All you know comes from direct experience with the world.

It might seem that this scenario is purely hypothetical. There aren’t any cases of language deprivation in modern industrialized societies, right? It turns out there are. Many deaf children born into hearing families face exactly this issue. They cannot hear and, as a result, do not have access to their linguistic environment. Unless the parents learn sign language, the child’s language access will be delayed and, in some cases, missing completely.

The info is here.


Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Sunday, March 18, 2018

Machine Theory of Mind

Neil C. Rabinowitz, F. Perbet, H. F. Song, C. Zhang, S.M. Ali Eslami, M. Botvinick
Artificial Intelligence
Submitted February 2018

Abstract

Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.
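The ToMnet's two key ingredients, a prior over agent behaviour learned from a population and rapid updating from a few observations of a new agent, can be caricatured with a simple Dirichlet-multinomial observer. This is a toy sketch of the idea, not the paper's neural architecture; all names and numbers are invented.

```python
from collections import Counter

ACTIONS = ["up", "down", "left", "right"]

def population_prior(trajectories, strength=4.0):
    """Stand-in for the meta-learned prior: pool action counts across many
    agents' trajectories (with Laplace smoothing) into Dirichlet
    pseudo-counts of total mass `strength`."""
    counts = Counter(a for traj in trajectories for a in traj)
    total = sum(counts.values()) + len(ACTIONS)
    return {a: strength * (counts[a] + 1) / total for a in ACTIONS}

def predict_next(prior, observations):
    """Posterior predictive over a new agent's next action after only a few
    behavioural observations (Dirichlet-multinomial update)."""
    counts = Counter(observations)
    z = sum(prior.values()) + len(observations)
    return {a: (prior[a] + counts[a]) / z for a in ACTIONS}

# A population that mostly moves right; a new agent is then seen going up twice.
population = [["right", "right", "up"], ["right", "down", "right"]]
prior = population_prior(population)
posterior = predict_next(prior, ["up", "up"])
# Two observations already shift belief toward "up", while the population
# prior still lends weight to "right".
```

The ToMnet replaces these hand-built counts with learned network embeddings, but the bootstrapping pattern is the same: a strong population prior, sharpened by a handful of agent-specific observations.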

The research is here.

Tuesday, August 9, 2016

Fiction: Simulation of Social Worlds

By Keith Oatley
Trends in Cognitive Science
(2016) Volume 20, Issue 8, p 618–628

Here is an excerpt:

What is the basis for effects of improved empathy and theory-of-mind with engagement in fiction? Two kinds of account are possible, process and content, and they complement each other.

One kind of process is inference: engagement in fiction may involve understanding characters by inferences of the sort we make in conversation about what people mean and what kinds of people they are. In an experiment to test this hypothesis, participants were asked to read Alice Munro's The Office, a first-person short story about a woman who rents an office in which to write. In one condition, the story starts in Munro's words, which include ‘But here comes the disclosure which is not easy for me. I am a writer. That does not sound right. Too presumptuous, phony, or at least unconvincing’. In a comparison version, the story starts with readers being told directly what the narrator feels: ‘I’m embarrassed telling people that I am a writer …’ (p. 270). People who read the version in Munro's own words had to make inferences about what kind of person the narrator was and how she felt. They attained a deeper identification and understanding of the protagonist than did those who were told directly how she felt. Engagement in fiction can be thought of as practice in inference making of this kind.

A second kind of process is transportation: the extent to which people become emotionally involved, immersed, or carried away imaginatively in a story. The more transportation that occurred in reading a story, the greater the story-consistent emotional experience has been found to be. Emotion in fiction is important because, as in life, it can signal what is significant in the relation between events and our concerns [42]. In an experiment on empathetic effects, the more readers were transported into a fictional story, the greater were found to be both their empathy and their likelihood of responding on a behavioral measure: helping someone who had dropped some pencils on the floor. The vividness of imagery during reading has been found to improve transportation and to increase empathy. To investigate such imagery, participants in a functional magnetic resonance imaging (fMRI) machine were asked to imagine a scene when given between three and six spoken phrases, for instance, ‘a dark blue carpet’ … ‘a carved chest of drawers’ … ‘an orange striped pencil’. Three phrases were enough to activate the hippocampus to its largest extent and for participants to imagine a scene with maximum vividness. In another study, one group of participants listened to a story and rated the intensity of their emotions while reading. In a second group of participants, parts of the story that raters had found most emotional produced the largest changes in heart rate and greatest fMRI-based activations.

The article is here.

Thursday, December 11, 2014

Moral Evaluations Depend Upon Mindreading Moral Occurrent Beliefs

By Clayton R. Critcher, Erik G. Helzer, David Tannenbaum, and David A. Pizarro

Abstract

People evaluate the moral character of others not merely based on what they do, but why they do it. Because an agent’s state of mind is not directly observable, people typically engage in mindreading—attempts at inferring mental states—when forming moral evaluations. The present paper identifies a heretofore unstudied focus of mindreading, moral occurrent beliefs—the cognitions (e.g., thoughts, beliefs, principles, concerns, rules) accessible in an agent’s mind while confronting a morally-relevant decision that could provide a moral justification for a particular course of action. Whereas previous mindreading research has examined how people “reason back” to make sense of why agents behaved as they did, we instead ask how mindread occurrent beliefs (MOBs) constrain moral evaluations for an agent’s subsequent actions. Our studies distinguish three accounts of how MOBs influence moral evaluations, show that people rely on MOBs spontaneously (instead of merely when experimental measures draw attention to them), and identify non-moral cues (e.g., whether the situation demands a quick decision) that guide MOBs. Implications for theory of mind, moral psychology, and social cognition are discussed.

The entire paper is here.

Wednesday, February 26, 2014

Theory of Mind: Did Evolution Fool Us?

By Marie Devaine, Guillaume Hollard, and Jean Daunizeau
PLOS One
Published: February 05, 2014 DOI: 10.1371/journal.pone.0087619

Abstract

Theory of Mind (ToM) is the ability to attribute mental states (e.g., beliefs and desires) to other people in order to understand and predict their behaviour. If others are rewarded to compete or cooperate with you, then what they will do depends upon what they believe about you. This is the reason why social interaction induces recursive ToM, of the sort “I think that you think that I think, etc.”. Critically, recursion is the common notion behind the definition of sophistication of human language, strategic thinking in games, and, arguably, ToM. Although sophisticated ToM is believed to have high adaptive fitness, broad experimental evidence from behavioural economics, experimental psychology and linguistics points towards limited recursivity in representing others’ beliefs. In this work, we test whether such an apparent limitation may in fact be adaptive, i.e. optimal in an evolutionary sense. First, we propose a meta-Bayesian approach that can predict the behaviour of ToM sophistication phenotypes who engage in social interactions. Second, we measure their adaptive fitness using evolutionary game theory. Our main contribution is to show that one does not have to appeal to biological costs to explain our limited ToM sophistication. In fact, the evolutionary cost/benefit ratio of ToM sophistication is non-trivial. This is partly because an informational cost prevents highly sophisticated ToM phenotypes from fully exploiting less sophisticated ones (in a competitive context). In addition, cooperation surprisingly favours lower levels of ToM sophistication. Taken together, these quantitative corollaries of the “social Bayesian brain” hypothesis provide an evolutionary account for both the limitation of ToM sophistication in humans as well as the persistence of low ToM sophistication levels.
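Recursive ToM sophistication of this kind is often formalised as level-k reasoning: a level-k player best-responds to a level-(k-1) model of the opponent. The sketch below is a generic level-k illustration (not the authors' meta-Bayesian model), using matching pennies, a competitive game where one player wants to match and the other to mismatch.

```python
def best_response(opponent_action, role):
    """Matching pennies: the 'matcher' wants to play what the opponent
    plays; the 'mismatcher' wants to play the opposite."""
    if role == "matcher":
        return opponent_action
    return "heads" if opponent_action == "tails" else "tails"

def level_k_action(k, role, level0_action="heads"):
    """A level-k player best-responds to a level-(k-1) model of the
    opponent; level 0 plays a fixed default (an illustrative convention)."""
    if k == 0:
        return level0_action
    other_role = "mismatcher" if role == "matcher" else "matcher"
    opponent_move = level_k_action(k - 1, other_role, level0_action)
    return best_response(opponent_move, role)

# "I think that you think that I think ...": each extra level adds one
# step of recursion into the opponent's reasoning.
```

Each increment of k buys an advantage only against opponents exactly one level below, which is one intuition for why the evolutionary cost/benefit ratio of ever-deeper recursion is non-trivial.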

The entire article is here.