Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Metacognition.

Saturday, September 16, 2023

A Metacognitive Blindspot in Intellectual Humility Measures

Costello, T. H., Newton, C., Lin, H., & Pennycook, G.
(2023, August 6).

Abstract

Intellectual humility (IH) is commonly defined as recognizing the limits of one’s knowledge and abilities. However, most research has relied entirely on self-report measures of IH, without testing whether these instruments capture the metacognitive core of the construct. Across two studies (Ns = 898; 914), using generalized additive mixed models to detect complex non-linear interactions, we evaluated the correspondence between widely used IH self-reports and performance on calibration and resolution paradigms designed to model the awareness of one’s mental capabilities (and their fallibility). On an overconfidence paradigm (N observations per model = 2,692-2,742), none of five IH measures attenuated the Dunning-Kruger effect, whereby poor performers overestimate their abilities and high performers underestimate them. On a confidence-accuracy paradigm (N observations per model = 7,223-12,706), most IH measures were associated with inflated confidence regardless of accuracy, or were specifically related to confidence when participants were correct but not when they were incorrect. The sole exception was the “Lack of Intellectual Overconfidence” subscale of the Comprehensive Intellectual Humility Scale, which uniquely predicted lower confidence for incorrect responses. Meanwhile, measures of Actively Open-minded Thinking reliably predicted calibration and resolution. These findings reveal substantial discrepancies between IH self-reports and metacognitive abilities, suggesting most IH measures lack validity. It may not be feasible to assess IH via self-report, as indicating a great deal of humility may, itself, be a sign of a failure in humility.

General Discussion

IH represents the ability to identify the constraints of one’s psychological, epistemic, and cultural perspective — to conduct lay phenomenology, acknowledging that the default human perspective is (literally) self-centered (Wallace, 2009) — and thereby cultivate an awareness of the limits of a single person, theory, or ideology to describe the vast and searingly complex universe. It is a process that presumably involves effortful and vigilant noticing – tallying one’s epistemic track record, and especially one’s fallibility (Ballantyne, 2021).

IH, therefore, manifests dynamically in individuals as a boundary between one’s informational environment and one’s model of reality. This portrait of IH-as-boundary appears repeatedly in philosophical and psychological treatments of IH, which frequently frame awareness of (epistemic) limitations as IH’s conceptual, metacognitive core (Leary et al., 2017; Porter, Elnakouri, et al., 2022). Yet as with a limit in mathematics, epistemic limits are appropriately defined as functions: their value depends on inputs (e.g., information environment, access to knowledge) that vary across contexts and individuals. In particular, measuring IH requires identifying at least two quantities — one’s epistemic capabilities and one’s appraisal of said capabilities — from which a third, IH-qua-metacognition, can be derived as the distance between the two quantities.
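
Read literally, this "distance" framing reduces to comparing two numbers on a common scale. Below is a minimal sketch of that reading (hypothetical variable names and values; not the authors' scoring procedure):

    def metacognitive_gap(actual_ability: float, self_appraisal: float) -> float:
        """Signed gap between self-appraised and measured epistemic ability,
        both expressed on a 0-1 scale. Positive values indicate overconfidence;
        values near zero reflect the limit-awareness described above."""
        return self_appraisal - actual_ability

    # Example: a participant who answers 55% of items correctly but judges
    # their accuracy to be 80% shows a +0.25 overconfidence gap.
    print(metacognitive_gap(actual_ability=0.55, self_appraisal=0.80))  # 0.25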

Contemporary IH self-reports tend not to account for either parameter, seeming to rest instead on an auxiliary assumption: that people who are attuned to, and “own”, their epistemic limitations will generate characteristic, intellectually humble patterns of thinking and behavior. IH questionnaires then target these patterns, rather than the shared propensity for IH that the patterns ostensibly reflect.

We sought to both test and circumvent this assumption (and mono-method measurement limitation) in the present research. We did so by defining IH’s metacognitive core, functionally and statistically, in terms of calibration and resolution. We operationalized calibration as the convergence between participants’ performance on a series of epistemic tasks, on the one hand, and participants’ estimation of their own performance, on the other. Given that the relation between self-estimation and actual performance is non-linear (i.e., the Dunning-Kruger effect), there were several pathways by which IH might predict calibration: (1) decreased overestimation among low performers, (2) decreased underestimation among high performers, or (3) unilateral weakening of miscalibration among both low and high performers (for a visual representation, refer to Figure 1). Further, we operationalized epistemic resolution by assessing the relation between IH, on the one hand, and individuals’ item-by-item confidence judgments for correct versus incorrect answers, on the other. Thus, resolution represents the capacity to distinguish between one’s correct and incorrect judgments and beliefs (a seemingly necessary prerequisite for building an accurate and calibrated model of one’s knowledge).
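
As a rough illustration of the resolution measure described above (the paper's actual analyses use generalized additive mixed models; the data below are hypothetical), resolution can be approximated as the difference in mean confidence between correct and incorrect responses:

    from statistics import mean

    def resolution(accuracy, confidence):
        # accuracy: per-item correctness (0/1); confidence: per-item ratings (0-1).
        correct = [c for a, c in zip(accuracy, confidence) if a == 1]
        incorrect = [c for a, c in zip(accuracy, confidence) if a == 0]
        # Positive values mean confidence discriminates right from wrong answers.
        return mean(correct) - mean(incorrect)

    # Hypothetical participant: ten items, slightly less confident when wrong.
    accuracy   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
    confidence = [0.9, 0.8, 0.7, 0.9, 0.7, 0.8, 0.6, 0.9, 0.8, 0.7]
    print(round(resolution(accuracy, confidence), 2))  # about 0.13

Calibration, by contrast, compares overall self-estimated standing with actual standing (as in the gap sketch above), allowing the non-linear Dunning-Kruger pattern to be examined across the performance range.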

Thursday, April 6, 2023

People recognize and condone their own morally motivated reasoning

Cusimano, C., & Lombrozo, T. (2023).
Cognition, 234, 105379.

Abstract

People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an “illusion of objectivity.” We identify an important domain of life in which people harbor little illusion about their biases – when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2–4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People's tendency to be aware and proud of their biases provides both new opportunities, and new challenges, for resolving ideological conflict and improving reasoning.

Highlights

• Dominant theories assume people form beliefs only under an illusion of objectivity.

• We document a boundary condition on this illusion: morally desirable biases.

• People endorse beliefs they regard as evidentially weak but morally desirable.

• People realize when they have just engaged in morally motivated reasoning.

• Accurate self-attributions of moral bias fully attenuate the ‘bias blind spot’.

From the General discussion

Our beliefs about our beliefs – including whether they are biased or justified – play a crucial role in guiding inquiry, shaping belief revision, and navigating disagreement. One line of research suggests that these judgments are almost universally characterized by an illusion of objectivity such that people consciously reason with the goal of being objective and basing their beliefs on evidence, and because of this, people nearly always assume that their current beliefs meet those standards. Another line of work suggests that people sometimes think that values legitimately bear on whether someone is justified to hold a belief (Cusimano & Lombrozo, 2021b). These findings raise the possibility, consistent with some prior theoretical proposals (Cusimano & Lombrozo, 2021a; Tetlock, 2002), that people will knowingly violate norms of impartiality, or knowingly maintain beliefs that lack evidential support, when doing so advances what they consider to be morally laudable goals. Two predictions follow. First, people should evaluate their beliefs in part based on their perceived moral value. And second, in situations in which people engage in morally motivated reasoning, they should recognize that they have done so and should evaluate their morally motivated reasoning as appropriate. We document support for these predictions across four studies (Table 1).

Conclusion

A great deal of work has assumed that people treat objectivity and evidence-based reasoning as cardinal norms governing their belief formation. This assumption has grown increasingly tenuous in light of recent work highlighting the importance of moral concerns in almost all facets of life. Consistent with this recent work, we find evidence that people’s evaluations of the moral quality of a proposition predict their subjective confidence that it is true, their likelihood of claiming that they believe it and know it, and the extent to which they take their belief to be justified. Moreover, people exhibit metacognitive awareness of this fact and approve of morality’s influence on their reasoning. People often want to be right, but they also want to be good – and they know it.

Sunday, January 8, 2023

On second thoughts: changes of mind in decision-making

Stone, C., Mattingley, J. B., & Rangelov, D. (2022).
Trends in Cognitive Sciences, 26(5), 419–431.
https://doi.org/10.1016/j.tics.2022.02.004

Abstract

The ability to change initial decisions in the face of new or potentially conflicting information is fundamental to adaptive behavior. From perceptual tasks to multiple-choice tests, research has shown that changes of mind often improve task performance by correcting initial errors. Decision makers must, however, strike a balance between improvements that might arise from changes of mind and potential energetic, temporal, and psychological costs. In this review, we provide an overview of the change-of-mind literature, focusing on key behavioral findings, computational mechanisms, and neural correlates. We propose a conceptual framework that comprises two core decision dimensions – time and evidence source – which link changes of mind across decision contexts, as a first step toward an integrated psychological account of changes of mind.

Highlights
  • Changes of mind are observed during decision-making across a range of decision contexts.
  • While changes of mind are relatively infrequent, they can serve to improve overall behavioral performance by correcting initial errors.
  • Despite often improving performance, changes of mind incur energetic and temporal costs which can bias decision makers into keeping their original responses.
  • Computational models of decision-making have demonstrated that changes of mind can result from continued evidence accumulation in the post-decisional period.
  • Brain regions involved in metacognitive monitoring and affective processing are instrumental for change-of-mind behavior.

Concluding remarks

Changes of mind have received less attention in the scientific literature than the decisions which precede them. Nevertheless, existing research reveals a wealth of compelling findings, supporting changes of mind as a topic worthy of further exploration. In this review, we have covered changes of mind from a behavioral, computational, and neural perspective, and have attempted to draw parallels between disparate lines of research. To this end, we have proposed a framework comprising core decision dimensions relevant to change-of-mind behavior which we hope will foster development of an integrated account. These dimensions conceptualize changes of mind as iterative, predominantly corrective behavioral updates in the face of newly arriving evidence.

The source of this evidence, and how it is integrated into behavior, depends upon both the decision context and stage. However, the mechanisms underlying changes of mind are not equally well understood across the entire decision space. While changes of mind for perceptual decisions involving accumulation of sensory evidence over short durations have been well characterized, much work is needed to extend these insights to the complex decisions we make in everyday life.
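
To make the post-decisional evidence accumulation idea concrete, here is a minimal simulation sketch (parameters are hypothetical, not taken from the review): evidence continues to arrive for a short window after the initial decision and can occasionally push the accumulator across the opposite bound, producing a change of mind that is usually corrective.

    import random

    def trial(drift=0.2, bound=1.0, noise=0.5, post_steps=10):
        x = 0.0
        # Accumulate noisy evidence until an initial decision bound is crossed.
        while abs(x) < bound:
            x += drift + random.gauss(0, noise)
        initial = 1 if x > 0 else -1
        # Post-decisional period: evidence still in the pipeline keeps arriving.
        for _ in range(post_steps):
            x += drift + random.gauss(0, noise)
        final = 1 if x > 0 else -1
        return initial, final

    random.seed(1)
    trials = [trial() for _ in range(2000)]
    changes = [(i, f) for i, f in trials if i != f]
    corrective = sum(1 for _, f in changes if f == 1)  # drift > 0, so +1 is correct
    print(f"change-of-mind rate: {len(changes) / len(trials):.1%}")
    print(f"corrective share of changes: {corrective / len(changes):.1%}")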


One conclusion: ignoring contradictory evidence can account for "confirmation bias".

Sunday, October 16, 2022

A framework for understanding reasoning errors: From fake news to climate change and beyond

Pennycook, G. (2022, August 31).
https://doi.org/10.31234/osf.io/j3w7d

Abstract

Humans have the capacity, but perhaps not always the willingness, for great intelligence. From global warming to the spread of misinformation and beyond, our species is facing several major challenges that are the result of the limits of our own reasoning and decision-making. So, why are we so prone to errors during reasoning? In this chapter, I will outline a framework for understanding reasoning errors that is based on a three-stage dual-process model of analytic engagement (intuition, metacognition, and reason). The model has two key implications: 1) That a mere lack of deliberation and analytic thinking is a primary source of errors and 2) That when deliberation is activated, it generally reduces errors (via questioning intuitions and integrating new information) rather than increasing errors (via rationalization and motivated reasoning). In support of these claims, I review research showing the extensive predictive validity of measures that index individual differences in analytic cognitive style – even beyond explicit errors per se. In particular, analytic thinking is not only predictive of skepticism about a wide range of epistemically suspect beliefs (paranormal, conspiratorial, COVID-19 misperceptions, pseudoscience and alternative medicines) as well as decreased susceptibility to bullshit, fake news, and misinformation, but also important differences in people’s moral judgments and values as well as their religious beliefs (and disbeliefs). Furthermore, in some (but not all) cases, there is evidence from experimental paradigms that support a causal role of analytic thinking in determining judgments, beliefs, and behaviors. The findings reviewed here provide some reason for optimism for the future: It may be possible to foster analytic thinking and therefore improve the quality of our decisions.

Evaluating the evidence: Does reason matter?

Thus far, I have prioritized explaining the various alternative frameworks. I will now turn to an in-depth review of some of the key relevant evidence that helps mediate between these accounts. I will organize this review around two key implications that emerge from the framework that I have proposed.

First, the primary difference between the three-stage model (and related dual-process models) and the social-intuitionist models (and related intuitionist models) is that the former argues that people should be able to overcome intuitive errors using deliberation whereas the latter argues that reason is generally infirm and therefore that intuitive errors will simply dominate. Thus, the reviewed research will investigate the apparent role of deliberation in driving people’s choices, beliefs, and behaviors.

Second, the primary difference between the three-stage model (and related dual-process models) and the identity-protective cognition model is that the latter argues that deliberation facilitates biased information processing whereas the former argues that deliberation generally facilitates accuracy. Thus, the reviewed research will also focus on whether deliberation is linked with inaccuracy in politically-charged or identity-relevant contexts.

Wednesday, August 3, 2022

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E.A. et al.
Nat Rev Psychol (2022). 
https://doi.org/10.1038/s44159-022-00081-9

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

Social implications

People who score higher in intellectual humility are more likely to display tolerance of opposing political and religious views, exhibit less hostility toward members of those opposing groups, and are more likely to resist derogating outgroup members as intellectually and morally bankrupt. Although intellectually humbler people are capable of intergroup prejudice, they are more willing to question themselves and to consider rival viewpoints. Indeed, people with greater intellectual humility display less myside bias, expose themselves to opposing perspectives more often and show greater openness to befriending outgroup members on social media platforms. By comparison, people with lower intellectual humility display features of cognitive rigidity and are more likely to hold inflexible opinions and beliefs.

Thursday, February 17, 2022

Filling the gaps: Cognitive control as a critical lens for understanding mechanisms of value-based decision-making.

Frömer, R., & Shenhav, A. (2021, May 17). 
https://doi.org/10.31234/osf.io/dnvrj

Abstract

While often seeming to investigate rather different problems, research into value-based decision making and cognitive control has historically offered parallel insights into how people select thoughts and actions. While the former studies how people weigh costs and benefits to make a decision, the latter studies how they adjust information processing to achieve their goals. Recent work has highlighted ways in which decision-making research can inform our understanding of cognitive control. Here, we provide the complementary perspective: how cognitive control research has informed understanding of decision-making. We highlight three particular areas of research where this critical interchange has occurred: (1) how different types of goals shape the evaluation of choice options, (2) how people use control to adjust how they make their decisions, and (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales. We show how adopting this alternate viewpoint offers new insight into the determinants of both decisions and control; provides alternative interpretations for common neuroeconomic findings; and generates fruitful directions for future research.

Highlights

•  We review how taking a cognitive control perspective provides novel insights into the mechanisms of value-based choice.

•  We highlight three areas of research where this critical interchange has occurred:

      (1) how different types of goals shape the evaluation of choice options,

      (2) how people use control to adjust how they make their decisions, and

      (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales.

From Exerting Control Beyond Our Current Choice

We have so far discussed choices the way they are typically studied: in isolation. However, we don’t make choices in a vacuum, and our current choices depend on previous choices we have made (Erev & Roth, 2014; Keung, Hagen, & Wilson, 2019; Talluri et al., 2020; Urai, Braun, & Donner, 2017; Urai, de Gee, Tsetsos, & Donner, 2019). One natural way in which choices influence each other is through learning about the options, where the evaluation of the outcome of one choice refines the expected value (incorporating range and probability) assigned to that option in future choices (Fontanesi, Gluth, et al., 2019; Fontanesi, Palminteri, et al., 2019; Miletic et al., 2021). Here we focus on a different, complementary way, central to cognitive control research, where evaluations of the process of ongoing and past choices inform the process of future choices (Botvinick et al., 1999; Bugg, Jacoby, & Chanani, 2011; Verguts, Vassena, & Silvetti, 2015). In cognitive control research, these choice evaluations and their influence on subsequent adaptation are studied under the umbrella of performance monitoring (Carter et al., 1998; Ullsperger, Fischer, Nigbur, & Endrass, 2014). Unlike option-based learning, performance monitoring influences not only which options are chosen, but also how subsequent choices are made. It also informs higher order decisions about strategy and task selection (Fig. 6A).

Saturday, September 25, 2021

The prefrontal cortex and (uniquely) human cooperation: a comparative perspective

Zoh, Y., Chang, S.W.C. & Crockett, M.J.
Neuropsychopharmacol. (2021). 

Abstract

Humans have an exceptional ability to cooperate relative to many other species. We review the neural mechanisms supporting human cooperation, focusing on the prefrontal cortex. One key feature of human social life is the prevalence of cooperative norms that guide social behavior and prescribe punishment for noncompliance. Taking a comparative approach, we consider shared and unique aspects of cooperative behaviors in humans relative to nonhuman primates, as well as divergences in brain structure that might support uniquely human aspects of cooperation. We highlight a medial prefrontal network common to nonhuman primates and humans supporting a foundational process in cooperative decision-making: valuing outcomes for oneself and others. This medial prefrontal network interacts with lateral prefrontal areas that are thought to represent cooperative norms and modulate value representations to guide behavior appropriate to the local social context. Finally, we propose that more recently evolved anterior regions of prefrontal cortex play a role in arbitrating between cooperative norms across social contexts, and suggest how future research might fruitfully examine the neural basis of norm arbitration.

Conclusion

The prefrontal cortex, in particular its more anterior regions, has expanded dramatically over the course of human evolution. In tandem, the scale and scope of human cooperation has dramatically outpaced its counterparts in nonhuman primate species, manifesting as complex systems of moral codes that guide normative behaviors even in the absence of punishment or repeated interactions. Here, we provided a selective review of the neural basis of human cooperation, taking a comparative approach to identify the brain systems and social behaviors that are thought to be unique to humans. Humans and nonhuman primates alike cooperate on the basis of kinship and reciprocity, but humans are unique in their abilities to represent shared goals and self-regulate to comply with and enforce cooperative norms on a broad scale. We highlight three prefrontal networks that contribute to cooperative behavior in humans: a medial prefrontal network, common to humans and nonhuman primates, that values outcomes for self and others; a lateral prefrontal network that guides cooperative goal pursuit by modulating value representations in the context of local norms; and an anterior prefrontal network that we propose serves uniquely human abilities to reflect on one’s own behavior, commit to shared social contracts, and arbitrate between cooperative norms across diverse social contexts. We suggest future avenues for investigating cooperative norm arbitration and how it is implemented in prefrontal networks.

Saturday, April 24, 2021

Bias Blind Spot: Structure, Measurement, and Consequences

Irene Scopelliti et al.
Management Science, 61(10), 2468-2486.

Abstract

People exhibit a bias blind spot: they are less likely to detect bias in themselves than in others. We report the development and validation of an instrument to measure individual differences in the propensity to exhibit the bias blind spot that is unidimensional, internally consistent, has high test-retest reliability, and is discriminated from measures of intelligence, decision-making ability, and personality traits related to self-esteem, self-enhancement, and self-presentation. The scale is predictive of the extent to which people judge their abilities to be better than average for easy tasks and worse than average for difficult tasks, ignore the advice of others, and are responsive to an intervention designed to mitigate a different judgmental bias. These results suggest that the bias blind spot is a distinct metabias resulting from naïve realism rather than other forms of egocentric cognition, and has unique effects on judgment and behavior.

Conclusion

We find that bias blind spot is a latent factor in self-assessments of relative vulnerability to bias. This meta-bias affected the majority of participants in our samples, but exhibited considerable variance across participants. We present a concise, reliable, and valid measure of individual differences in bias blind spot that has the ability to predict related biases in self-assessment, advice taking, and responsiveness to bias reduction training. Given the influence of bias blind spot on consequential judgments and decisions, as well as receptivity to training, this measure may prove useful across a broad range of domains such as personnel assessment, information analysis, negotiation, consumer decision making, and education.

Monday, December 28, 2020

Bias in bias recognition: People view others but not themselves as biased by preexisting beliefs and social stigmas

Wang Q, Jeon HJ (2020) 
PLoS ONE 15(10): e0240232. 
https://doi.org/10.1371/journal.pone.0240232

Abstract

Biases perpetuate when people think that they are innocent whereas others are guilty of biases. We examined whether people would detect biased thinking and behavior in others but not themselves as influenced by preexisting beliefs (myside bias) and social stigmas (social biases). The results of three large studies showed that, across demographic groups, participants attributed more biases to others than to themselves, and that this self-other asymmetry was particularly salient among those who hold strong beliefs about the existence of biases (Study 1 and Study 2). The self-other asymmetry in bias recognition dissipated when participants made simultaneous predictions about others’ and their own thoughts and behaviors (Study 3). People thus exhibit bias in bias recognition, and this metacognitive bias may be remedied when it is highlighted to people that we are all susceptible to biasing influences.

From the Discussion

Indeed, the current studies reveal the critical role of explicit beliefs about biases in underlying the biased reasoning concerning one’s own and others’ thoughts and behaviors: The more strongly people believed that biases widely existed, the more inclined they were to ascribe biases to others but not themselves. These findings suggest that the conviction that the world is generally biased and yet the self is the exception contributes to the self-other asymmetry in bias recognition. They further suggest important individual differences whereby some individuals more strongly believe that myside bias and social biases widely exist and yet convince themselves that “I’m not one of them” when making judgements about these biases in everyday situations. In comparison, individuals who held weaker beliefs about the biases attributed less bias overall and exhibited less self-other asymmetry in recognizing the biases. These findings thus provide valuable information for future focus-group interventions. They further suggest that when learning about bias, as occurs in most introductory psychology classes, students should be reminded that they are equally susceptible as others to biasing influences.

Wednesday, June 10, 2020

Metacognition in moral decisions: judgment extremity and feeling of rightness in moral intuitions

Solange Vega and others
Thinking & Reasoning

This research investigated the metacognitive underpinnings of moral judgment. Participants in two studies were asked to provide quick intuitive responses to moral dilemmas and to indicate their feeling of rightness about those responses. Afterwards, participants were given extra time to rethink their responses, and change them if they so wished. The feeling of rightness associated with the initial judgments was predictive of whether participants chose to change their responses and how long they spent rethinking them. Thus, one’s metacognitive experience upon first coming up with a moral judgment influences whether one sticks to that initial gut feeling or decides to put more thought into it and revise it. Moreover, while the type of moral judgment (i.e., deontological vs. utilitarian) was not consistently predictive of metacognitive experience, the extremity of that judgment was: Extreme judgments (either deontological or utilitarian) were quicker and felt more right than moderate judgments.

From the General Discussion

Also consistent with Bago and De Neys’ findings (2018), these results show that few people revise their responses from one type of moral judgment to the other (i.e., from deontological to utilitarian, or vice-versa). Still, many people do revise their responses, though these are subtler revisions of extremity within one type of response. These results speak against the traditional corrective model, whereby people tend to change from deontological intuitions to utilitarian deliberations in the course of making moral judgments. At the same time, they suggest a more nuanced perspective than what one might conclude from Bago and De Neys’ results that few people revise their responses. In sum, few people make revisions in the kind of response they give, but many do revise the degree to which they defend a certain moral position.

The research is here.

Thursday, May 14, 2020

Is justice blind or myopic? An examination of the effects of meta-cognitive myopia and truth bias on mock jurors and judges

M. Pantazi, O. Klein, & M. Kissine
Judgment and Decision Making, 
Vol. 15, No. 2, March 2020, pp. 214-229

Abstract

Previous studies have shown that people are truth-biased in that they tend to believe the information they receive, even if it is clearly flagged as false. The truth bias has been recently proposed to be an instance of meta-cognitive myopia, that is, of a generalized human insensitivity towards the quality and correctness of the information available in the environment. In two studies we tested whether meta-cognitive myopia and the ensuing truth bias may operate in a courtroom setting. Based on a well-established paradigm in the truth-bias literature, we asked mock jurors (Study 1) and professional judges (Study 2) to read two crime reports containing aggravating or mitigating information that was explicitly flagged as false. Our findings suggest that jurors and judges are truth-biased, as their decisions and memory about the cases were affected by the false information. We discuss the implications of the potential operation of the truth bias in the courtroom, in the light of the literature on inadmissible and discredible evidence, and make some policy suggestions.

From the Discussion:

Fortunately, the judiciary system is to some extent shielded from intrusions of illegitimate evidence, since objections are most often raised before a witness’s answer or piece of evidence is presented in court. Therefore, most of the time, inadmissible or false evidence is prevented from entering the fact-finders’ mental representations of a case in the first place. Nevertheless, objections can also be raised after a witness’s response has been given. Such objections may not actually protect the fact-finders from the information that has already been presented. An important question that remains open from a policy perspective is therefore how we are to safeguard the rules of evidence, given the fact-finders’ inability to take such meta-information into account.

The research is here.

Tuesday, July 30, 2019

Is belief superiority justified by superior knowledge?

Michael P. Hall & Kaitlin T. Raimi
Journal of Experimental Social Psychology
Volume 76, May 2018, Pages 290-306

Abstract

Individuals expressing belief superiority—the belief that one's views are superior to other viewpoints—perceive themselves as better informed about that topic, but no research has verified whether this perception is justified. The present research examined whether people expressing belief superiority on four political issues demonstrated superior knowledge or superior knowledge-seeking behavior. Despite perceiving themselves as more knowledgeable, knowledge assessments revealed that the belief superior exhibited the greatest gaps between their perceived and actual knowledge. When given the opportunity to pursue additional information in that domain, belief-superior individuals frequently favored agreeable over disagreeable information, but also indicated awareness of this bias. Lastly, experimentally manipulated feedback about one's knowledge had some success in affecting belief superiority and resulting information-seeking behavior. Specifically, when belief superiority is lowered, people attend to information they may have previously regarded as inferior. Implications of unjustified belief superiority and biased information pursuit for political discourse are discussed.

The research is here.

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018

Abstract

Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Download the pdf here.

Key Points: In order to learn and understand, you need to use all the data you have accumulated, not just the feedback on your most recent performance.  In this way, feedback, rather than hard evidence, increases a person's sense of certainty when learning new things, or how to tell right from wrong.
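
A toy sketch of this contrast (not the authors' model; names and numbers are made up): a learner whose reported certainty simply tracks recent feedback about their own performance will be highly certain after a streak of correct responses, even when the evidence observed so far still leaves the concept ambiguous.

    def certainty_from_feedback(feedback, window=4):
        # feedback: list of 0/1 correctness signals, most recent last.
        recent = feedback[-window:]
        return sum(recent) / len(recent) if recent else 0.5

    # After four correct answers in a row, reported certainty is maximal,
    # regardless of how diagnostic the observed evidence actually was.
    history = [0, 1, 1, 0, 1, 1, 1, 1]
    print(certainty_from_feedback(history))  # 1.0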

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.

Thursday, September 29, 2016

Priming Children’s Use of Intentions in Moral Judgement with Metacognitive Training

Gvozdic, Katarina and others
Frontiers in Psychology  
18 March 2016
http://dx.doi.org/10.3389/fpsyg.2016.00190

Abstract

Typically, adults give a primary role to the agent's intention to harm when performing a moral judgment of accidental harm. By contrast, children often focus on outcomes, underestimating the actor's mental states when judging someone for his action, and rely on what we suppose to be intuitive and emotional processes. The present study explored the processes involved in the development of the capacity to integrate agents' intentions into their moral judgment of accidental harm in 5 to 8-year-old children. This was done by the use of different metacognitive trainings reinforcing different abilities involved in moral judgments (mentalising abilities, executive abilities, or no reinforcement), similar to a paradigm previously used in the field of deductive logic. Children's moral judgments were gathered before and after the training with non-verbal cartoons depicting agents whose actions differed only based on their causal role or their intention to harm. We demonstrated that a metacognitive training could induce an important shift in children's moral abilities, showing that only children who were explicitly instructed to "not focus too much" on the consequences of accidental harm preferentially weighted the agents' intentions in their moral judgments. Our findings confirm that children between the ages of 5 and 8 are sensitive to the intention of agents; however, at that age, this ability is insufficient to give a "mature" moral judgment. Our experiment is the first that suggests the critical role of inhibitory resources in processing accidental harm.

The article is here.

Sunday, October 18, 2015

Haunts or Helps from the Past: Understanding the Effect of Recall on Current Self-Control

Hristina Nikolova, Cait Lamberton, and Kelly L. Haws
Journal of Consumer Psychology
Available online 30 June 2015

Scientific Abstract

Conventional wisdom suggests that remembering our past, and particularly, the mistakes we have made, will help us make better decisions in the present. But how successful is this practice in the domain of self-control? Our work examines how the content of consumers' recollections (past self-control successes versus failures) and the subjective difficulty with which this content comes to mind (easily or with difficulty) jointly shape consumers' self-control decisions. When successes are easy to recall, we find that people display more self-control than when they have difficulty recalling successes.  However, recalling failures prompts indulgence regardless of its difficulty. We suggest that these differences in behavior may exist because recalling failures has substantially different affective and cognitive consequences than does recalling successes. Consistent with this theory, we demonstrate that self-certainty moderates the effects of recall on self-control. Taken together, this work enhances our understanding of self-control, self-perceptions, and metacognition.

Layperson interpretation can be found here.

Professional article can be found here.

Friday, September 19, 2014

Using metacognitive cues to infer others’ thinking

André Mata and Tiago Almeida
Judgment and Decision Making 9.4 (Jul 2014): 349-359.

Abstract

Three studies tested whether people use cues about the way other people think--for example, whether others respond fast vs. slow--to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think of deliberative responses, they are aware that others might respond intuitively, particularly in circumstances that hinder deliberative thinking (e.g., fast responding). Intuitive responders, on the other hand, are less aware of alternative responses to theirs, so they infer that other people respond as they do, regardless of the way others respond.

The entire article is here.

This article is important when contemplating ethical decision-making.

Friday, May 16, 2014

How Chicago is using psychotherapy to fight crime — and winning

By Dylan Matthews
Vox
Originally published May 1, 2014

The basics

The program in question is called Becoming a Man (BAM), and was developed by the nonprofits Youth Guidance and World Sport Chicago for use in Chicago schools. BAM consists of weekly hour-long sessions with groups of no more than 15 high school boys (the average instructor-student ratio is 1 to 8). It's not therapy in the strictest of senses, but the overall approach is borrowed from cognitive-behavioral therapy (CBT), which has overtaken more Freudian approaches in recent decades among practitioners and has a large research base demonstrating its effectiveness:

CBT is all about teaching meta-cognition: thinking about thinking. In a pure therapy setting, that means teaching patients to identify thought patterns that contribute to depression, anxiety, and so forth, so that they can work to replace them with healthier patterns. For example, a common negative thought pattern is catastrophizing, or exaggerating the importance of a short-term negative event in a way that causes undue distress and overreaction; if you've ever gotten a small piece of negative feedback from your boss and within a few minutes started worrying that you're about to get fired, that's catastrophizing in action.

The entire article is here.

Friday, January 3, 2014

What is a mind and what is it good for?

By Damon Young
New Philosopher Magazine
Originally published December 18, 2013

“Philosophy,” wrote John Keats, “will clip an Angel’s wings.” This is the caricature: philosophers are brutal bastards, who cut beautiful things with logical clippers.

But philosophy is rarely malicious. Philosophy is often driven by something like love. From a comradely familiarity, to gentle romance, to manic lust, philosophers care about ideas. The ancient Greek word literally means this: “the love of wisdom”. It is a longing, not just for beautiful ideas, but for faithful ones: ideas that are true in some way.

This is why Alfred North Whitehead called philosophy the “critic of abstractions”: it seeks to test their fidelity.

The entire article is here.

Monday, November 11, 2013

Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality

By Robert Wright
The Atlantic
November 2013

The article is really a review of several books.  However, it is not a formal book review, but compares and contrasts efforts by those studying morality, psychology, and biology.  Here are some excerpts:


The well-documented human knack for bigotry, conflict, and atrocity must have something to do with the human mind, and relevant parts of the mind are indeed coming into focus—not just thanks to the revolution in brain scanning, or even advances in neuroscience more broadly, but also thanks to clever psychology experiments and a clearer understanding of the evolutionary forces that shaped human nature. Maybe we’re approaching a point where we can actually harness this knowledge, make radical progress in how we treat one another, and become a species worthy of the title Homo sapiens.

(cut)

...the impulses and inclinations that shape moral discourse are, by and large, legacies of natural selection, rooted in our genes. Specifically, many of them are with us today because they helped our ancestors realize the benefits of cooperation. As a result, people are pretty good at getting along with one another, and at supporting the basic ethical rules that keep societies humming.

(cut)

When you combine judgment that’s naturally biased with the belief that wrongdoers deserve to suffer, you wind up with situations like two people sharing the conviction that the other one deserves to suffer. Or two groups sharing that conviction. And the rest is history. Rwanda’s Hutus and Tutsis, thanks to their common humanity, shared the intuition that bad people should suffer; they just disagreed—thanks to their common humanity—about which group was bad.

The entire article is here.