Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Self-Report.

Saturday, September 16, 2023

A Metacognitive Blindspot in Intellectual Humility Measures

Costello, T. H., Newton, C., Lin, H., & Pennycook, G.
(2023, August 6).

Abstract

Intellectual humility (IH) is commonly defined as recognizing the limits of one’s knowledge and abilities. However, most research has relied entirely on self-report measures of IH, without testing whether these instruments capture the metacognitive core of the construct. Across two studies (Ns = 898; 914), using generalized additive mixed models to detect complex non-linear interactions, we evaluated the correspondence between widely used IH self-reports and performance on calibration and resolution paradigms designed to model the awareness of one’s mental capabilities (and their fallibility). On an overconfidence paradigm (N observations per model = 2,692–2,742), none of five IH measures attenuated the Dunning-Kruger effect, whereby poor performers overestimate their abilities and high performers underestimate them. On a confidence-accuracy paradigm (N observations per model = 7,223–12,706), most IH measures were associated with inflated confidence regardless of accuracy, or were specifically related to confidence when participants were correct but not when they were incorrect. The sole exception was the “Lack of Intellectual Overconfidence” subscale of the Comprehensive Intellectual Humility Scale, which uniquely predicted lower confidence for incorrect responses. Meanwhile, measures of Actively Open-minded Thinking reliably predicted calibration and resolution. These findings reveal substantial discrepancies between IH self-reports and metacognitive abilities, suggesting most IH measures lack validity. It may not be feasible to assess IH via self-report, as indicating a great deal of humility may itself be a sign of a failure in humility.

General Discussion

IH represents the ability to identify the constraints of one’s psychological, epistemic, and cultural perspective— to conduct lay phenomenology, acknowledging that the default human perspective is (literally) self-centered (Wallace, 2009) — and thereby cultivate an awareness of the limits of a single person, theory, or ideology to describe the vast and searingly complex universe. It is a process that presumably involves effortful and vigilant noticing – tallying one’s epistemic track record, and especially one’s fallibility (Ballantyne, 2021).

IH, therefore, manifests dynamically in individuals as a boundary between one’s informational environment and one’s model of reality. This portrait of IH-as-boundary appears repeatedly in philosophical and psychological treatments of IH, which frequently frame awareness of (epistemic) limitations as IH’s conceptual, metacognitive core (Leary et al., 2017; Porter, Elnakouri, et al., 2022). Yet as with a limit in mathematics, epistemic limits are appropriately defined as functions: their value is dependent on inputs (e.g., information environment, access to knowledge) that vary across contexts and individuals. Particularly, measuring IH requires identifying at least two quantities— one’s epistemic capabilities and one’s appraisal of said capabilities— from which a third, IH-qua-metacognition, can be derived as the distance between the two quantities.

Contemporary IH self-reports tend not to account for either parameter, seeming to rest instead on an auxiliary assumption: That people who are attuned to, and “own”, their epistemic limitations will generate characteristic, intellectually humble patterns of thinking and behavior. IH questionnaires then target these patterns, rather than the shared propensity for IH which the patterns ostensibly reflect.

We sought to both test and circumvent this assumption (and mono-method measurement limitation) in the present research. We did so by defining IH’s metacognitive core, functionally and statistically, in terms of calibration and resolution. We operationalized calibration as the convergence between participants’ performance on a series of epistemic tasks, on the one hand, and participants’ estimation of their own performance, on the other. Given that the relation between self-estimation and actual performance is non-linear (i.e., the Dunning-Kruger effect), there were several pathways by which IH might predict calibration: (1) decreased overestimation among low performers, (2) decreased underestimation among high performers, or (3) unilateral weakening of miscalibration among both low and high performers (for a visual representation, refer to Figure 1). Further, we operationalized epistemic resolution by assessing the relation between IH, on the one hand, and individuals’ item-by-item confidence judgments for correct versus incorrect answers, on the other. Thus, resolution represents the capacity to distinguish between one’s correct and incorrect judgments and beliefs (a seemingly necessary prerequisite for building an accurate and calibrated model of one’s knowledge).
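To make these two operationalizations concrete, here is a minimal sketch of how calibration and resolution can be computed. This is not the authors' code or data; the performance estimates and confidence ratings below are simulated purely for illustration, with self-estimates built to regress toward the mean (the Dunning-Kruger pattern) and confidence built to be only weakly diagnostic of accuracy.

```python
# Illustrative sketch (not the authors' code): computing calibration and
# resolution from simulated performance and confidence data.
import random

random.seed(0)

# --- Calibration: signed gap between self-estimate and actual performance ---
# Simulate 200 participants whose self-estimates regress toward the mean.
participants = []
for _ in range(200):
    accuracy = random.random()  # true proportion correct (0-1)
    estimate = 0.5 + 0.35 * (accuracy - 0.5) + random.gauss(0, 0.08)
    participants.append((accuracy, estimate))

participants.sort(key=lambda p: p[0])
quartile = len(participants) // 4
bottom = participants[:quartile]   # lowest performers
top = participants[-quartile:]     # highest performers

# Mean signed miscalibration (estimate - actual) per performance quartile:
# positive = overestimation, negative = underestimation.
miscal_bottom = sum(e - a for a, e in bottom) / len(bottom)
miscal_top = sum(e - a for a, e in top) / len(top)

# --- Resolution: higher confidence when correct than when incorrect ---
correct_conf, incorrect_conf = [], []
for _ in range(1000):  # simulated item-by-item confidence judgments
    correct = random.random() < 0.6
    conf = random.gauss(0.75 if correct else 0.65, 0.10)  # weak resolution
    (correct_conf if correct else incorrect_conf).append(conf)

resolution = (sum(correct_conf) / len(correct_conf)
              - sum(incorrect_conf) / len(incorrect_conf))

print(f"bottom-quartile miscalibration: {miscal_bottom:+.2f}")
print(f"top-quartile miscalibration:    {miscal_top:+.2f}")
print(f"resolution (confidence gap):    {resolution:+.2f}")
```

In this toy setup, low performers show positive miscalibration (overestimation) and high performers show negative miscalibration (underestimation), and resolution is the mean confidence gap between correct and incorrect responses; the paper's question is whether IH self-reports predict either quantity.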

Wednesday, October 9, 2019

Moral and religious convictions: Are they the same or different things?

Skitka LJ, Hanson BE, Washburn AN, Mueller AB (2018)
PLoS ONE 13(6): e0199311.
https://doi.org/10.1371/journal.pone.0199311

Abstract

People often assume that moral and religious convictions are functionally the same thing. But are they? We report on 19 studies (N = 12,284) that tested whether people’s perceptions that their attitudes are reflections of their moral and religious convictions across 30 different issues were functionally the same (the equivalence hypothesis) or different constructs (the distinct constructs hypothesis), and whether the relationship between these constructs was conditional on political orientation (the political asymmetry hypothesis). Seven of these studies (N = 5,561, and 22 issues) also had data that allowed us to test whether moral and religious conviction are only closely related for those who are more rather than less religious (the secularization hypothesis), and a narrower form of the political asymmetry and secularization hypotheses, that is, that people’s moral and religious convictions may be tightly connected constructs only for religious conservatives. Meta-analytic tests of each of these hypotheses yielded weak support for the secularization hypothesis, no support for the equivalence or political asymmetry hypotheses, and the strongest support for the distinct constructs hypothesis.

From the Discussion

People’s lay theories often confound these constructs: if something is perceived as religious, it will also be perceived as moral (and vice versa). Contrary to both people’s lay theories and various scholarly theories of religion, however, we found that the degree to which people perceive a given attitude as a moral or religious conviction is largely orthogonal, with the two sharing, on average, only 14% common variance.

(cut)

Religious and moral conviction were more strongly related to each other among the religious than the non-religious for 59% of the issues we examined, a finding consistent with the secularization hypothesis. That said, the effect size in support of the secularization hypothesis was very small; the interaction of religiosity and religious conviction only explained a little more than 1% of the variance in moral conviction overall. Taken together, the overwhelming evidence therefore seems most consistent with the distinct constructs hypothesis: Moral and religious convictions are largely independent constructs.
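As a quick arithmetic check (an illustration, not part of the paper): if shared variance is read as the squared correlation, the reported 14% common variance corresponds to a correlation of roughly r = 0.37 between moral and religious conviction.

```python
# Converting the reported 14% shared variance to a correlation,
# assuming shared variance = r squared (illustration, not the authors' code).
shared_variance = 0.14
r = shared_variance ** 0.5
print(round(r, 2))  # -> 0.37
```

A correlation of about 0.37 is far from zero but also far from unity, which is why the data fit the distinct-constructs hypothesis better than the equivalence hypothesis.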

Wednesday, April 19, 2017

Should healthcare professionals breach confidentiality when a patient is unfit to drive?

Daniel Sokol
The British Medical Journal
2017;356:j1505

Here are two excerpts:

The General Medical Council (GMC) has guidance on reporting concerns to the Driver and Vehicle Licensing Agency (DVLA). Doctors should explain to patients deemed unfit to drive that their condition may affect their ability to drive and that they—the patients—have a legal obligation to inform the DVLA about their condition.

(cut)

The trouble with this approach is that it relies on patients’ honesty. As far back as Hippocratic times, doctors were instructed to look out for the lies of patients. Two and a half thousand years later the advice still holds true. In a 1994 study of 754 adult patients, Burgoon and colleagues found that 85% admitted to concealing information from their doctors, and over a third said that they had lied outright. Many patients will lie to avoid the loss of their driving licence. They will falsely promise to inform the DVLA and to stop driving. And the chances of the doctor discovering that the patient is continuing to drive are slim.

The article is here.

Tuesday, July 26, 2016

How Large Is the Role of Emotion in Judgments of Moral Dilemmas?

Horne Z, Powell D (2016)
PLoS ONE 11(7): e0154780.
doi: 10.1371/journal.pone.0154780

Abstract

Moral dilemmas often pose dramatic and gut-wrenching emotional choices. It is now widely accepted that emotions are not simply experienced alongside people’s judgments about moral dilemmas, but that our affective processes play a central role in determining those judgments. However, much of the evidence purporting to demonstrate the connection between people’s emotional responses and their judgments about moral dilemmas has recently been called into question. In the present studies, we reexamined the role of emotion in people’s judgments about moral dilemmas using a validated self-report measure of emotion. We measured participants’ specific emotional responses to moral dilemmas and, although we found that moral dilemmas evoked strong emotional responses, we found that these responses were only weakly correlated with participants’ moral judgments. We argue that the purportedly strong connection between emotion and judgments of moral dilemmas may have been overestimated.

The article is here.

Friday, September 26, 2014

Did We Interpret the Milgram Study Incorrectly?

Famous Milgram 'electric shocks' experiment drew wrong conclusions about evil, say psychologists

By Adam Sherwin
The Independent
Originally published September 5, 2014

Here are two excerpts:

Now psychologists have found that the study, which showed how ordinary people will inflict extraordinary harm upon others, if someone in authority gives the orders, may have been completely misunderstood.

Instead of a latent capacity for evil, we just want to feel good about ourselves. And it is Professor Stanley Milgram’s skill as a “dramatist” which led us to believe otherwise.

(cut)

Far from being distressed by the experience, the researchers found that most volunteers said they were very happy to have participated.

Professor Haslam said: “It appears from this feedback that the main reason participants weren’t distressed is that they did not think they had done anything wrong.  This was largely due to Milgram’s ability to convince them that they had made an important contribution to science.”

The entire article is here.

Wednesday, June 18, 2014

What Are the Implications of the Free Will Debate for Individuals and Society?

By Alfred Mele
Big Questions Online
Originally posted May 6, 2014

Does free will exist? Current interest in that question is fueled by news reports suggesting that neuroscientists have proved it doesn’t. In the last few years, I’ve been on a mission to explain why scientific discoveries haven’t closed the door on free will. To readers interested in a rigorous explanation, I recommend my 2009 book, Effective Intentions. For a quicker read, you might wait for my Free: Why Science Hasn’t Disproved Free Will, to be published this fall.

One major plank in a well-known neuroscientific argument for the nonexistence of free will is the claim that participants in various experiments make their decisions unconsciously. In some studies, this claim is based partly on EEG readings (electrical readings taken from the scalp). In others, fMRI data (about changes in blood oxygen levels in the brain) are used instead. In yet others, with people whose skulls are open for medical purposes, readings are taken directly from the brain. The other part of the evidence comes from participants’ reports on when they first became aware of their decisions. If the reports are accurate (which is disputed), the typical sequence of events is as follows: first, there is the brain activity the scientists focus on, then the participants become aware of decisions (or intentions or urges) to act, and then they act, flexing a wrist or pushing a button, for example.

The entire article is here.

Friday, May 2, 2014

Q&A: Why 40% of us think we're in the top 5%

By Christie Nicholson
www.smartplanet.com
Originally published April 15, 2014

Here are two excerpts:

Since then Dunning has performed many studies on incompetence. And he has uncovered something particularly disturbing: We humans are terrible at self-assessment, often grading ourselves as far more intelligent and capable than we actually are. This widespread inability can lead to negative consequences for management and for recognizing genius.

(cut)

Giving feedback, especially in the workplace, is a very touchy situation, and companies make reviews more touchy by directly connecting them to things like pay raises. There are two reasons people may not be receptive to feedback: one is that it’s going to come as a complete surprise to them, because they probably don’t know what their weaknesses are; the second is that it’s just a natural human tendency to be defensive.

So, you have to work around that. There are three different things you can do as a manager. The first thing is if you are going to give feedback make sure that it’s about a person’s behavior or their actions. Do not make it about their character or their ability.

The entire article is here.

Wednesday, July 3, 2013

Why Don’t Cops Believe Rape Victims?

Brain science helps explain the problem—and solve it.

By Rebecca Ruiz
Slate.com
Originally posted June 19, 2013

Here are some excerpts:

This is rape culture in action. It puts the burden of proving innocence on the victim, and from Steubenville, Ohio, to Notre Dame and beyond, we’ve seen it poison cases and destroy lives. But science is telling us that our suspicions of victims, the ones that seem like common sense, are flat-out baseless. A number of recent studies on neurobiology and trauma show that the ways in which the brain processes harrowing events accounts for victim behavior that often confounds cops, prosecutors, and juries.

These findings have led to a fundamental shift in the way experts who grasp the new science view the investigation of rape cases—and led them to a better method for interviewing victims. The problem is that the country’s 18,000 law enforcement agencies haven’t been converted. Or at least, most aren’t yet receiving the training to improve their own interview procedures. The exception, it turns out, is the military. Despite its many failings in sexual assault cases, it has actually been at the vanguard of translating the new research into practical tools for investigating rape.

(cut)

This is why, experts say, sexual assault victims often can’t give a linear account of an attack and instead focus on visceral sensory details like the smell of cologne or the sound of voices in the hallway. “That’s simply because their brain has encoded it in this fragmented way,” says David Lisak, a clinical psychologist and forensic consultant who trains civilian and military law enforcement to understand victim and offender behavior.

The entire story is here.