Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, August 6, 2022

A General Model of Cognitive Bias in Human Judgment and Systematic Review Specific to Forensic Mental Health

Neal, T. M. S., Lienert, P., Denne, E., & Singh, J. P. (2022).
Law and Human Behavior, 46(2), 99–120.
https://doi.org/10.1037/lhb0000482

Abstract

Cognitive biases can impact experts’ judgments and decisions. We offer a broad descriptive model of how bias affects human judgment. Although studies have explored the role of cognitive biases and debiasing techniques in forensic mental health, we conducted the first systematic review to identify, evaluate, and summarize the findings.

Hypotheses
Given the exploratory nature of this review, we did not test formal hypotheses. General research questions included the proportion of studies focusing on cognitive biases and/or debiasing, the research methods applied, the cognitive biases and debiasing strategies empirically studied in the forensic context, their effects on forensic mental health decisions, and effect sizes.

Public Significance Statement

Evidence of bias in forensic mental health emerged in ways consistent with what we know about human judgment broadly. We know less about how to debias judgments—an important frontier for future research. Better understanding how bias works and developing effective debiasing strategies tailored to the forensic mental health context hold promise for improving quality. Until then, we can use what we know now to limit bias in our work.

From the Discussion section

Is Bias a Problem for the Field of Forensic Mental Health?

Our interpretation of the judgment and decision-making literature more broadly, as well as the results from this systematic review conducted in this specific context, is that bias is an issue that deserves attention in forensic mental health—with some nuance. The overall assertion that bias is worthy of concern in forensic mental health rests both on the broader and the more specific literatures we reference here.

The broader literature is robust, revealing that well-studied biases affect human judgment and social cognition (e.g., Gilovich et al., 2002; Kahneman, 2011; see Figure 1). Although the field is robust in terms of individual studies demonstrating cognitive biases, decision science needs a credible, scientific organization of the various types of cognitive biases that have proliferated to better situate and organize the field. Even in the apparent absence of such an organizational structure, it is clear that biases influence consequential judgments not just for laypeople but for experts too, such as pilots (e.g., Walmsley & Gilbey, 2016), intelligence analysts (e.g., Reyna et al., 2014), doctors (e.g., Drew et al., 2013), and judges and lawyers (e.g., Englich et al., 2006; Girvan et al., 2015; Rachlinski et al., 2009). Given that forensic mental health experts are human, as are these other experts who demonstrate typical biases by virtue of being human, there is no reason to believe that forensic experts have automatic special protection against bias by virtue of their expertise.

Wednesday, June 1, 2022

The ConTraSt database for analysing and comparing empirical studies of consciousness theories

Yaron, I., Melloni, L., Pitts, M. et al.
Nat Hum Behav (2022).
https://doi.org/10.1038/s41562-021-01284-5

Abstract

Understanding how consciousness arises from neural activity remains one of the biggest challenges for neuroscience. Numerous theories have been proposed in recent years, each gaining independent empirical support. Currently, there is no comprehensive, quantitative and theory-neutral overview of the field that enables an evaluation of how theoretical frameworks interact with empirical research. We provide a bird’s eye view of studies that interpreted their findings in light of at least one of four leading neuroscientific theories of consciousness (N = 412 experiments), asking how methodological choices of the researchers might affect the final conclusions. We found that supporting a specific theory can be predicted solely from methodological choices, irrespective of findings. Furthermore, most studies interpret their findings post hoc, rather than a priori testing critical predictions of the theories. Our results highlight challenges for the field and provide researchers with an open-access website (https://ContrastDB.tau.ac.il) to further analyse trends in the neuroscience of consciousness.

Discussion

Several key conclusions can be drawn from our analyses of these 412 experiments. First, the field seems highly skewed towards confirmatory, as opposed to disconfirmatory, evidence, which might explain the failure to exclude theories and converge on an accepted, or at least widely favored, account. This effect is relatively stable over time. Second, theory-driven studies, aimed at testing the predictions of the theories, are rather scarce, and even rarer are studies testing more than one theory, or pitting theories against each other: only 7% of the experiments directly compared two or more theories’ predictions. Though an increasing number of experiments have tested predictions a priori in recent years, a large number of studies continue to interpret their findings post hoc in light of the theories. Third, a close relation was found between the methodological choices made by researchers and the theoretical interpretations of their findings. That is, based only on some methodological choices (e.g., using report vs. no-report paradigms, or studying content vs. state consciousness), we could predict whether the experiment would end up supporting each of the theories.


Editor's note: Consistent with other forms of confirmation bias, the design of the experiment largely determines its result. Consciousness remains a mystery, and in the eye of the scientific beholder.

Saturday, February 2, 2019

A systematic review of therapist effects: A critical narrative update and refinement to Baldwin and Imel's (2013) review

Johns, R. G., Barkham, M., Kellett, S., & Saxon, D. (2019).
Clinical Psychology Review, 67, 78–93.

Abstract

Objective
To review the therapist effects literature since Baldwin and Imel's (2013) review.

Method
Systematic literature review of three databases (PsycINFO, PubMed, and Web of Science) replicating Baldwin and Imel's (2013) search terms. Weighted averages of therapist effects (TEs) were calculated, and a critical narrative review of the included studies was conducted.

Results
Twenty studies met inclusion criteria (3 RCTs; 17 practice-based), with 19 studies using multilevel modeling. TEs were found in 19 studies. The TE range across all studies was 0.2%–29% (weighted average = 5%); for RCTs, 1%–29% (weighted average = 8.2%); for practice-based studies, 0.2%–21% (weighted average = 5%). The university counseling subsample yielded a lower TE (2.4%) than the other groupings (i.e., primary care, mixed clinical settings, and specialist/focused settings). Therapist sample sizes remained lower than recommended, and few studies appeared to be designed specifically as TE studies, with too few examples of maximising the research potential of large routine patient datasets.
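To make the "weighted average" statistic concrete: a sketch of how a sample-size-weighted mean TE could be computed. The study values below are invented for illustration only; they are not data from the review, and the review's actual weighting scheme may differ.

```python
# Illustrative only: sample-size-weighted mean therapist effect (TE).
# Each entry is (TE in percent, number of patients); values are hypothetical.

def weighted_average_te(studies):
    """Return the sample-size-weighted mean TE (%) across studies."""
    total_n = sum(n for _, n in studies)
    return sum(te * n for te, n in studies) / total_n

# Hypothetical practice-based studies spanning a 0.2%-21% range
practice_based = [(0.2, 1200), (5.0, 800), (21.0, 150)]
print(round(weighted_average_te(practice_based), 2))  # -> 3.44
```

Weighting by sample size illustrates why a wide raw range (here 0.2%–21%) can still yield a modest weighted average: the largest studies pull the mean toward their own estimates.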

Conclusions
Therapist effects are a robust phenomenon although considerable heterogeneity exists across studies. Patient severity appeared related to TE size. TEs from RCTs were highly variable. Using an overall therapist effects statistic may lack precision, and TEs might be better reported separately for specific clinical settings.