Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, February 28, 2020

Slow response times undermine trust in algorithmic (but not human) predictions

E. Efendic, P. van de Calseyde, & A. Evans
PsyArXiv PrePrints
Last edited 22 Jan 2020

Abstract

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and they are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy and effort is therefore uncorrelated to the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.

General discussion 

When are people reluctant to trust algorithm-generated advice? Here, we demonstrate that it depends on the algorithm’s response time. People judged slowly (vs. quickly) generated predictions by algorithms as being of lower quality. Further, people were less willing to use slowly generated algorithmic predictions. For human predictions, we found the opposite: people judged slow human-generated predictions as being of higher quality. Similarly, they were more likely to use slowly generated human predictions. 

We find that the asymmetric effects of response time can be explained by different expectations of task difficulty for humans vs. algorithms. For humans, slower responses were congruent with expectations; the prediction task was presumably difficult so slower responses, and more effort, led people to conclude that the predictions were high quality. For algorithms, slower responses were incongruent with expectations; the prediction task was presumably easy so slower speeds, and more effort, were unrelated to prediction quality. 

The research is here.

Friday, February 14, 2020

Judgment and Decision Making

Baruch Fischhoff and Stephen B. Broomell
Annual Review of Psychology 
2020 71:1, 331-355

Abstract

The science of judgment and decision making involves three interrelated forms of research: analysis of the decisions people face, description of their natural responses, and interventions meant to help them do better. After briefly introducing the field's intellectual foundations, we review recent basic research into the three core elements of decision making: judgment, or how people predict the outcomes that will follow possible choices; preference, or how people weigh those outcomes; and choice, or how people combine judgments and preferences to reach a decision. We then review research into two potential sources of behavioral heterogeneity: individual differences in decision-making competence and developmental changes across the life span. Next, we illustrate applications intended to improve individual and organizational decision making in health, public policy, intelligence analysis, and risk management. We emphasize the potential value of coupling analytical and behavioral research and having basic and applied research inform one another.

The paper can be downloaded here.

Sunday, February 2, 2020

Empirical Work in Moral Psychology

 Joshua May
Routledge Encyclopedia of Philosophy

How do we form our moral judgments, and how do they influence behavior? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning, and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as a tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.

The paper can be downloaded here.

Thursday, January 16, 2020

Inaccurate group meta-perceptions drive negative out-group attributions in competitive contexts

Lees, J., Cikara, M.
Nat Hum Behav (2019)

Abstract

Across seven experiments and one survey (N=4282) people consistently overestimated out-group negativity towards the collective behavior of their in-group. This negativity bias in group meta-perception was present across multiple competitive (but not cooperative) intergroup contexts, and appears to be yoked to group psychology more generally; we observed negativity bias for estimation of out-group, anonymized-group, and even fellow in-group members’ perceptions. Importantly, in the context of American politics greater inaccuracy was associated with increased belief that the out-group is motivated by purposeful obstructionism. However, an intervention that informed participants of the inaccuracy of their beliefs reduced negative out-group attributions, and was more effective for those whose group meta-perceptions were more inaccurate. In sum, we highlight a pernicious bias in social judgments of how we believe ‘they’ see ‘our’ behavior, demonstrate how such inaccurate beliefs can exacerbate intergroup conflict, and provide an avenue for reducing the negative effects of inaccuracy.

From the Discussion

Our findings highlight a consistent, pernicious inaccuracy in social perception, along with how these inaccurate perceptions relate to negative attributions towards out-groups. More broadly, inaccurate and overly negative GMPs exist across multiple competitive intergroup contexts, and we find no evidence they differ across the political spectrum. This suggests that there may be many domains of intergroup interaction where inaccurate GMPs could potentially diminish the likelihood of cooperation and instead exacerbate the possibility of conflict. However, our findings also highlight a straightforward manner in which simply informing individuals of their inaccurate beliefs can reduce these negative attributions.

A version of the research can be downloaded here.

Friday, January 10, 2020

Ethically Adrift: How Others Pull Our Moral Compass from True North, and How We Can Fix It

Moore, C., and F. Gino.
Research in Organizational Behavior 
33 (2013): 53–77.

Abstract

This chapter is about the social nature of morality. Using the metaphor of the moral compass to describe individuals' inner sense of right and wrong, we offer a framework to help us understand social reasons why our moral compass can come under others' control, leading even good people to cross ethical boundaries. Departing from prior work focusing on the role of individuals' cognitive limitations in explaining unethical behavior, we focus on the socio-psychological processes that function as triggers of moral neglect, moral justification and immoral action, and their impact on moral behavior. In addition, our framework discusses organizational factors that exacerbate the detrimental effects of each trigger. We conclude by discussing implications and recommendations for organizational scholars to take a more integrative approach to developing and evaluating theory about unethical behavior.

From the Summary

Even when individuals are aware of the ethical dimensions of the choices they are making, they may still engage in unethical behavior as long as they recruit justifications for it. In this section, we discussed the role of two social–psychological processes – social comparison and self-verification – that facilitate moral justification, which can lead to immoral behavior. We also discussed three characteristics of organizational life that amplify these social–psychological processes. Specifically, we discussed how organizational identification, group loyalty, and framing or euphemistic language can all affect the likelihood and extent to which individuals justify their actions, by judging them as ethical when in fact they are morally contentious. Finally, we discussed moral disengagement, moral hypocrisy, and moral licensing as intrapersonal consequences of these social facilitators of moral justification.

The paper can be downloaded here.

Saturday, October 19, 2019

Forensic Clinicians’ Understanding of Bias

Tess Neal, Nina MacLean, Robert D. Morgan, and Daniel C. Murrie
Psychology, Public Policy, and Law
Sep 16, 2019, No Pagination Specified

Abstract:

Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly-selected licensed psychologists with forensic interests to examine a) their familiarity with and understanding of cognitive biases, b) their self-reported strategies to mitigate bias, and c) the relation of a and b to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.

Here is the conclusion:

These findings (along with Neal & Brodsky's, 2016) suggest that forensic clinicians need additional training not only to recognize biases but also to begin to effectively mitigate harm from biases. For example, in predoctoral (e.g., internship) and postdoctoral (e.g., fellowship) training, didactics could address what bias is, how to recognize it, and strategies for minimizing it. Additionally, supervisors could make identifying and reducing bias a regular part of supervision (e.g., by including it in case conceptualization). However, further research is needed to determine the types of training and workflow strategies that best reduce bias. Future studies should focus on experimentally examining the presence of biases and ways to mitigate their effects in forensic evaluations.

The research is here.

Thursday, September 12, 2019

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take and it may also consider commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.

However, it would then be unclear how specifying our individual values and their operationalisation improves our moral decision making, rather than merely helping individuals to satisfy their preferences more consistently.

The info is here.

Thursday, June 20, 2019

Moral Judgment Toward Relationship Betrayals and Those Who Commit Them

Dylan Selterman, Amy Moors, & Sena Koleva
PsyArXiv
Created on January 18, 2019

Abstract

In three experimental studies (total N = 1,056), we examined moral judgments toward relationship betrayals, and how these judgments depended on whether characters and their actions were perceived to be pure and loyal compared to the level of harm caused. In Studies 1 and 2 the focus was confessing a betrayal, while in Study 3 the focus was on the act of sexual infidelity. Perceptions of harm/care were inconsistently and less strongly associated with moral judgment toward the behavior or the character, relative to perceptions of purity and loyalty, which emerged as key predictors of moral judgment across all studies. Our findings demonstrate that a diversity of cognitive factors play a key role in moral perception of relationship betrayals.

Here is part of the Discussion:

Some researchers have argued that perception of a harmed victim is the cognitive prototype by which people conceptualize immoral behavior (Gray et al., 2014). This perspective explains many phenomena within moral psychology. However, other psychological templates may apply to sexual and relational behavior, and purity and loyalty play a key role in explaining how people arrive at moral judgments toward sexual and relational violations. In conclusion, the current research adds to ongoing and fruitful research regarding the underlying psychological mechanisms involved in moral judgment. Importantly, the current studies extend our knowledge of moral judgment into specific close-relationship and sexual contexts that many people experience.

The research is here.

Thursday, May 30, 2019

Confronting bias in judging: A framework for addressing psychological biases in decision making

Tom Stafford, Jules Holroyd, & Robin Scaife
PsyArXiv
Last edited on December 24, 2018

Abstract

Cognitive biases are systematic tendencies of thought which undermine accurate or fair reasoning. An allied concept is 'implicit bias': biases directed at members of particular social identities, which may manifest without the individual's endorsement or awareness. This article reviews the literature on cognitive bias, broadly conceived, and makes proposals for how judges might usefully think about avoiding bias in their decision making. Contra some portrayals of cognitive bias as 'unconscious' or unknowable, we contend that things can be known about our psychological biases, and steps taken to address them. We argue for the benefits of a unified treatment of cognitive and implicit biases and propose a "3 by 3" framework which can be used by individuals and institutions to review their practice with respect to addressing bias. We emphasise that addressing bias requires an ongoing commitment to monitoring, evaluation and review rather than one-off interventions.

The research is here.

Wednesday, May 29, 2019

Why Do We Need Wisdom To Lead In The Future?

Sesil Pir
Forbes.com
Originally posted May 19, 2019

Here is an excerpt:

We live in a society that encourages us to think about how to have a great career but leaves us inarticulate about how to cultivate the inner life. The road to success is paved with competition, so fiercely that it becomes all-consuming for many of us. It is commonly accepted today that information is the key source of all being; yet information alone doesn't endow one with knowledge, just as knowledge alone doesn't lead to righteous action. In the age of artificial intelligence, we need to look beyond data to drive purposeful progress and authentic illumination.

Wisdom in the context of leadership refers to the quality of having good, sound judgment. It is a source that sheds light on our own insight and introduces a new appreciation for the world around us. It helps us recognize that others are more than our limiting impressions of them. It fills us with confidence that we are connected and more capable than we could ever dream of.

The people with this quality tend to lead from a place of strong internal cohesion. They have overcome fragmentation to reach a level of integration, which supports the way they show up – tranquil, settled and rooted. These people tend to withstand the hard winds of volatility and do not easily crumble in the face of adversity. They ground their thoughts, emotions and behaviors in values that feed their self-efficacy, and they understand at heart that perfectionism is an unattainable goal.

The info is here.

Sunday, March 10, 2019

Rethinking Medical Ethics

Insights Team
Forbes.com
Originally posted February 11, 2019

Here is an excerpt:

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.

Avoiding Bias

In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for the patient. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.

In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.

The info is here.

Saturday, January 26, 2019

People use less information than they think to make up their minds

Nadav Klein and Ed O’Brien
PNAS December 26, 2018 115 (52) 13222-13227

Abstract

A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.

Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.

Saturday, January 12, 2019

Monitoring Moral Virtue: When the Moral Transgressions of In-Group Members Are Judged More Severely

Karim Bettache, Takeshi Hamamura, J.A. Idrissi, R.G.J. Amenyogbo, & C. Chiu
Journal of Cross-Cultural Psychology
First Published December 5, 2018
https://doi.org/10.1177/0022022118814687

Abstract

Literature indicates that people tend to judge the moral transgressions committed by out-group members more severely than those of in-group members. However, these transgressions often conflate a moral transgression with some form of intergroup harm. There is little research examining in-group versus out-group transgressions of harmless offenses, which violate moral standards that bind people together (binding foundations). As these moral standards center around group cohesiveness, a transgression committed by an in-group member may be judged more severely. The current research presented Dutch Muslims (Study 1), American Christians (Study 2), and Indian Hindus (Study 3) with a set of fictitious stories depicting harmless and harmful moral transgressions. Consistent with our expectations, participants who strongly identified with their religious community judged harmless moral offenses committed by in-group members, relative to out-group members, more severely. In contrast, this effect was absent when participants judged harmful moral transgressions. We discuss the implications of these results.

Friday, January 4, 2019

The Objectivity Illusion in Medical Practice

Donald Redelmeier & Lee Ross
The Association for Psychological Science
Published November 2018

Insights into pitfalls in judgment and decision-making are essential for the practice of medicine. However, only the most exceptional physicians recognize their own personal biases and blind spots. More typically, they are like most humans in believing that they see objects, events, or issues “as they really are” and, accordingly, that others who see things differently are mistaken. This illusion of personal objectivity reflects the implicit conviction of a one-to-one correspondence between the perceived properties and the real nature of an object or event. For patients, such naïve realism means a world of red apples, loud sounds, and solid chairs. For practitioners, it means a world of red rashes, loud murmurs, and solid lymph nodes. However, a lymph node that feels normal to one physician may seem suspiciously enlarged and hard to another physician, with a resulting disagreement about the indications for a lymph node biopsy. A research study supporting a new drug or procedure may seem similarly convincing to one physician but flawed to another.

Convictions about whose perceptions are more closely attuned to reality can be a source of endless interpersonal friction. Spouses, for example, may disagree about appropriate thermostat settings, with one perceiving the room as too cold while the other finds the temperature just right. Moreover, each attributes the other’s perceptions to some pathology or idiosyncrasy.

The info is here.

Friday, December 28, 2018

The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm

Chelsea Schein & Kurt Gray
Personality and Social Psychology Review
Volume: 22 issue: 1, page(s): 32-70
Article first published online: May 14, 2017; Issue published: February 1, 2018

Abstract

The nature of harm—and therefore moral judgment—may be misunderstood. Rather than an objective matter of reason, we argue that harm should be redefined as an intuitively perceived continuum. This redefinition provides a new understanding of moral content and mechanism—the constructionist Theory of Dyadic Morality (TDM). TDM suggests that acts are condemned proportional to three elements: norm violations, negative affect, and—importantly—perceived harm. This harm is dyadic, involving an intentional agent causing damage to a vulnerable patient (A→P). TDM predicts causal links both from harm to immorality (dyadic comparison) and from immorality to harm (dyadic completion). Together, these two processes make the “dyadic loop,” explaining moral acquisition and polarization. TDM argues against intuitive harmless wrongs and modular “foundations,” but embraces moral pluralism through varieties of values and the flexibility of perceived harm. Dyadic morality impacts understandings of moral character, moral emotion, and political/cultural differences, and provides research guidelines for moral psychology.

The review is here.

Friday, November 30, 2018

The Knobe Effect From the Perspective of Normative Orders

Andrzej Waleszczyński, Michał Obidziński, & Julia Rejewska
Studia Humana, Volume 7:4 (2018), pp. 9–15

Abstract:

The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable model of human cognition. This article looks at whether the way of thinking about and analysing one scenario may affect the other, and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies reported here are based on a distinction between moral and non-moral normativity that may affect the judgments passed in both scenarios. An essential role in judgments about the intentionality of causing a side effect could therefore be played by normative competences responsible for distinguishing between normative orders.

The research is here.

Friday, November 2, 2018

Companies Tout Psychiatric Pharmacogenomic Testing, But Is It Ready for a Store Near You?

Jennifer Abbasi
JAMA Network
Originally posted October 3, 2018

Here is an excerpt:

According to Dan Dowd, PharmD, vice president of medical affairs at Genomind, pharmacists in participating stores can inform customers about the Genecept Assay if they notice a history of psychotropic drug switching or drug-related adverse effects. If the test is administered, a physician’s order is required for the company’s laboratory to process it.

“This certainly is a recipe for selling a whole lot more tests,” Potash said of the approach, adding that patients often feel “desperate” to find a successful treatment. “What percentage of the time selling these tests will result in better patient outcomes remains to be seen.”

Biernacka also had reservations about the in-store model. “Generally, it could be helpful for a pharmacist to tell a patient or their provider that perhaps the patient could benefit from pharmacogenetic testing,” she said. “[B]ut until the tests are more thoroughly assessed, the decision to pursue such an option (and with which test) should be left more to the treating clinician and patient.”

Some physicians said they’ve found pharmacogenomic testing to be useful. Aron Fast, MD, a family physician in Hesston, Kansas, uses GeneSight for patients with depression or anxiety who haven’t improved after trying 2 or 3 antidepressants. Each time, he said, his patients were less depressed or anxious after switching to a new drug based on their genotyping results.

Part of their improvements may stem from expecting the test to help, he acknowledged. The testing “raises confidence in the medication to be prescribed,” Müller explained, which might contribute to a placebo effect. However, Müller emphasized that the placebo effect alone is unlikely to explain lasting improvements in patients with moderate to severe depression. In his psychiatric consulting practice, pharmacogenomic-guided drug changes have led to improvements in patients “sometimes even up to the point where they’re completely remitted,” he said.

The info is here.

Health care, disease care, or killing care?

Hugo Caicedo
Harvard Blogs
Originally published October 1, 2018

Traditional medical practice is rooted in advanced knowledge of diseases, their most appropriate treatment, and adequate proficiency in applying that treatment. Notably, today, medical treatment does not typically occur until disease symptoms have manifested. While we now have ways to develop therapies that can halt the progression of some symptomatic diseases, symptomatic solutions are not meant to serve as a cure but as palliative treatment of late-stage chronic disease.

The reactive approach of most medical interventions is compounded by the fact that medicine is prone to error. In November of 1999, the U.S. National Academy of Sciences, an organization representing the most highly regarded scientists and physician researchers in the U.S., published the report To Err Is Human.

The report noted that medical error was a leading cause of patient deaths, killing up to 98,000 people in the U.S. every year. One hypothesis that emerged was that patient data were being poorly collected, aggregated, and shared among different hospitals and even within the same health system. Health policies such as the Health Information Technology for Economic and Clinical Health Act (HITECH) of 2009 and the Affordable Care Act (ACA) of 2010 focused primarily on optimizing clinical and operational effectiveness through the use of health information technology and the expansion of government insurance programs, respectively. However, they did not effectively address medical errors such as poor judgment, mistaken diagnoses, inadequately coordinated care, and incompetent skill, which can directly result in patient harm and death.

The blog post is here.

Monday, August 6, 2018

False Equivalence: Are Liberals and Conservatives in the U.S. Equally “Biased”?

Jonathan Baron and John T. Jost
Invited Revision, Perspectives on Psychological Science.

Abstract

On the basis of a meta-analysis of 51 studies, Ditto, Liu, Clark, Wojcik, Chen, et al. (2018) conclude that ideological “bias” is equivalent on the left and right of U.S. politics. In this commentary, we contend that this conclusion does not follow from the review and that Ditto and colleagues are too quick to embrace a false equivalence between the liberal left and the conservative right. For one thing, the issues, procedures, and materials used in studies reviewed by Ditto and colleagues were selected for purposes other than the inspection of ideological asymmetries. Consequently, methodological choices made by researchers were systematically biased to avoid producing differences between liberals and conservatives. We also consider the broader implications of a normative analysis of judgment and decision-making and demonstrate that the “bias” examined by Ditto and colleagues is not, in fact, an irrational bias, and that it is incoherent to discuss bias in the absence of standards for assessing accuracy and consistency. We find that Jost’s (2017) conclusions about domain-general asymmetries in motivated social cognition, which suggest that epistemic virtues are more prevalent among liberals than conservatives, are closer to the truth of the matter when it comes to current American politics. Finally, we question the notion that the research literature in psychology is necessarily characterized by “liberal bias,” as several authors have claimed.

Here is the end:

If academics are disproportionately liberal—in comparison with society at large—it just might be due to the fact that being liberal in the early 21st century is more compatible with the epistemic standards, values, and practices of academia than is being conservative.

The article is here.

See Your Surgeon Is Probably a Republican, Your Psychiatrist Probably a Democrat for another example.

Tuesday, June 26, 2018

Understanding unconscious bias

The Royal Society
Originally published November 17, 2015

This animation introduces the key concepts of unconscious bias. It forms part of the Royal Society's efforts to ensure that all those who serve on Royal Society selection and appointment panels are aware of differences in how candidates may present themselves, know how to recognise bias in themselves and others, and know how to recognise inappropriate advocacy or unreasoned judgement. You can find out more about unconscious bias and download a briefing which includes current academic research at www.royalsociety.org/diversity.



A great three-minute video.