Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Bias. Show all posts

Thursday, September 29, 2016

How Curiosity Can Protect the Mind from Bias

By Tom Stafford
bbc.com
Originally published 8 September 2016

Here is an excerpt:

The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative.

And, in case you are wondering, the results hold for issues in which political liberalism is associated with anti-science beliefs, such as attitudes toward GMOs or vaccinations.

So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to try to convey their excitement about science and the pleasures of finding out stuff as it is to teach people some basic curriculum of facts.

The article is here.

Sunday, September 18, 2016

Forgive Us Our Trespasses: Priming a Forgiving (But Not a Punishing) God Increases Unethical Behavior

Amber E. DeBono, Azim Shariff, Sarah Poole, and Mark Muraven
Psychology of Religion and Spirituality · December 2016

Abstract

Religious people differ in how punishing or forgiving they see their Gods. Such different beliefs may have distinct consequences in encouraging people to act in normative ways. Though a number of priming studies have shown a positive causal relationship between religion and normative behavior, few have primed different aspects of religion, and none has examined the punishing/forgiving dimension. In three experiments, Christians instructed to read and write about a forgiving God stole more money (Experiments 1 and 2) and cheated more on a math assignment (Experiment 3) than those who read and wrote about a punishing God, a forgiving human, a punishing human, or those in a control condition. These studies present a more complex and nuanced picture of the important relationship between religion and normative behavior.

The article is here.

Tuesday, September 6, 2016

The Problem With Slow Motion

By Eugene Caruso, Zachary Burns & Benjamin Converse
The New York Times - Gray Matter
Originally published August 5, 2016

Here are two excerpts:

Watching slow-motion footage of an event can certainly improve our judgment of what happened. But can it also impair judgment?

(cut)

Those who saw the shooting in slow motion felt that the actor had more time to act than those who saw it at regular speed — and the more time they felt he had, the more likely they were to see intention in his action. (We found similar results in a separate study involving video footage of a prohibited “helmet to helmet” tackle in the National Football League, where the question was whether the player intended to strike the opposing player in the proscribed manner.)

The article is here.

Sunday, August 28, 2016

What Is Happening to Our Country? How Psychology Can Respond to Political Polarization, Incivility and Intolerance

As political events in Europe and America got stranger and more violent over the last year, I found myself thinking of the phrase “things fall apart; the center cannot hold.” I didn’t know its origin, so I looked it up, found the poem “The Second Coming,” by W. B. Yeats, and found a great deal of wisdom. Yeats wrote it in 1919, just after the First World War and at the beginning of the Irish War of Independence.

The entire web page is here.

Tuesday, July 5, 2016

How scientists fool themselves – and how they can stop

Regina Nuzzo
Nature 526, 182–185 (08 October 2015)
doi:10.1038/526182a

Here is an excerpt:

This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.

The article is here.

Editor's note: These biases also apply to clinicians who use research or their own theories about how and why psychotherapy works.

Monday, June 27, 2016

In treating obese patients, too often doctors can’t see past weight

By Jennifer Adaeze Okwerkwu @JenniferAdaeze
STAT
Originally published June 3, 2016

Here is an excerpt:

An earlier survey of primary care physicians and cardiologists showed a similar pattern. Though heart disease is the leading cause of death among women, the study found only 39 percent of physicians were “extremely concerned” about this issue, whereas 48 percent of physicians were “extremely concerned” about women’s weight.

“We haven’t really thought about this before” but we need to explore the issue “because women are dying,” said study leader Dr. Noel Bairey Merz, medical director of the Barbra Streisand Women’s Heart Center at Cedars-Sinai Heart Institute.

It’s not just heart disease. Another study found that obese women often delay other types of preventive care, including breast exams and Pap smears. While obesity is associated with a variety of health conditions, if the medical profession fails to provide a safe space for patient care, these missed opportunities for intervention may be partly to blame.

The article is here.

Saturday, June 4, 2016

Scientists show how we start stereotyping the moment we see a face

Sarah Kaplan
The Independent
Originally posted May 2, 2016

Scientists have known for a while that stereotypes warp our perceptions of things. Implicit biases — those unconscious assumptions that worm their way into our brains, without our full awareness and sometimes against our better judgment — can influence grading choices from teachers, split-second decisions by police officers and outcomes in online dating.

We can't even see the world without filtering it through the lens of our assumptions, scientists say. In a study published Monday in the journal Nature Neuroscience, psychologists report that the neurons that respond to things such as sex, race and emotion are linked by stereotypes, distorting the way we perceive people's faces before that visual information even reaches our conscious brains.

The article is here.

Wednesday, May 11, 2016

Procedural Moral Enhancement

G. Owen Schaefer and Julian Savulescu
Neuroethics, pp. 1–12
First online: 20 April 2016

Abstract

While philosophers are often concerned with the conditions for moral knowledge or justification, in practice something arguably less demanding is just as, if not more, important – reliably making correct moral judgments. Judges and juries should hand down fair sentences, government officials should decide on just laws, members of ethics committees should make sound recommendations, and so on. We want such agents, more often than not and as often as possible, to make the right decisions. The purpose of this paper is to propose a method of enhancing the moral reliability of such agents. In particular, we advocate a procedural approach: certain internal processes generally contribute to people’s moral reliability. Building on the early work of Rawls, we identify several particular factors related to moral reasoning that are specific enough to be the target of practical intervention: logical competence, conceptual understanding, empirical competence, openness, empathy and bias. Improving on these processes can in turn make people more morally reliable in a variety of contexts and has implications for recent debates over moral enhancement.

Monday, May 9, 2016

What I Learned From Tickling Apes

By Frans de Waal
New York Times Sunday Review
Originally posted on April 8, 2016

Here is an excerpt:

One reason this whole debate is as heated as it is relates to its moral implications. When our ancestors moved from hunting to farming, they lost respect for animals and began to look at themselves as the rulers of nature. In order to justify how they treated other species, they had to play down their intelligence and deny them a soul. It is impossible to reverse this trend without raising questions about human attitudes and practices. We can see this process underway in the halting of biomedical research on chimpanzees and the opposition to the use of killer whales for entertainment.

Increased respect for animal intelligence also has consequences for cognitive science. For too long, we have left the human intellect dangling in empty evolutionary space. How could our species arrive at planning, empathy, consciousness and so on, if we are part of a natural world devoid of any and all steppingstones to such capacities? Wouldn’t this be about as unlikely as us being the only primates with wings?

The article is here.

Wednesday, April 13, 2016

Stereotype Threat, Epistemic Injustice, and Rationality

Stacey Goguen
Draft, forthcoming (2016) in Brownstein and Saul (eds), Implicit Bias and Philosophy, Vol I,
Oxford University Press.

Stereotype threat is most well-known for its ability to hinder performance. However, it actually has a wide range of effects. For instance, it can also cause stress, anxiety, and self-doubt. These additional effects are as important and as central to the phenomenon as its effects on performance are. As a result, stereotype threat has more far-reaching implications than many philosophers have realized. In particular, the phenomenon has a number of unexplored “epistemic effects.”

These are effects on our epistemic lives — i.e., the ways we engage with the world as actual and potential knowers. In this paper I flesh out the implications of a specific epistemic effect: self-doubt. Certain kinds of self-doubt can deeply affect our epistemic lives by exacerbating moments of epistemic injustice and by perniciously interacting with ideals of rationality. In both cases, self-doubt can lead to one questioning one’s own humanity or full personhood. Because stereotype threat can trigger this kind of self-doubt, it can affect various aspects of ourselves besides our ability to perform to our potential. It can also affect our very sense of self. In this paper, I argue that we should adopt a more comprehensive account of stereotype threat that explicitly acknowledges all of the known effects of the phenomenon. Doing so will allow us to better investigate the epistemological implications of stereotype threat, as well as the full extent of its reach into our lives. I focus on fleshing out stereotype threat’s effect of self-doubt, and how this effect can influence the very foundations of our epistemic lives. I do this by arguing that self-doubt from stereotype threat can constitute an epistemic injustice, and that this sort of self-doubt can be exacerbated by stereotypes of irrationality. As a result, self-doubt from stereotype threat can erode our faith in ourselves as full human persons and as rational, reliable knowers.

The full text is here.

Tuesday, April 12, 2016

Rationalization in Moral and Philosophical Thought

Eric Schwitzgebel and Jonathan Ellis

Abstract

Rationalization, in our intended sense of the term, occurs when a person favors a particular conclusion as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, if that factor then biases the person’s subsequent search for, and assessment of, potential justifications for the conclusion.  Empirical evidence suggests that rationalization is common in ordinary people’s moral and philosophical thought.  We argue that it is likely that the moral and philosophical thought of philosophers and moral psychologists is also pervaded by rationalization.  Moreover, although rationalization has some benefits, overall it would be epistemically better if the moral and philosophical reasoning of both ordinary people and professional academics were not as heavily influenced by rationalization as it likely is.  We discuss the significance of our arguments for cognitive management and epistemic responsibility.

The paper is here.

Sunday, March 6, 2016

The Unbearable Asymmetry of Bullshit

By Brian Earp
BMJ Blogs
Originally posted February 16, 2016

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

The blog post is here.

Tuesday, January 19, 2016

Researcher allegiance in psychotherapy outcome research: an overview of reviews

Munder T, Brütsch O, Leonhart R, Gerger H, Barth J.
Clinical Psychology Review
Volume 33, Issue 4, June 2013, Pages 501–511

Abstract

Researcher allegiance (RA) is widely discussed as a risk of bias in psychotherapy outcome research. The relevance attached to RA bias is related to meta-analyses demonstrating an association of RA with treatment effects. However, recent meta-analyses have yielded mixed results. To provide more clarity on the magnitude and robustness of the RA-outcome association this article reports on a meta-meta-analysis summarizing all available meta-analytic estimates of the RA-outcome association. Random-effects methods were used. Primary study overlap was controlled. Thirty meta-analyses were included. The mean RA-outcome association was r = .262 (p = .002, I2 = 28.98%), corresponding to a moderate effect size. The RA-outcome association was robust across several moderating variables including characteristics of treatment, population, and the type of RA assessment. Allegiance towards the RA bias hypothesis moderated the RA-outcome association. The findings of this meta-meta-analysis suggest that the RA-outcome association is substantial and robust. Implications for psychotherapy outcome research are discussed.
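The abstract reports a random-effects pooled correlation (r = .262) and a heterogeneity statistic (I² = 28.98%). The sketch below shows how such quantities are typically computed from study-level correlations, using the Fisher z-transform and a DerSimonian–Laird estimate of between-study variance. The study values are illustrative assumptions, not the paper's data.

```python
import math

# Hypothetical meta-analytic estimates of the RA-outcome association:
# (correlation r, sample size n) -- illustrative values, not the paper's data.
studies = [(0.30, 120), (0.15, 250), (0.35, 80), (0.22, 200), (0.28, 150)]

# Fisher z-transform so effect sizes are approximately normally distributed.
z = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
v = [1.0 / (n - 3) for r, n in studies]   # within-study variance of z
w = [1.0 / vi for vi in v]                # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of between-study variance (tau^2).
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
Q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights fold tau^2 into each study's variance.
w_re = [1.0 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
r_pooled = math.tanh(z_re)                # back-transform z to r

# I^2: percentage of total variability due to between-study heterogeneity.
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled r = {r_pooled:.3f}, I^2 = {i2:.1f}%")
```

With real data, the "primary study overlap" control the abstract mentions would require extra bookkeeping (deduplicating studies shared across meta-analyses) before pooling.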

The entire article is here.

Monday, October 26, 2015

Researchers can change the outcome of studies just by being white

By Nikhil Sonnad
Quartz
Originally posted October 5, 2015

Here is an excerpt:

The implication is that every aspect of a study matters. Decision research has been criticized for attempting to explain all of human behavior based mainly on studies of undergraduates in rich democracies. That has led to such research being repeated in other parts of the world. But that might not be enough.

“Behavioral studies that offer ‘cultural’ or other contextual explanations for variation in generosity should be taken with a grain of salt, unless we are confident that such differences aren’t driven by simpler explanations such as who was in the room at the time,” said Bilal Murtaza Siddiqi, an economist at the World Bank and one of the paper’s co-authors.

The entire article is here.

Friday, October 16, 2015

The Dark Side of Empathy

By Paul Bloom
The Atlantic
Originally published on September 25, 2015

Here is an excerpt:

Our reaction to these atrocities can cloud our judgment, biasing us in favor of war. The benefits of war—including avenging those who have suffered—are made vivid, but the costs of war remain abstract and statistical. We see this same bias reflected in our criminal-justice system. The outrage that comes from empathy drives some of our most powerful punitive desires. It’s not an accident that so many statutes are named for dead girls—as in Megan’s Law, Jessica’s Law, and Caylee’s Law—and no surprise that there is now enthusiasm for “Kate’s Law.” The high incarceration rate in the United States, and our continued enthusiasm for the death penalty, is in part the product of fear and anger, but is also driven by the consumption of detailed stories of victims’ suffering.

Then there are victim-impact statements, where detailed descriptions of how victims are affected by a crime are used to help determine the sentence imposed on a criminal. There are arguments in favor of these statements, but given all the evidence that we are more prone to empathize with some individuals over others—with factors like race, sex, and physical attractiveness playing a powerful role—it’s hard to think of a more biased and unfair way to determine punishment.

The entire article is here.

Thursday, October 1, 2015

Peer review: a flawed process at the heart of science and journals

By Richard Smith
J R Soc Med. 2006 Apr; 99(4): 178–182.
doi: 10.1258/jrsm.99.4.178

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new “disease,” female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). “But,” the news editor wanted to know, “was this paper peer reviewed?” The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)

The entire article is here.

Tuesday, August 4, 2015

Psychologists are known for being liberal, but why?

By Elliot Berkman
The Conversation
Originally published July 14, 2015

Is the field of social psychology biased against political conservatives? There has been intense debate about this question since an informal poll of over 1,000 attendees at a social psychology meeting in 2011 revealed the group to be overwhelmingly liberal.

Formal surveys have produced similar results, showing the ratio of liberals to conservatives in the broader field of psychology is 14-to-1.

Since then, social psychologists have tried to figure out why this imbalance exists.

The primary explanation offered is that the field has an anticonservative bias. I have no doubt that this bias exists, but it’s not strong enough to push people who lean conservative out of the field at the rate they appear to be leaving.

I believe that a less prominent explanation is more compelling: learning about social psychology can make you more liberal. I know about this possibility because it is exactly what happened to me.

The entire article is here.

Monday, June 8, 2015

Death Denial

By Marc Parry
The Chronicle of Higher Education
Originally published May 22, 2015

Here is an excerpt:

The terror trio’s conclusion: People react differently to conscious and unconscious thoughts of death. While thinking about death directly, Pyszczynski says, folks do rational things to get away from it, like trying to get healthy. It’s when death lurks on the fringes of consciousness that they cling to worldviews and seek self-esteem. "That helps explain why these ideas might seem strange to some people," says Pyszczynski, a professor at the University of Colorado at Colorado Springs. "You can’t really introspect on it. While you’re thinking about death, this isn’t what you do."

Pyszczynski, Solomon, and Greenberg published their work consistently in the prestigious Journal of Personality and Social Psychology. But early on, as Greenberg tells it, "its main impact was to get us ostracized by the rest of the field of social psychology." Part of that was due to the disconcerting subject matter. Colleagues referred to them as "the death guys."

The entire article is here.

Tuesday, March 17, 2015

Straight Talk for White Men

By Nicholas Kristof
The New York Times
Originally published on February 21, 2015

Here is an excerpt:

The study found that a résumé with a name like Emily or Greg received 50 percent more callbacks than the same résumé with a name like Lakisha or Jamal. Having a white-sounding name was as beneficial as eight years’ work experience.

Then there was the study in which researchers asked professors to evaluate the summary of a supposed applicant for a post as laboratory manager, but, in some cases, the applicant was named John and in others Jennifer. Everything else was the same.

“John” was rated an average of 4.0 on a 7-point scale for competence, “Jennifer” a 3.3. When asked to propose an annual starting salary for the applicant, the professors suggested on average a salary for “John” almost $4,000 higher than for “Jennifer.”

It’s not that we white men are intentionally doing anything wrong, but we do have a penchant for obliviousness about the way we are beneficiaries of systematic unfairness. Maybe that’s because in a race, it’s easy not to notice a tailwind, and white men often go through life with a tailwind, while women and people of color must push against a headwind.
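The "50 percent more callbacks" figure in the excerpt above is a relative gap between two callback rates. A minimal sketch of how such a gap is computed and tested, using hypothetical counts chosen to roughly match the reported ratio (the actual study's numbers are not reproduced here):

```python
import math

# Hypothetical callback counts in the spirit of the resume audit study;
# the specific numbers below are illustrative, not the study's data.
white_callbacks, white_resumes = 240, 2500   # ~9.6% callback rate
black_callbacks, black_resumes = 160, 2500   # ~6.4% callback rate

p1 = white_callbacks / white_resumes
p2 = black_callbacks / black_resumes

# Relative gap: how many more callbacks the "white-sounding" names received.
relative_gap = (p1 - p2) / p2

# Two-proportion z-test under the null hypothesis of equal callback rates.
p_pool = (white_callbacks + black_callbacks) / (white_resumes + black_resumes)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / white_resumes + 1 / black_resumes))
z = (p1 - p2) / se

print(f"relative gap = {relative_gap:.0%}, z = {z:.2f}")
```

With counts of this magnitude the gap is far too large to be sampling noise, which is why audit studies of this design are considered strong evidence of name-based discrimination.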

The entire article is here.

Friday, March 13, 2015

Bias, Black Lives, and Academic Medicine

By David A. Ansell and Edwin K. McDonald
The New England Journal of Medicine
Originally published February 18, 2015

Here is an excerpt:

First, there is evidence that doctors hold stereotypes based on patients' race that can influence their clinical decisions.  Implicit bias refers to unconscious racial stereotypes that grow from our personal and cultural experiences. These implicit beliefs may also stem from a lack of day-to-day interracial and intercultural interactions. Although explicit race bias is rare among physicians, an unconscious preference for whites as compared with blacks is commonly revealed on tests of implicit bias.

Second, despite physicians' and medical centers' best intentions of being equitable, black–white disparities persist in patient outcomes, medical education, and faculty recruitment.

The entire article is here.