Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label replication.

Thursday, October 15, 2015

More Doubts Over The Oxytocin And Trust Theory

By Neuroskeptic
Originally published on September 16, 2015

The claim that the hormone oxytocin promotes trust in humans has drawn a lot of attention. But today, a group of researchers reported that they’ve been unable to reproduce their own findings concerning that effect.

The new paper, in PLoS ONE, is by Anthony Lane and colleagues from Louvain in Belgium. The same team have previously published evidence supporting the link between oxytocin and trust.

Back in 2010 they reported that “oxytocin increases trust when confidential information is in the balance”. An intranasal spray of oxytocin made volunteers more likely to leave a sensitive personal document lying around in an open envelope, rather than sealing it up, suggesting that they trusted people not to peek at it.

However, the authors now say that they failed to replicate the 2010 ‘envelope task’ result in two subsequent studies.

The entire blog post is here.

Tuesday, May 19, 2015

Replication, falsification, and the crisis of confidence in social psychology

By Brian D. Earp & David Trafimow
Front. Psychol. | doi: 10.3389/fpsyg.2015.00621

Abstract

The (latest) “crisis in confidence” in social psychology has generated much heated discussion about the importance of replication, including how such replication should be carried out as well as interpreted by scholars in the field. What does it mean if a replication attempt “fails”—does it mean that the original results, or the theory that predicted them, have been falsified? And how should “failed” replications affect our belief in the validity of the original research? In this paper, we consider the “replication” debate from a historical and philosophical perspective, and provide a conceptual analysis of both replication and falsification as they pertain to this important discussion. Along the way, we introduce a Bayesian framework for assessing “failed” replications in terms of how they should affect our confidence in purported findings.
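
As a rough illustration of the kind of Bayesian reasoning the abstract describes, consider how a "failed" replication should shift confidence in an original finding. The sketch below is not the authors' framework; it simply applies Bayes' rule under assumed values for the prior probability that the effect is real, the replication's power, and its false-positive rate.

```python
# Toy Bayesian update after a "failed" replication.
# Illustrative values only; this is not the framework from the Earp & Trafimow paper.

def posterior_after_failed_replication(prior, power, alpha):
    """P(effect is real | replication did not reach significance).

    prior -- prior probability that the original effect is real
    power -- probability the replication detects the effect if it is real
    alpha -- probability of a false positive if the effect is not real
    """
    p_fail_given_real = 1 - power   # the replication misses a real effect
    p_fail_given_null = 1 - alpha   # the replication correctly finds nothing
    numerator = p_fail_given_real * prior
    return numerator / (numerator + p_fail_given_null * (1 - prior))

# A failed high-powered replication should sharply lower confidence;
# a failed low-powered one should barely move it.
for power in (0.95, 0.35):
    print(power, round(posterior_after_failed_replication(prior=0.5, power=power, alpha=0.05), 3))
```

With these assumed numbers, a failed replication run at 95% power drops the posterior from 0.50 to about 0.05, while one run at 35% power only drops it to about 0.41; that is one way of cashing out the point that how much a "failed" replication should dent our confidence depends on the quality of the replication itself.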

The entire article is here.

Saturday, April 11, 2015

Amid a Sea of False Findings, the NIH Tries Reform; Science needs to get its house in order, says Francis Collins, director of the NIH

By Paul Voosen
Chronicle of Higher Education
Originally published March 16, 2015

How do you change an entire scientific culture?

It may sound grandiose, but that is the loaded question now facing the National Institutes of Health, the federal agency that oversees and finances U.S. biomedical research.

While the public remains relatively unaware of the problem, it is now a truism in the scientific establishment that many preclinical biomedical studies, when subjected to additional scrutiny, turn out to be false.

Many researchers believe that if scientists set out to reproduce preclinical work published over the past decade, a majority would fail.

(cut)

The NIH, though at first reluctant to confront the problem, is now taking it seriously. Just over a year ago, the agency's director, Francis S. Collins, and his chief deputy, Lawrence A. Tabak, announced actions the agency would take to improve the research it finances.

Science needs to get its house in order, Dr. Collins said in a recent interview with The Chronicle.

The entire article is here.

Friday, December 12, 2014

Culture Of Psychological Science At Stake

By Tania Lombrozo
NPR Cosmos and Culture
Originally published November 18, 2014

In a video released today at Edge.org, psychologist Simone Schnall raises interesting questions about the role of replication in social psychology and about what counts as "admissible evidence" in science.

Schnall comes at the topic from recent experience: One of her studies was selected for a replication attempt by a registered replication project, and the replication failed to find the effect from her original study.

An occasional failure to replicate isn't too surprising or disruptive to the field — what makes Schnall's case unusual is the discussion that ensued, which occurred largely on blogs and social media. And it got ugly.

The entire NPR article is here.

Dr. Schnall's Edge Video is here.


Wednesday, July 30, 2014

Corruption of Peer Review Is Harming Scientific Credibility

By Hank Campbell
The Wall Street Journal
Originally published July 13, 2014

Academic publishing was rocked by the news on July 8 that a company called Sage Publications is retracting 60 papers from its Journal of Vibration and Control, about the science of acoustics. The company said a researcher in Taiwan and others had exploited peer review so that certain papers were sure to get a positive review for placement in the journal. In one case, a paper's author gave glowing reviews to his own work using phony names.

Acoustics is an important field. But in biomedicine faulty research and a dubious peer-review process can have life-or-death consequences. In June, Dr. Francis Collins, director of the National Institutes of Health and responsible for $30 billion in annual government-funded research, held a meeting to discuss ways to ensure that more published scientific studies and results are accurate.

The entire article is here.

Friday, July 11, 2014

Replication Crisis in Psychology Research Turns Ugly and Odd

By Tom Bartlett
The Chronicle of Higher Education
Originally published June 23, 2014

Another salvo was fired recently in what's become known...as "repligate."

In a blog post published last week, Timothy D. Wilson, a professor of psychology at the University of Virginia and the author of Redirect: The Surprising New Science of Psychological Change, declared that "the field has become preoccupied with prevention and error detection--negative psychology--at the expense of exploration and discovery."

The evidence that psychology is beset with false positives is weak, according to Mr. Wilson, and he pointed instead to the danger of inept replications that serve only to damage "the reputation of the original researcher and the progression of science."

While he called for finding common ground, Mr. Wilson pretty firmly sided with those who fear that psychology's growing replication movement, which aims to challenge what some critics see as a tsunami of suspicious science, is more destructive than corrective.

Still, Mr. Wilson was polite. Daniel Gilbert, less so.

The entire article is here.

Thursday, July 10, 2014

The Tragedy of Moral Licensing

A non-replication that threatens the public trust in psychology

By Rolf Degen
Google+ page
Shared publicly on May 20, 2014

Moral licensing is one of the most influential psychological effects discovered in the last decade. It refers to our increased tendency to act immorally if we have already displayed our moral righteousness. In essence, it means that after you have done something nice, you think you have the license to do something not so nice. The effect was immediately picked up by new psychology textbooks, portrayed repeatedly in the media, and it even got its own Wikipedia page (Do we have to take that one down?).

The entire Google+ essay is here.

Friday, February 21, 2014

Science Faction: Why Most Scientific Research Results are Wrong

Bloggingheads.tv
John Horgan and George Johnson discuss issues related to science

Why most scientific research results are wrong

Is competition making fudged data more likely?

Science is not a triumphal march

Can academic publishing be reformed?

Essential and inessential skills for young science writers

The Big Bang and the case against falsifiability


Monday, December 16, 2013

Psychologists strike a blow for reproducibility

By Ed Yong
Nature
Originally published November 26, 2013

A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced.

Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done or the effect under scrutiny varies between settings or groups of people.

The entire story is here.

Friday, July 26, 2013

Low Hopes, High Expectations: Expectancy Effects and the Replicability of Behavioral Experiments

By Olivier Klein and others
Perspectives on Psychological Science 7(6) 572–584
DOI: 10.1177/1745691612463704
http://pps.sagepub.com

This article revisits two classical issues in experimental methodology: experimenter bias and demand characteristics. We report a content analysis of the method section of experiments reported in two psychology journals (Psychological Science and the Journal of Personality and Social Psychology), focusing on aspects of the procedure associated with these two phenomena, such as mention of the presence of the experimenter, suspicion probing, and handling of deception. We note that such information is very often absent, which prevents observers from gauging the extent to which such factors influence the results. We consider the reasons that may explain this omission, including the automatization of psychology experiments, the evolution of research topics, and, most important, a view of research participants as passive receptacles of stimuli. Using a situated social cognition perspective, we emphasize the importance of integrating the social context of experiments in the explanation of psychological phenomena. We illustrate this argument via a controversy on stereotype-based behavioral priming effects.

The entire article is here.

Tuesday, April 23, 2013

Most brain science papers are neurotrash

By Andrew Orlowski
The Register
Originally published April 12, 2013

A group of academics from Oxford, Stanford, Virginia and Bristol universities have looked at a range of subfields of neuroscience and concluded that most of the results are statistically worthless.

The researchers found that most structural and volumetric MRI studies are very small and have minimal power to detect differences between compared groups (for example, healthy people versus those with mental health diseases). Their paper also stated that, specifically, a clear excess of "significance bias" (too many results deemed statistically significant) has been demonstrated in studies of brain volume abnormalities, and similar problems appear to exist in fMRI studies of the blood-oxygen-level-dependent response.

The team, researchers at Stanford Medical School, Virginia, Bristol and the Human Genetics department at Oxford, looked at 246 neuroscience articles published in 2011, excluding papers where the test data was unavailable. They found that the papers' median statistical power - the probability that a study will detect an effect when there is a true effect to be found - was just 21 per cent. What that means in practice is that if you were to run one of the experiments five times, you'd only find the effect once.
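
To make the 21 per cent figure concrete, here is a minimal Monte Carlo sketch, with effect size and group size assumed for illustration rather than taken from the paper: it simulates small two-group experiments in which a modest true effect really exists and counts how often a standard t-test reaches p < 0.05.

```python
# Monte Carlo illustration of low statistical power.
# Effect size and group size are assumed for illustration, not taken from the surveyed studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(effect_size, n_per_group, n_sims=10_000, alpha=0.05):
    """Fraction of simulated experiments in which a genuine effect reaches p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)  # a true effect is present
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            hits += 1
    return hits / n_sims

# With a medium effect (d = 0.5) and about a dozen subjects per group,
# estimated power comes out in the low twenties: most such studies miss a real effect.
print(estimated_power(effect_size=0.5, n_per_group=12))
```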

A further survey of papers drawn from fMRI brain scanners - and studies using such scanners have long filled the popular media with dramatic claims - found that their statistical power was just 8 per cent.

The entire story is here.

Thanks to Tom Fink for this story.

Monday, March 11, 2013

It's time for psychologists to put their house in order

BMC Psychology pledges 'to put less emphasis on interest levels' and publish repeat studies and negative results

By Keith Laws
The Guardian, Notes & Theories
Originally published February 27, 2013

In 2005, the epidemiologist John Ioannidis provocatively claimed that "most published research findings are false". In the field of psychology – where negative results rarely see the light of day – we have a related problem: there is the very real possibility that many unpublished, negative findings are true.
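
Ioannidis's argument rests on simple arithmetic relating pre-study odds, statistical power, and the false-positive threshold. The snippet below reproduces that arithmetic with assumed illustrative inputs, not figures from his paper.

```python
# Positive predictive value of a "significant" finding, in the spirit of Ioannidis (2005).
# The inputs are assumed for illustration.

def ppv(prior_odds, power, alpha=0.05):
    """Probability that a statistically significant result reflects a true effect.

    prior_odds -- ratio of true to false hypotheses in the field being tested
    power      -- probability of detecting an effect that is really there
    alpha      -- probability of a false positive when no effect exists
    """
    true_positives = power * prior_odds
    false_positives = alpha * 1.0  # per false hypothesis tested
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true and studies run at 35% power,
# roughly 4 in 10 "significant" findings reflect a real effect; publication bias
# and analytic flexibility push the figure lower still.
print(round(ppv(prior_odds=0.1, power=0.35), 2))
```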

Psychologists have an aversion to some essential aspects of science that they perceive to be unexciting or less valuable. Historically, the discipline has done almost nothing to ensure the reliability of findings through the publication of repeat studies and negative ("null") findings.

Psychologists find significant statistical support for their hypotheses more frequently than any other science, and this is not a new phenomenon. More than 30 years ago, it was reported that psychology researchers are eight times as likely to submit manuscripts for publication when the results are positive rather than negative.

Unpublished, "failed" replications and negative findings stay in the file-drawer and therefore remain unknown to future investigators, who may independently replicate the null-finding (each also unpublished) - until by chance, a spuriously significant effect turns up.

It is this study that is published. Such findings typically emerge with large effect sizes (usually being tested with small samples), and then shrivel as time passes and replications fail to document the purported phenomenon. If the unreliability of the effect is eventually recognised, it occurs with little fanfare.
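
The file-drawer dynamic described above is easy to simulate. The sketch below, with assumed sample sizes, keeps running small studies of a non-existent effect until one crosses p < 0.05 by chance, then reports the inflated effect size that this one "lucky" study would carry into the literature.

```python
# File-drawer sketch: with no true effect, keep running small studies until one
# happens to reach p < 0.05, then look at the effect size that would get published.
# Sample size and threshold are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def first_significant_effect(n_per_group=20, alpha=0.05):
    """Return (number of attempts, observed Cohen's d) for the first spuriously significant study."""
    attempts = 0
    while True:
        attempts += 1
        a = rng.normal(0.0, 1.0, n_per_group)  # no real difference between the groups
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            return attempts, abs(a.mean() - b.mean()) / pooled_sd

attempts, d = first_significant_effect()
print(f"significant on attempt {attempts}, observed d = {d:.2f}")
```

Because only unusually extreme samples cross the threshold, the study that finally "works" tends to report a sizeable effect, which then shrivels as later, unbiased replications accumulate.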

The entire story is here.

Wednesday, April 11, 2012

Can Most Cancer Research Be Trusted?

Addressing the problem of "academic risk" in biomedical research

By Ronald Bailey
reason.com
Originally published April 3, 2012

When a cancer study is published in a prestigious peer-reviewed journal, the implication is that the findings are robust, replicable, and point the way toward eventual treatments. Consequently, researchers scour their colleagues' work for clues about promising avenues to explore. Doctors pore over the pages, dreaming of new therapies coming down the pike. Which makes a new finding that nine out of 10 preclinical peer-reviewed cancer research studies cannot be replicated all the more shocking and discouraging.

Last week, the scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research—which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people—independent researchers doing the same experiment cannot get the same result as reported in the scientific literature.

The entire commentary is here.

Thanks to Rich Ievoli for the story.  He could have been a contender.