Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label "File Drawer" Effect.

Monday, December 16, 2013

It's time for psychologists to put their house in order

By Keith Laws
The Guardian
Originally published February 27, 2013

Here is an excerpt:

Psychologists find significant statistical support for their hypotheses more frequently than any other science, and this is not a new phenomenon. More than 30 years ago, it was reported that psychology researchers are eight times as likely to submit manuscripts for publication when the results are positive rather than negative.

Unpublished, "failed" replications and negative findings stay in the file-drawer and therefore remain unknown to future investigators, who may independently replicate the null-finding (each also unpublished) - until by chance, a spuriously significant effect turns up.

It is this study that gets published. Such findings typically emerge with large effect sizes (usually from small samples), then shrivel as time passes and replications fail to document the purported phenomenon. If the unreliability of the effect is eventually recognised, it happens with little fanfare.

The entire story is here.

Tuesday, September 25, 2012

False positives: fraud and misconduct are threatening scientific research

High-profile cases and modern technology are putting scientific deceit under the microscope

By Alok Jha
The Guardian
Originally published September 13, 2012

Dirk Smeesters had spent several years of his career as a social psychologist at Erasmus University in Rotterdam studying how consumers behaved in different situations. Did colour have an effect on what they bought? How did death-related stories in the media affect how people picked products? And was it better to use supermodels in cosmetics adverts than average-looking women?

The questions are certainly intriguing, but unfortunately for anyone wanting truthful answers, some of Smeesters' work turned out to be fraudulent. The psychologist, who admitted "massaging" the data in some of his papers, resigned from his position in June after being investigated by his university, which had been tipped off by Uri Simonsohn from the University of Pennsylvania in Philadelphia. Simonsohn carried out an independent analysis of the data and was suspicious of how perfect many of Smeesters' results seemed when, statistically speaking, there should have been more variation in his measurements.

The case, which led to two scientific papers being retracted, came on the heels of an even bigger fraud, uncovered last year, perpetrated by the Dutch psychologist Diederik Stapel. He was found to have fabricated data for years and published it in at least 30 peer-reviewed papers, including a report in the journal Science about how untidy environments may encourage discrimination.

Friday, October 7, 2011

File drawer effect: Science studies neglecting negative results

By Dan Vergano
USA Today: Science Fair

Some scientific disciplines are reporting far fewer experiments that didn't work out than they did twenty years ago, an analysis of the scientific literature suggests.

In particular, economists, business school researchers and other social scientists, as well as some biomedical fields, appear increasingly susceptible to the "file-drawer" effect -- letting experiments that fail to prove an idea go unpublished -- suggests the Scientometrics journal study by Daniele Fanelli of Scotland's University of Edinburgh.

"Positive results in research studies overall, became 22% more likely to appear in scientific journals from 1990 to 2007," says the study, which looked at a sample of 4,656 papers over this time period, looking for trends in science journals.

"One of the most worrying distortions that scientific knowledge might endure is the loss of negative data. Results that do not confirm expectations—because they yield an effect that is either not statistically significant or just contradicts an hypothesis—are crucial to scientific progress, because this latter is only made possible by a collective self-correcting process. Yet, a lack of null and negative results has been noticed in innumerable fields. Their absence from the literature not only inflates effect size estimates in meta-analyses, thus exaggerating the importance of phenomena, but can also cause a waste of resources replicating research that has already failed, and might even create fields based on completely non-existent phenomena," says the analysis.
The analysis looked at studies where authors proposed a hypothesis and then sought to test it, either confirming it for a positive result, or not. Overall, 70.2% of papers were positive in 1990–1991 and 85.9% were positive in 2007. "On average, the odds of reporting a positive result have increased by around 6% every year, showing a statistically highly significant trend," says the study.
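Those figures hang together arithmetically. A quick sanity check, using only the percentages and year span quoted above (a minimal sketch, not the study's own method):

```python
# The article reports 70.2% positive results in 1990-1991 and 85.9% in 2007,
# alongside a ~6% annual increase in the odds of a positive result.
# Converting the two proportions to odds and spreading the overall odds
# ratio across the 17 intervening years recovers roughly that growth rate.

def odds(p):
    """Convert a proportion to odds: p / (1 - p)."""
    return p / (1 - p)

odds_1990 = odds(0.702)            # about 2.36
odds_2007 = odds(0.859)            # about 6.09
overall_ratio = odds_2007 / odds_1990   # about 2.59

# Implied constant annual growth in the odds over 1990 -> 2007:
annual_growth = overall_ratio ** (1 / 17) - 1
print(f"{annual_growth:.1%}")      # prints about 5.7%, i.e. "around 6% every year"
```

The check confirms that the jump from 70.2% to 85.9% positive papers is consistent with the study's "around 6% every year" figure for the growth in the odds of a positive result.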

Japan produced the highest rate of positive results, followed by the United States. Some regions, such as Europe, and disciplines, such as Geosciences and Space Science, didn't show the positive-result increases:

"The average frequency of positive results was significantly higher when moving from the physical, to the biological to the social sciences, and in applied versus pure disciplines, all of which confirms previous findings. Space science had not only the lowest frequency of positive results overall, it was also the only discipline to show a slight decline in positive results over the years, together with Neuroscience & Behaviour," says the study.
Geosciences and Plant & Animal sciences showed a flat rate of positive vs. negative results since 1990.

The entire piece can be read here.