Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Reproducibility.

Friday, January 31, 2020

Most scientists 'can't replicate studies by their peers'

Tom Feilden
BBC.com
Originally posted February 22, 2017

Here is an excerpt:

The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.

Sadly nothing, it seems, could be further from the truth.

After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.

Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.

"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.

Concern over the reliability of the results published in scientific literature has been growing for some time.

According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.

Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.

"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."

The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.

The info is here.

Friday, January 24, 2020

Psychology accused of ‘collective self-deception’ over results

Jack Grove
Times Higher Education
Originally published December 10, 2019

Here is an excerpt:

If psychologists are serious about doing research that could make “useful real-world predictions”, rather than conducting highly contextualised studies, they should use “much larger and more complex datasets, experimental designs and statistical models”, Dr Yarkoni advises.

He also suggests that the “sweeping claims” made by many papers bear little relation to their results, maintaining that a “huge proportion of the quantitative inferences drawn in the published psychology literature are so inductively weak as to be at best questionable and at worst utterly insensible”.

Many psychologists were indulging in a “collective self-deception” and should start “acknowledging the fundamentally qualitative nature of their work”, he says, stating that “a good deal of what currently passes for empirical psychology is already best understood as insightful qualitative analysis dressed up as shoddy quantitative science”.

That would mean no longer including “scientific-looking inferential statistics” within papers, whose appearance could be considered an “elaborate rhetorical ruse used to mathematicise people into believing claims they would otherwise find logically unsound”.

The info is here.

Monday, April 24, 2017

How Flawed Science Is Undermining Good Medicine

Morning Edition
NPR.org
Originally posted April 6, 2017

Here is an excerpt:

A surprising medical finding caught the eye of NPR's veteran science correspondent Richard Harris in 2014. A scientist from the drug company Amgen had reviewed the results of 53 studies that were originally thought to be highly promising — findings likely to lead to important new drugs. But when the Amgen scientist tried to replicate those promising results, in most cases he couldn't.

"He tried to reproduce them all," Harris tells Morning Edition host David Greene. "And of those 53, he found he could only reproduce six."

That was "a real eye-opener," says Harris, whose new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions explores the ways even some talented scientists go wrong — pushed by tight funding, competition and other constraints to move too quickly and sloppily to produce useful results.

"A lot of what everybody has reported about medical research in the last few years is actually wrong," Harris says. "It seemed right at the time but has not stood up to the test of time."

The impact of weak biomedical research can be especially devastating, Harris learned, as he talked to doctors and patients. And some prominent scientists he interviewed told him they agree that it's time to recognize the dysfunction in the system and fix it.

The article is here.

Saturday, March 14, 2015

What pushes scientists to lie? The disturbing but familiar story of Haruko Obokata

By John Rasko and Carl Power
The Guardian
Originally posted February 18, 2015

Here is an excerpt:

Two obvious reasons spring to mind. First, unbelievable carelessness. Obokata drew suspicion upon her Nature papers by the inept way she manipulated images and plagiarised text. It is often easy to spot such transgressions, and the top science journals are supposed to check for them; but it is also easy enough to hide them. Nature’s editors are scratching their heads wondering how they let themselves be fooled by Obokata’s clumsy tricks. However, we are more surprised that she didn’t try harder to cover her tracks, especially since her whole career was at stake.

Second, hubris. If Obokata hadn’t tried to be a world-beater, chances are her sleights of hand would have gone unnoticed and she would still be looking forward to a long and happy career in science. Experiments usually escape the test of reproducibility unless they prove something particularly important, controversial or commercialisable. Stap cells tick all three of these boxes. Because Obokata claimed such a revolutionary discovery, everyone wanted to know exactly how she had done it and how they could do it themselves. By stepping into the limelight, she exposed her work to greater scrutiny than it could bear.

The entire article is here.

Friday, July 11, 2014

Replication Crisis in Psychology Research Turns Ugly and Odd

By Tom Bartlett
The Chronicle of Higher Education
Originally published June 23, 2014

Another salvo was fired recently in what's become known...as "repligate."

In a blog post published last week, Timothy D. Wilson, a professor of psychology at the University of Virginia and the author of Redirect: The Surprising New Science of Psychological Change, declared that "the field has become preoccupied with prevention and error detection--negative psychology--at the expense of exploration and discovery."

The evidence that psychology is beset with false positives is weak, according to Mr. Wilson, and he pointed instead to the danger of inept replications that serve only to damage "the reputation of the original researcher and the progression of science."

While he called for finding common ground, Mr. Wilson pretty firmly sided with those who fear that psychology's growing replication movement, which aims to challenge what some critics see as a tsunami of suspicious science, is more destructive than corrective.

Still, Mr. Wilson was polite. Daniel Gilbert, less so.

The entire article is here.

Thursday, May 15, 2014

The Reformation: Can Social Scientists Save Themselves?

By Jerry Adler
Pacific Standard: The Science of Society
Originally posted April 28, 2014

Here are two excerpts from a long, yet exceptional, article on research in the social sciences:

OUTRIGHT FAKERY IS CLEARLY more common in psychology and other sciences than we’d like to believe. But it may not be the biggest threat to their credibility. As the journalist Michael Kinsley once said of wrongdoing in Washington, so too in the lab: “The scandal is what’s legal.” The kind of manipulation that went into the “When I’m Sixty-Four” paper, for instance, is “nearly universally common,” Simonsohn says. It is called “p-hacking,” or, more colorfully, “torturing the data until it confesses.”

P is a central concept in statistics: It’s the mathematical factor that mediates between what happens in the laboratory and what happens in the real world. The most common form of statistical analysis proceeds by a kind of backwards logic: Technically, the researcher is trying to disprove the “null hypothesis,” the assumption that the condition under investigation actually makes no difference.

(cut)

WHILE IT IS POSSIBLE to detect suspicious patterns in scientific data from a distance, the surest way to find out whether a study’s findings are sound is to do the study all over again. The idea that experiments should be replicable, producing the same results when run under the same conditions, was identified as a defining feature of science by Roger Bacon back in the 13th century. But the replication of previously published results has rarely been a high priority for scientists, who tend to regard it as grunt work. Journal editors yawn at replications. Honors and advancement in science go to those who publish new, startling results, not to those who confirm—or disconfirm—old ones.
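
The "torturing" Simonsohn describes is easy to simulate. Below is a minimal, hypothetical Python sketch (not from the article): both groups are drawn from the same distribution, so the null hypothesis is true, yet re-running the test after each new pair of observations and stopping at the first p below 0.05 yields "significant" findings far more often than the nominal 5% error rate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def p_hack_once(start_n=20, max_n=200, alpha=0.05):
        # Both groups come from the same distribution: any "effect" is pure noise.
        a = list(rng.normal(size=start_n))
        b = list(rng.normal(size=start_n))
        while len(a) < max_n:
            if stats.ttest_ind(a, b).pvalue < alpha:
                return True  # stop and "publish" as soon as the test looks significant
            a.append(rng.normal())
            b.append(rng.normal())
        return False

    runs = 1000
    hits = sum(p_hack_once() for _ in range(runs))
    print(f"False-positive rate with optional stopping: {hits / runs:.0%}")  # well above 5%

The exact number depends on how often one peeks, but the point stands: flexible stopping rules alone, with no fabrication at all, inflate the error rate the p value is supposed to cap.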

The entire article is here.

Sunday, March 2, 2014

Scientific method: Statistical errors

P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.

By Regina Nuzzo
Nature
Originally published February 12, 2014

For a brief moment in 2010, Matt Motyl was on the brink of scientific glory: he had discovered that extremists quite literally see the world in black and white.

The results were “plain as day”, recalls Motyl, a psychology PhD student at the University of Virginia in Charlottesville. Data from a study of nearly 2,000 people seemed to show that political moderates saw shades of grey more accurately than did either left-wing or right-wing extremists. “The hypothesis was sexy,” he says, “and the data provided clear support.” The P value, a common index for the strength of evidence, was 0.01 — usually interpreted as 'very significant'. Publication in a high-impact journal seemed within Motyl's grasp.

But then reality intervened. Sensitive to controversies over reproducibility, Motyl and his adviser, Brian Nosek, decided to replicate the study. With extra data, the P value came out as 0.59 — not even close to the conventional level of significance, 0.05. The effect had disappeared, and with it, Motyl's dreams of youthful fame.
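
Motyl's vanishing effect is consistent with how unstable p values are from sample to sample. A small hypothetical simulation (not Motyl's data or design) makes the point: with a modest true effect and 50 participants per group, repeated runs of the identical study return p values that scatter widely, often from below 0.05 to well above it.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def one_study(n=50, effect=0.3):
        # The same design every time: a modest true difference of 0.3 standard deviations.
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        return stats.ttest_ind(a, b).pvalue

    p_values = [one_study() for _ in range(10)]
    print([round(p, 3) for p in p_values])  # typically spans from below 0.05 to above 0.5

With that much spread, a single p of 0.01 says far less about whether a result will replicate than it appears to.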

The entire article is here.

Monday, December 16, 2013

It's time for psychologists to put their house in order

By Keith Laws
The Guardian
Originally published February 27, 2013

Here is an excerpt:

Psychologists find significant statistical support for their hypotheses more frequently than any other science, and this is not a new phenomenon. More than 30 years ago, it was reported that psychology researchers are eight times as likely to submit manuscripts for publication when the results are positive rather than negative.

Unpublished, "failed" replications and negative findings stay in the file-drawer and therefore remain unknown to future investigators, who may independently replicate the null-finding (each also unpublished) - until by chance, a spuriously significant effect turns up.

It is this study that is published. Such findings typically emerge with large effect sizes (usually being tested with small samples), and then shrivel as time passes and replications fail to document the purported phenomenon. If the unreliability of the effect is eventually recognised, it occurs with little fanfare.
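
The file-drawer dynamic Laws describes can be sketched in a few lines of Python. In this hypothetical simulation (an illustration, not a reanalysis of any literature), many small studies of a genuinely tiny effect are run, only the positive and statistically significant ones are "published", and the published effect sizes come out several times larger than the truth.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true_d = 0.1                    # assumed tiny true effect (Cohen's d)
    n_per_group, n_labs = 20, 5000  # many labs running small studies

    published = []
    for _ in range(n_labs):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_d, 1.0, n_per_group)
        res = stats.ttest_ind(b, a)
        if res.pvalue < 0.05 and res.statistic > 0:  # only positive, significant results escape the file drawer
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            published.append((b.mean() - a.mean()) / pooled_sd)

    print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")  # markedly inflated

The published estimates then "shrivel" exactly as described once larger replications report back.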

The entire story is here.