Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label p hacking.

Monday, September 7, 2015

Science Isn’t Broken

By Christie Aschwanden
FiveThirtyEight Science
Originally published August 19, 2015

If you follow the headlines, your confidence in science may have taken a hit lately.

Peer review? More like self-review. An investigation in November uncovered a scam in which researchers were rubber-stamping their own work, circumventing peer review at five high-profile publishers.

Scientific journals? Not exactly a badge of legitimacy, given that the International Journal of Advanced Computer Technology recently accepted for publication a paper titled “Get Me Off Your Fucking Mailing List,” whose text was nothing more than those seven words, repeated over and over for 10 pages. Two other journals allowed an engineer posing as Maggie Simpson and Edna Krabappel to publish a paper, “Fuzzy, Homogeneous Configurations.”

The entire article is here.

Friday, December 26, 2014

Science, Trust And Psychology In Crisis

By Tania Lombrozo
NPR
Originally published June 2, 2014

Here is an excerpt:

Researchers who engage in p-diligence are those who engage in practices — such as additional analyses or even experiments — designed to evaluate the robustness of their results, whether or not these practices make it into print. They might, for example, analyze their data with different exclusion criteria — not to choose the criterion that makes some effect most dramatic but to make sure that any claims in the paper don't depend on this potentially arbitrary decision. They might analyze the data using two statistical methods — not to choose the single one that yields a significant result but to make sure that they both do. They might build in checks for various types of human errors and analyze uninteresting aspects of the data to make sure there's nothing weird going on, like a bug in their code.
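For readers who want to see what such a robustness check can look like in practice, here is a minimal sketch in Python using synthetic data and scipy. The reaction-time numbers, the exclusion thresholds, and the choice of tests are purely illustrative assumptions, not anything drawn from Lombrozo's piece.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic reaction-time data for two conditions (hypothetical numbers).
control = rng.normal(500, 80, 200)
treatment = rng.normal(520, 80, 200)

def exclude_outliers(x, sd_cutoff):
    """Drop observations more than sd_cutoff standard deviations from the mean."""
    z = np.abs((x - x.mean()) / x.std())
    return x[z < sd_cutoff]

# Run the same comparison under two plausible exclusion criteria and two
# statistical tests, then report everything rather than the single most
# flattering combination.
for cutoff in (2.5, 3.0):
    a = exclude_outliers(control, cutoff)
    b = exclude_outliers(treatment, cutoff)
    t_p = stats.ttest_ind(a, b, equal_var=False).pvalue  # parametric (Welch) test
    u_p = stats.mannwhitneyu(a, b).pvalue                # non-parametric test
    print(f"cutoff={cutoff} SD: Welch t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")
```

The point of the sketch is the reporting habit, not the particular tests: a claim described as robust should survive every reasonable combination of these arbitrary analysis choices, and the reader should get to see that it does.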

If these additional data or analyses reveal anything problematic, p-diligent researchers will temper their claims appropriately, or pursue further investigation as needed. And they'll engage in these practices with an eye toward avoiding potential pitfalls, such as confirmation bias and the seductions of p-hacking, that could lead to systematic errors. In other words, they'll "do their p-diligence" to make sure that they — and others — should invest in their claims.

P-hacking and p-diligence have something in common: Both involve practices that aren't fully reported in publication. As a consequence, they widen the gap. But let's face it: While the gap can (and sometimes should) be narrowed, it cannot be closed.

The entire article is here.

Thanks to Ed Zuckerman for this lead.

Thursday, May 15, 2014

The Reformation: Can Social Scientists Save Themselves?

By Jerry Adler
Pacific Standard: The Science of Society
Originally posted April 28, 2014

Here are two excerpts from a long, yet exceptional, article on research in the social sciences:

OUTRIGHT FAKERY IS CLEARLY more common in psychology and other sciences than we’d like to believe. But it may not be the biggest threat to their credibility. As the journalist Michael Kinsley once said of wrongdoing in Washington, so too in the lab: “The scandal is what’s legal.” The kind of manipulation that went into the “When I’m Sixty-Four” paper, for instance, is “nearly universally common,” Simonsohn says. It is called “p-hacking,” or, more colorfully, “torturing the data until it confesses.”

P is a central concept in statistics: It’s the mathematical factor that mediates between what happens in the laboratory and what happens in the real world. The most common form of statistical analysis proceeds by a kind of backwards logic: Technically, the researcher is trying to disprove the “null hypothesis,” the assumption that the condition under investigation actually makes no difference.
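To make that backwards logic concrete, here is a small, purely illustrative Python simulation: it runs one honest test of a null hypothesis, then shows why the p-hacking described above works, since trying twenty arbitrary analyses on data with no real effect will usually turn up at least one "significant" p-value. The numbers and scipy calls are assumptions for the sketch, not anything taken from Adler's article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One honest test under the null hypothesis: both groups come from the
# same distribution, so any apparent difference is noise.
a = rng.normal(0, 1, 50)
b = rng.normal(0, 1, 50)
print("single honest test, p =", stats.ttest_ind(a, b).pvalue)

# Now "torture the data": for each simulated study, try 20 arbitrary analyses
# of null data and keep only the smallest p-value, as a p-hacker implicitly does.
studies = 2000
hits = 0
for _ in range(studies):
    data = rng.normal(0, 1, (20, 2, 30))  # 20 candidate analyses, 2 groups of 30
    best_p = min(stats.ttest_ind(g[0], g[1]).pvalue for g in data)
    hits += best_p < 0.05
print(f"studies with at least one 'significant' result: {hits / studies:.0%}")
# With a 5% false-positive rate per test, roughly 64% of these no-effect
# "studies" produce something that looks publishable when the analysis is
# chosen after seeing the results.
```

Nothing in the simulation involves fraud; every individual test is performed correctly. The damage comes entirely from choosing which analysis to report after the fact, which is exactly why the practice is "nearly universally common" yet so corrosive.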

(cut)

WHILE IT IS POSSIBLE to detect suspicious patterns in scientific data from a distance, the surest way to find out whether a study’s findings are sound is to do the study all over again. The idea that experiments should be replicable, producing the same results when run under the same conditions, was identified as a defining feature of science by Roger Bacon back in the 13th century. But the replication of previously published results has rarely been a high priority for scientists, who tend to regard it as grunt work. Journal editors yawn at replications. Honors and advancement in science go to those who publish new, startling results, not to those who confirm—or disconfirm—old ones.

The entire article is here.