Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, January 1, 2013

CLEANING UP SCIENCE

BY GARY MARCUS
The New Yorker
Originally published December 24, 2012


A lot of scientists have been busted recently for making up data and fudging statistics. One case involves a Harvard professor whom I once knew and worked with; another a Dutch social psychologist who made up results by the bushel. Medicine, too, has seen a rash of scientific foul play; perhaps most notably, the dubious idea that vaccines could cause autism appears to have been a hoax perpetrated by a scientific cheat. A blog called Retraction Watch publishes depressing notices, almost daily. One recent post mentioned that a peer-review site had been hacked; others detail misconduct in dentistry, cancer research, and neuroscience. And that’s just in the last week.

Even if cases of scientific fraud and misconduct were simply ignored, my field (and several other fields of science, including medicine) would still be in turmoil. One recent examination of fifty-three medical studies found that further research was unable to replicate forty-seven of them. All too often, scientists muck about with pilot studies, and keep tweaking something until they get the result they were hoping to achieve. Unfortunately, each fresh effort increases the risk of getting the right result for the wrong reason, and winding up with a spurious vision of something that doesn’t turn out to be scientifically robust, like a cancer drug that seems to work in trials but fails to work in the real world.

How on Earth are we going to do better? Here are six suggestions, drawn mainly from a just-published special issue of the journal Perspectives on Psychological Science. Two dozen articles offer valuable lessons not only for psychology, but for all consumers and producers of experimental science, from physics to neuroscience to medicine.

Restructure the incentives in science. For many reasons, science has become a race for the swift, but not necessarily the careful. Grants, tenure, and publishing all depend on flashy, surprising results. It is difficult to publish a study that merely replicates a predecessor, and it’s difficult to get tenure (or grants, or a first faculty job) without publications in elite journals. From the time young scientists start a Ph.D. to the time they’re up for tenure is typically thirteen years (or more), at the end of which the no-longer-young apprentices might find themselves out of a job. It is perhaps, in hindsight, small wonder that some wind up cutting corners. Instead of, for example, rewarding scientists largely for the number of papers they publish—which credits quick, sloppy results that might not be reliable—we might reward scientists to a greater degree for producing solid, trustworthy research that other people are able to successfully replicate and then extend.

The entire article is here.