Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, January 2, 2024

Three Ways to Tell If Research Is Bunk

Arthur C. Brooks
The Atlantic
Originally posted 30 Nov 23

Here is an excerpt:

I follow three basic rules.

1. If it seems too good to be true, it probably is.

Over the past few years, three social scientists—Uri Simonsohn, Leif Nelson, and Joseph Simmons—have become famous for their sleuthing to uncover false or faked research results. To make the point that many apparently “legitimate” findings are untrustworthy, they tortured one particular data set until it showed the obviously impossible result that listening to the Beatles song “When I’m Sixty-Four” could literally make you younger.

So if a behavioral result is extremely unusual, I’m suspicious. If it is implausible or runs contrary to common sense, I steer clear of the finding entirely because the risk that it is false is too great. I like to subject behavioral science to what I call the “grandparent test”: Imagine describing the result to your worldly-wise older relative, and getting their response. (“Hey, Grandma, I found a cool new study showing that infidelity leads to happier marriages. What do you think?”)

2. Let ideas age a bit.

I tend to trust a sweet spot for how recent a particular research finding is. A study published more than 20 years ago is usually too old to reflect current social circumstances. But if a finding is too new, it may have so far escaped sufficient scrutiny—and been neither replicated nor shredded by other scholars. Occasionally, a brand-new paper strikes me as so well executed and sensible that it is worth citing to make a point, and I use it, but I am generally more comfortable with new-ish studies that are part of a broader pattern of results in an area I am studying. I keep a file (my “wine cellar”) of very recent studies that I trust but that I want to age a bit before using for a column.

3. Useful beats clever.

The perverse incentive is not limited to the academy. A lot of science journalism values novelty over utility, reporting on studies that turn out to be more likely to fail when someone tries to replicate them. As well as leading to confusion, this misunderstands the point of behavioral science, which is to provide not edutainment but insights that can improve well-being.

I rarely write a column because I find an interesting study. Instead, I come across an interesting topic or idea and write about that. Then I go looking for answers based on a variety of research and evidence. That gives me a bias—for useful studies over clever ones.

Beyond checking the methods, data, and design of studies, I feel that these three rules work pretty well in a world of imperfect research. In fact, they go beyond how I do my work; they actually help guide how I live.

In life, we’re constantly beset by fads and hacks—new ways to act and think and be, shortcuts to the things we want. Whether in politics, love, faith, or fitness, the equivalent of some hot new study with counterintuitive findings is always demanding that we throw out the old ways and accept the latest wisdom.


Here is my summary:

This article offers three rules of thumb for spotting unreliable behavioral-science research. First, if a finding seems too good to be true, it probably is: results that are implausible or run contrary to common sense carry a high risk of being false, as illustrated by the researchers who deliberately "tortured" a data set until it showed that listening to a Beatles song could make people younger. Second, let ideas age a bit: very new studies may not yet have been replicated or scrutinized, while studies more than about 20 years old may no longer reflect current social circumstances, so the most trustworthy findings are newer studies that fit a broader pattern of results. Third, useful beats clever: novel, counterintuitive studies attract attention but are more likely to fail replication, and the real point of behavioral science is to provide insights that improve well-being rather than edutainment. Brooks adds that these rules guide not only how he evaluates research but also how he responds to fads and shortcuts in everyday life.