Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Methodology.

Tuesday, January 2, 2024

Three Ways to Tell If Research Is Bunk

Arthur C. Brooks
The Atlantic
Originally posted November 30, 2023

Here is an excerpt:

I follow three basic rules.

1. If it seems too good to be true, it probably is.

Over the past few years, three social scientists—Uri Simonsohn, Leif Nelson, and Joseph Simmons—have become famous for their sleuthing to uncover false or faked research results. To make the point that many apparently “legitimate” findings are untrustworthy, they tortured one particular data set until it showed the obviously impossible result that listening to the Beatles song “When I’m Sixty-Four” could literally make you younger.

So if a behavioral result is extremely unusual, I’m suspicious. If it is implausible or runs contrary to common sense, I steer clear of the finding entirely because the risk that it is false is too great. I like to subject behavioral science to what I call the “grandparent test”: Imagine describing the result to your worldly-wise older relative, and getting their response. (“Hey, Grandma, I found a cool new study showing that infidelity leads to happier marriages. What do you think?”)

2. Let ideas age a bit.

I tend to trust a sweet spot for how recent a particular research finding is. A study published more than 20 years ago is usually too old to reflect current social circumstances. But if a finding is too new, it may have so far escaped sufficient scrutiny—and been neither replicated nor shredded by other scholars. Occasionally, a brand-new paper strikes me as so well executed and sensible that it is worth citing to make a point, and I use it, but I am generally more comfortable with new-ish studies that are part of a broader pattern of results in an area I am studying. I keep a file (my “wine cellar”) of very recent studies that I trust but that I want to age a bit before using for a column.

3. Useful beats clever.

The perverse incentive is not limited to the academy. A lot of science journalism values novelty over utility, reporting on studies that turn out to be more likely to fail when someone tries to replicate them. As well as leading to confusion, this misunderstands the point of behavioral science, which is to provide not edutainment but insights that can improve well-being.

I rarely write a column because I find an interesting study. Instead, I come across an interesting topic or idea and write about that. Then I go looking for answers based on a variety of research and evidence. That gives me a bias—for useful studies over clever ones.

Beyond checking the methods, data, and design of studies, I feel that these three rules work pretty well in a world of imperfect research. In fact, they go beyond how I do my work; they actually help guide how I live.

In life, we’re constantly beset by fads and hacks—new ways to act and think and be, shortcuts to the things we want. Whether in politics, love, faith, or fitness, the equivalent of some hot new study with counterintuitive findings is always demanding that we throw out the old ways and accept the latest wisdom.


Here is my summary:

This article offers three rules of thumb for spotting unreliable research. First, if a finding seems too good to be true, it probably is: a result that is implausible or contrary to common sense carries too great a risk of being false. Second, let ideas age a bit: very recent findings have not yet been replicated or scrutinized, while studies more than two decades old may no longer reflect current social circumstances, so the sweet spot is new-ish work that fits a broader pattern of results. Third, useful beats clever: findings reported for their novelty are more likely to fail replication, so favor research that can actually improve well-being over research that merely surprises.

Saturday, December 30, 2023

The ethics of doing human enhancement ethics

Rueda, J. (2023). 
Futures, 153, 103236.

Abstract

Human enhancement is one of the leading research topics in contemporary applied ethics. Interestingly, the widespread attention to the ethical aspects of future enhancement applications has generated misgivings. Are researchers who spend their time investigating the ethics of futuristic human enhancement scenarios acting in an ethically suboptimal manner? Are the methods they use to analyze future technological developments appropriate? Are institutions wasting resources by funding such research? In this article, I address the ethics of doing human enhancement ethics, focusing on two main concerns. The Methodological Problem refers to the question of how we should methodologically address the moral aspects of future enhancement applications. The Normative Problem refers to the question of what normatively justifies investigating and funding research on the ethical aspects of future human enhancement. This article aims to give a satisfactory response to both meta-questions in order to ethically justify the inquiry into the ethical aspects of emerging enhancement technologies.

Highlights

• Formulates second-order problems neglected in the literature on the ethics of future enhancement technologies.

• Discusses speculative ethics and anticipatory ethics methodologies for analyzing emerging enhancement innovations.

• Evaluates the main objections to engaging in research into the ethical aspects of future scenarios of human enhancement.

• Shows that methodological and normative meta-questions are key to advancing the ethical debate on human enhancement.

Tuesday, May 7, 2019

Are Placebo-Controlled, Relapse Prevention Trials in Schizophrenia Research Still Necessary or Ethical?

Ryan E. Lawrence, Paul S. Appelbaum, Jeffrey A. Lieberman
JAMA Psychiatry. Published online April 10, 2019.
doi:10.1001/jamapsychiatry.2019.0275

Randomized, placebo-controlled trials have been the gold standard for evaluating the safety and efficacy of new psychotropic drugs for more than half a century. Although the US Food and Drug Administration (FDA) does not require placebo-controlled trial data to approve new drugs or marketing indications, they have become the industry standard for psychotropic drug development.

Placebos are controversial. The FDA guidelines state, “when a new treatment is tested for a condition for which no effective treatment is known, there is usually no ethical problem with a study comparing the new treatment to placebo.” However, “in cases where an available treatment is known to prevent serious harm, such as death or irreversible morbidity, it is generally inappropriate to use a placebo control.” When new antipsychotics are developed for schizophrenia, it can be debated which guideline applies.

From the Conclusion:

We believe the time has come to cease the use of placebo in relapse prevention studies and encourage the use of active comparators that would protect patients from relapse and provide information on the comparative effectiveness of the drugs studied. We recommend that pharmaceutical companies not seek maintenance labeling if it would require placebo-controlled, relapse prevention trials. However, for putative antipsychotics with a novel mechanism of action, placebo-controlled, relapse prevention trials may still be justifiable.

The info is here.

Wednesday, December 5, 2018

Toward a psychology of Homo sapiens: Making psychological science more representative of the human population

Mostafa Salari Rad, Alison Jane Martingano, and Jeremy Ginges
PNAS, November 6, 2018, 115(45), 11401-11405. https://doi.org/10.1073/pnas.1721165115

Abstract

Two primary goals of psychological science should be to understand what aspects of human psychology are universal and the way that context and culture produce variability. This requires that we take into account the importance of culture and context in the way that we write our papers and in the types of populations that we sample. However, most research published in our leading journals has relied on sampling WEIRD (Western, educated, industrialized, rich, and democratic) populations. One might expect that our scholarly work and editorial choices would by now reflect the knowledge that Western populations may not be representative of humans generally with respect to any given psychological phenomenon. However, as we show here, almost all research published by one of our leading journals, Psychological Science, relies on Western samples and uses these data in an unreflective way to make inferences about humans in general. To take us forward, we offer a set of concrete proposals for authors, journal editors, and reviewers that may lead to a psychological science that is more representative of the human condition.

Thursday, February 23, 2017

How To Spot A Fake Science News Story

Alex Berezow
American Council on Science and Health
Originally published January 31, 2017

Here is an excerpt:

How to Detect a Fake Science News Story

Often, I have been asked, "How can you tell if a science story isn't legitimate?" Here are some red flags:

1) The article is very similar to the press release on which it was based. (This suggests the piece is not science journalism but repackaged public relations.)

2) The article makes no attempt to explain methodology or avoids using any technical terminology. (This indicates the author may be incapable of understanding the original paper.)

3) The article does not indicate any limitations on the conclusions of the research. (For example, a study conducted entirely in mice cannot be used to draw firm conclusions about humans.)

4) The article treats established scientific facts and fringe ideas on equal terms.

5) The article is sensationalized; i.e., it draws huge, sweeping conclusions from a single study. (This is particularly common in stories on scary chemicals and miracle vegetables.)

6) The article fails to separate scientific evidence from science policy. Reasonable people should be able to agree on the former while debating the latter. (This arises from the fact that people subscribe to different values and priorities.)

The article is here.

Friday, January 20, 2017

Five Myths About the Role of Culture in Psychological Research

Qi Wang
Association for Psychological Science
Originally posted December 20, 2016

Here is an excerpt:

Twenty years of cultural research that my colleagues and I have done on the development of social cognition, including autobiographical memory, future thinking, the self, and emotion knowledge, illustrate how cultural psychological science can provide unique insights into psychological processes and further equip researchers with additional tools to understand human behavior.

There are five assumptions that often distract or discourage researchers from integrating cultural factors into their work, and I aim here to deconstruct them.

Assumption 1. Cultural Psychological Science Focuses Only on Finding Group Differences

This understanding of what cultural psychological science can do is far from being complete. In our research, my colleagues and I have learned how culturally prioritized self-goals guide autobiographical memory. Autonomous self-goals, prioritized in Western, particularly European American, cultures, motivate individuals to focus on and remember idiosyncratic details and subjective experiences that accentuate the individual. In contrast, relational self-goals like those prioritized in East Asian cultures motivate individuals to attend to and remember information about collective activities and significant others.

By experimentally manipulating self-goals of autonomy and relatedness, we are able to make European Americans recall socially oriented memories as East Asians usually do, and make East Asians recall self-focused memories as European Americans usually do. In a study I conducted with APS Fellow Michael A. Ross, University of Waterloo, Canada, we asked European American and Asian college students to describe themselves as either unique individuals (i.e., autonomous-self prime) or as members of social groups (i.e., relational-self prime). We then asked them to recall their earliest childhood memories. In both cultural groups, those whose autonomous self-goals were activated prior to the recall reported more self-focused memories, whereas those whose relational self-goals were made salient recalled more socially oriented memories.

The article is here.

Tuesday, November 1, 2016

The problem with p-values

David Colquhoun
aeon.co
Originally published October 11, 2016

Here is an excerpt:

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.
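
A quick way to see the problem Colquhoun is driving at is to simulate experiments in which the null hypothesis is true by construction. The sketch below is mine, not the article's; it assumes numpy and scipy are available. It runs thousands of two-group comparisons with no real effect and counts how often a t-test nonetheless returns p < 0.05.

```python
# Simulate two-group experiments where the null hypothesis is TRUE
# (no real effect) and count how often a t-test still gives p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group = 10_000, 30

p_values = np.empty(n_experiments)
for i in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)  # both groups drawn from
    b = rng.normal(0.0, 1.0, n_per_group)  # the same population
    p_values[i] = stats.ttest_ind(a, b).pvalue

# Under the null, p-values are uniformly distributed, so roughly 5%
# land below 0.05 by chance alone.
print(f"fraction with p < 0.05: {(p_values < 0.05).mean():.3f}")
```

That 5% is the rate of false alarms among null experiments; as Colquhoun goes on to argue, it is not the same thing as a 5% chance of being wrong when you declare a discovery.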

Sunday, October 30, 2016

The ethics of animal research: a survey of the public and scientists in North America

Ari R. Joffe, Meredith Bara, Natalie Anton and Nathan Nobis
BMC Medical Ethics, 2016

Background

To determine whether the public and scientists consider common arguments (and counterarguments) in support (or not) of animal research (AR) convincing.

Methods

After validation, the survey was sent to samples of the public (Sampling Survey International (SSI; Canadian), Amazon Mechanical Turk (AMT; US), a Canadian city festival and children’s hospital), medical students (two second-year classes), and scientists (corresponding authors, and academic pediatricians). We presented questions about common arguments (with their counterarguments) to justify the moral permissibility (or not) of AR. Responses were compared using Chi-square with Bonferroni correction.
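
For readers unfamiliar with the tests named in the Methods, here is a minimal sketch of a chi-square comparison with a Bonferroni correction. The counts, labels, and number of tests are invented for illustration; this is not the study's data or code, and it assumes scipy is available.

```python
# Compare response proportions across two respondent groups with
# chi-square tests, Bonferroni-correcting for multiple comparisons.
# All counts below are hypothetical.
from scipy.stats import chi2_contingency

# rows: public vs. medical students; columns: [convinced, not convinced]
tables = {
    "benefits argument":       [[700, 520], [155, 39]],
    "species-overlap counter": [[610, 610], [120, 74]],
}

alpha = 0.05 / len(tables)  # Bonferroni: divide alpha by the number of tests
for name, table in tables.items():
    chi2, p, dof, expected = chi2_contingency(table)
    verdict = "significant" if p < alpha else "not significant"
    print(f"{name}: chi2({dof}) = {chi2:.2f}, p = {p:.4g} -> {verdict}")
```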

Results

There were 1220 public [SSI, n = 586; AMT, n = 439; Festival, n = 195; Hospital n = 107], 194/331 (59 %) medical student, and 19/319 (6 %) scientist [too few to report] responses. Most public respondents were <45 years (65 %), had some College/University education (83 %), and had never done AR (92 %). Most public and medical student respondents considered ‘benefits arguments’ sufficient to justify AR; however, most acknowledged that counterarguments suggesting alternative research methods may be available, or that it is unclear why the same ‘benefits arguments’ do not apply to using humans in research, significantly weakened ‘benefits arguments’. Almost all were not convinced of the moral permissibility of AR by ‘characteristics of non-human-animals arguments’, including that non-human-animals are not sentient, or are property. Most were not convinced of the moral permissibility of AR by ‘human exceptionalism’ arguments, including that humans have more advanced mental abilities, are of a special ‘kind’, can enter social contracts, or face a ‘lifeboat situation’. Counterarguments explained much of this, including that not all humans have these more advanced abilities [‘argument from species overlap’], and that the notion of ‘kind’ is arbitrary [e.g., why are we not of the ‘kind’ ‘sentient-animal’ or ‘subject-of-a-life’?]. Medical students were more supportive (80 %) of AR at the end of the survey (p < 0.05).

Conclusions

Responses suggest that support for AR may not be based on cogent philosophical rationales, and more open debate is warranted.

Wednesday, October 12, 2016

Utilitarian preferences or action preferences? De-confounding action and moral code in sacrificial dilemmas

Damien L. Crone & Simon M. Laham
Personality and Individual Differences, Volume 104, January 2017, Pages 476-481

Abstract

A large literature in moral psychology investigates utilitarian versus deontological moral preferences using sacrificial dilemmas (e.g., the Trolley Problem) in which one can endorse harming one person for the greater good. The validity of sacrificial dilemma responses as indicators of one's preferred moral code is a neglected topic of study. One underexplored cause for concern is that standard sacrificial dilemmas confound the endorsement of specific moral codes with the endorsement of action such that endorsing utilitarianism always requires endorsing action. Two studies show that, after de-confounding these factors, the tendency to endorse action appears about as predictive of sacrificial dilemma responses as one's preference for a particular moral code, suggesting that, as commonly used, sacrificial dilemma responses are poor indicators of moral preferences. Interestingly however, de-confounding action and moral code may provide a more valid means of inferring one's preferred moral code.

The article is here.

Monday, September 7, 2015

How to Know Whether to Believe a Health Study

By Austin Frakt
The New York Times - The Upshot
Originally posted on August 17, 2015

Here is an excerpt:

Unfortunately, there’s no substitute for careful examination of studies by experts. Yet, if you’re not an expert, you can do a few simple things to become a more savvy consumer of research. First, if the study examined the effects of a therapy only on animals or in a test tube, we have very limited insight into how it will actually work in humans. You should take any claims about effects on people with more than a grain of salt. Next, for studies involving humans, ask yourself: What method did the researchers use? How similar am I to the people it examined?

Sure, there are many other important questions to ask about a study — for instance, did it examine harms as well as benefits? But just assessing the basis for what researchers call “causal claims” — X leads to or causes Y — and how similar you are to study subjects will go a long way toward unlocking its credibility and relevance to you.

The entire article is here.

Friday, December 26, 2014

Science, Trust And Psychology In Crisis

By Tania Lombrozo
NPR
Originally published June 2, 2014

Here is an excerpt:

Researchers who engage in p-diligence are those who engage in practices — such as additional analyses or even experiments — designed to evaluate the robustness of their results, whether or not these practices make it into print. They might, for example, analyze their data with different exclusion criteria — not to choose the criterion that makes some effect most dramatic but to make sure that any claims in the paper don't depend on this potentially arbitrary decision. They might analyze the data using two statistical methods — not to choose the single one that yields a significant result but to make sure that they both do. They might build in checks for various types of human errors and analyze uninteresting aspects of the data to make sure there's nothing weird going on, like a bug in their code.

If these additional data or analyses reveal anything problematic, p-diligent researchers will temper their claims appropriately, or pursue further investigation as needed. And they'll engage in these practices with an eye toward avoiding potential pitfalls, such as confirmation bias and the seductions of p-hacking, that could lead to systematic errors. In other words, they'll "do their p-diligence" to make sure that they — and others — should invest in their claims.
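
To make Lombrozo's description concrete, a p-diligent analyst might script the robustness checks directly, for instance re-running one comparison under several outlier-exclusion rules and with two different tests. The sketch below uses simulated data and invented cutoffs; it assumes numpy and scipy and is not from the article.

```python
# Re-run the same comparison under several exclusion criteria and with
# two tests, to check the conclusion doesn't hinge on either choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(0.4, 1.0, 80)  # simulated treatment-group scores
control = rng.normal(0.0, 1.0, 80)    # simulated control-group scores

for cutoff in (np.inf, 3.0, 2.5):     # exclusion rule: none, |z| < 3, |z| < 2.5
    t = treatment[np.abs(stats.zscore(treatment)) < cutoff]
    c = control[np.abs(stats.zscore(control)) < cutoff]
    p_param = stats.ttest_ind(t, c).pvalue        # parametric test
    p_nonparam = stats.mannwhitneyu(t, c).pvalue  # nonparametric test
    print(f"cutoff={cutoff}: t-test p={p_param:.4f}, "
          f"Mann-Whitney p={p_nonparam:.4f}")
```

If significance appears or disappears across the rows, that is precisely the signal to temper the claims or investigate further.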

P-hacking and p-diligence have something in common: Both involve practices that aren't fully reported in publication. As a consequence, they widen the gap. But let's face it: While the gap can (and sometimes should) be narrowed, it cannot be closed.

The entire article is here.

Thanks to Ed Zuckerman for this lead.

Saturday, October 25, 2014

Advances in Experimental Moral Psychology

Hagop Sarkissian and Jennifer Cole Wright (eds.), Advances in Experimental Moral Psychology, Bloomsbury, 2014, 256pp., $112.00 (hbk), ISBN 9781472509383.

Reviewed by Jesse S. Summers, Duke University
Notre Dame Philosophical Reviews

The distinction between moral psychology and moral philosophy has never been a clear one. Observations about what humans are like play an indispensable role in understanding our moral obligations and virtues, and great swaths of moral philosophy until the 19th century are psychology avant la lettre: empirical speculations about how we form moral judgments, about mental faculties and rationality, pleasure, pain, and character. This relationship between philosophy and psychology becomes both opaque and strained once experimental psychology develops into its own academic discipline. Nevertheless, many contemporary moral debates -- like those surrounding moral character and moral motivation -- are clearly aware of and are sometimes in response to findings of empirical psychology. Experimental psychology is again leaching into the philosophical water.

It is a credit to Hagop Sarkissian and Jennifer Cole Wright that the research they assembled adds further nutrients to the soil. This collection is not a survey of empirical moral psychology. It instead pushes into debates whose philosophical implications have yet to be widely considered. As a result, the collection's primary audience is anyone already interested in moral psychology understood broadly, and it will need no further recommendation to those with empirical interests in the topic.

The entire book review is here.

Monday, December 16, 2013

It's time for psychologists to put their house in order

By Keith Laws
The Guardian
Originally published February 27, 2013

Here is an excerpt:

Psychologists find significant statistical support for their hypotheses more frequently than any other science, and this is not a new phenomenon. More than 30 years ago, it was reported that psychology researchers are eight times as likely to submit manuscripts for publication when the results are positive rather than negative.

Unpublished, "failed" replications and negative findings stay in the file-drawer and therefore remain unknown to future investigators, who may independently replicate the null-finding (each also unpublished) - until by chance, a spuriously significant effect turns up.

It is this study that is published. Such findings typically emerge with large effect sizes (usually being tested with small samples), and then shrivel as time passes and replications fail to document the purported phenomenon. If the unreliability of the effect is eventually recognised, it occurs with little fanfare.
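
The dynamic Laws describes, early effects inflated by small samples that shrivel as replications accumulate, is easy to reproduce in a toy model. The simulation below is my sketch, not the article's, and assumes numpy and scipy: it "publishes" only the significant studies of a small true effect and compares the published average with the truth.

```python
# File-drawer simulation: publish only significant studies of a small
# true effect and watch the published effect sizes overestimate it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, n = 0.15, 20          # small true effect, small samples
all_effects, published = [], []

for _ in range(5_000):
    a = rng.normal(true_effect, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    d = a.mean() - b.mean()        # observed effect (sd = 1, roughly Cohen's d)
    all_effects.append(d)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        published.append(d)        # the file drawer keeps the rest

print(f"true effect:               {true_effect}")
print(f"mean across ALL studies:   {np.mean(all_effects):.3f}")
print(f"mean of PUBLISHED studies: {np.mean(published):.3f}")  # inflated
```

The published literature overstates the effect simply because the nonsignificant estimates never leave the file drawer.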

The entire story is here.

Monday, December 2, 2013

The Pervasive Problem With Placebos in Psychology

By Walter R. Boot, Daniel J. Simons, Cary Stothart, and Cassie Stutts
Perspectives on Psychological Science, July 2013, 8(4), 445-454. doi: 10.1177/1745691613491271

Abstract

To draw causal conclusions about the efficacy of a psychological intervention, researchers must compare the treatment condition with a control group that accounts for improvements caused by factors other than the treatment. Using an active control helps to control for the possibility that improvement by the experimental group resulted from a placebo effect. Although active control groups are superior to “no-contact” controls, only when the active control group has the same expectation of improvement as the experimental group can we attribute differential improvements to the potency of the treatment. Despite the need to match expectations between treatment and control groups, almost no psychological interventions do so. This failure to control for expectations is not a minor omission—it is a fundamental design flaw that potentially undermines any causal inference. We illustrate these principles with a detailed example from the video-game-training literature showing how the use of an active control group does not eliminate expectation differences. The problem permeates other interventions as well, including those targeting mental health, cognition, and educational achievement. Fortunately, measuring expectations and adopting alternative experimental designs makes it possible to control for placebo effects, thereby increasing confidence in the causal efficacy of psychological interventions.
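
To see why unmatched expectations undermine causal inference, consider a toy simulation (my construction, not the authors'; it assumes numpy and scipy). The treatment is made inert, but the experimental group expects more improvement than the active control group, and the standard comparison still comes out "significant".

```python
# An inert treatment plus unmatched expectations yields a spurious
# "treatment effect" in a standard two-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
true_treatment_effect = 0.0                     # the intervention does nothing
expect_experimental, expect_control = 0.4, 0.1  # unequal placebo components

improvement_exp = rng.normal(true_treatment_effect + expect_experimental, 1.0, n)
improvement_ctl = rng.normal(expect_control, 1.0, n)

t, p = stats.ttest_ind(improvement_exp, improvement_ctl)
print(f"t = {t:.2f}, p = {p:.4f}")  # often significant despite a null treatment
```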

The entire article is here.

Tuesday, August 13, 2013

Social brains on drugs: tools for neuromodulation in social neuroscience

Molly J. Crockett & Ernst Fehr
Soc Cogn Affect Neurosci (2013)
doi: 10.1093/scan/nst113
First published online: July 24, 2013

Abstract

Neuromodulators such as serotonin, oxytocin, and testosterone play an important role in social behavior. Studies examining the effects of these neuromodulators and others on social cognition and behavior, and their neural underpinnings, are becoming increasingly common. Here, we provide an overview of methodological considerations for those wishing to evaluate or conduct empirical studies of neuromodulation in social neuroscience.

The entire research article is here.

Thanks to Molly Crockett for making this available.

Friday, July 26, 2013

Low Hopes, High Expectations: Expectancy Effects and the Replicability of Behavioral Experiments

By Olivier Klein and others
Perspectives on Psychological Science, 7(6), 572–584. DOI: 10.1177/1745691612463704

This article revisits two classical issues in experimental methodology: experimenter bias and demand characteristics. We report a content analysis of the method section of experiments reported in two psychology journals (Psychological Science and the Journal of Personality and Social Psychology), focusing on aspects of the procedure associated with these two phenomena, such as mention of the presence of the experimenter, suspicion probing, and handling of deception. We note that such information is very often absent, which prevents observers from gauging the extent to which such factors influence the results. We consider the reasons that may explain this omission, including the automatization of psychology experiments, the evolution of research topics, and, most important, a view of research participants as passive receptacles of stimuli. Using a situated social cognition perspective, we emphasize the importance of integrating the social context of experiments in the explanation of psychological phenomena. We illustrate this argument via a controversy on stereotype-based behavioral priming effects.

The entire article is here.

Sunday, May 13, 2012

Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling

*Psychological Science* has scheduled an article for publication in a future issue of the journal: "Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling."

The authors are Leslie K. John of Harvard University, George Loewenstein of Carnegie Mellon University, & Drazen Prelec of the Massachusetts Institute of Technology.

Here is the abstract:
Cases of clear scientific misconduct have received significant media attention recently, but less flagrantly questionable research practices may be more prevalent and, ultimately, more damaging to the academic enterprise. Using an anonymous elicitation format supplemented by incentives for honest reporting, we surveyed over 2,000 psychologists about their involvement in questionable research practices. The impact of truth-telling incentives on self-admissions of questionable research practices was positive, and this impact was greater for practices that respondents judged to be less defensible. Combining three different estimation methods, we found that the percentage of respondents who have engaged in questionable practices was surprisingly high. This finding suggests that some questionable practices may constitute the prevailing research norm.

Here's how the article starts:

Although cases of overt scientific misconduct have received significant media attention recently (Altman, 2006; Deer, 2011; Steneck, 2002, 2006), exploitation of the gray area of acceptable practice is certainly much more prevalent, and may be more damaging to the academic enterprise in the long run, than outright fraud.

Questionable research practices (QRPs), such as excluding data points on the basis of post hoc criteria, can spuriously increase the likelihood of finding evidence in support of a hypothesis.

Just how dramatic these effects can be was demonstrated by Simmons, Nelson, and Simonsohn (2011) in a series of experiments and simulations that showed how greatly QRPs increase the likelihood of finding support for a false hypothesis.
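
That demonstration is easy to echo in miniature. The simulation below is my own sketch, not Simmons et al.'s code, and its exclusion rule is invented: each null experiment gets a second chance, dropping each group's most extreme observation post hoc and retesting when the first test fails. The nominal 5% false-positive rate quietly climbs.

```python
# One QRP in miniature: excluding "outliers" post hoc, but only when the
# first test fails, inflates the false-positive rate above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n, hits = 5_000, 30, 0

for _ in range(n_sims):
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)  # the null is true
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1
        continue
    # QRP: drop each group's most extreme point, then test again
    a2 = np.delete(a, np.argmax(np.abs(a - a.mean())))
    b2 = np.delete(b, np.argmax(np.abs(b - b.mean())))
    if stats.ttest_ind(a2, b2).pvalue < 0.05:
        hits += 1

print(f"false-positive rate: {hits / n_sims:.3f}")  # exceeds 0.05
```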

QRPs are the steroids of scientific competition, artificially enhancing performance and producing a kind of arms race in which researchers who strictly play by the rules are at a competitive disadvantage.

QRPs, by nature of the very fact that they are often questionable as opposed to blatantly improper, also offer considerable latitude for rationalization and self-deception.

Concerns over QRPs have been mounting (Crocker, 2011; Lacetera & Zirulia, 2011; Marshall, 2000; Sovacool, 2008; Sterba, 2006; Wicherts, 2011), and several studies--many of which have focused on medical research--have assessed their prevalence (Gardner, Lidz, & Hartwig, 2005; Geggie, 2001; Henry et al., 2005; List, Bailey, Euzent, & Martin, 2001; Martinson, Anderson, & de Vries, 2005; Swazey, Anderson, & Louis, 1993).

In the study reported here, we measured the percentage of psychologists who have engaged in QRPs.

As with any unethical or socially stigmatized behavior, self-reported survey data are likely to underrepresent true prevalence.

(cut)

The study "surveyed over 2,000 psychologists about their involvement in questionable research practices."

The article reports that the findings "point to the same conclusion: A surprisingly high percentage of psychologists admit to having engaged in QRPs."

(cut)

Most of the respondents in our study believed in the integrity of their own research and judged practices they had engaged in to be acceptable.

However, given publication pressures and professional ambitions, the inherent ambiguity of the defensibility of "questionable" research practices, and the well-documented ubiquity of motivated reasoning (Kunda, 1990), researchers may not be in the best position to judge the defensibility of their own behavior.

This could in part explain why the most egregious practices in our survey (e.g., falsifying data) appear to be less common than the relatively less questionable ones (e.g., failing to report all of a study's conditions).

It is easier to generate a post hoc explanation to justify removing nuisance data points than it is to justify outright data falsification, even though both practices produce similar consequences.

(cut)

Another excerpt: "Given the findings of our study, it comes as no surprise that many researchers have expressed concerns over failures to replicate published results (Bower & Mayer, 1985; Crabbe, Wahlsten, & Dudek, 1999; Doyen, Klein, Pichon, & Cleeremans, 2012, Enserink, 1999; Galak, LeBoeuf, Nelson, & Simmons, 2012; Ioannidis, 2005a, 2005b; Palmer, 2000; Steele, Bass, & Crook, 1999)."

(cut)

More generally, the prevalence of QRPs raises questions about the credibility of research findings and threatens research integrity by producing unrealistically elegant results that may be difficult to match without engaging in such practices oneself.

This can lead to a "race to the bottom," with questionable research begetting even more questionable research.

----------------------
Thanks to Ken Pope for this information.

The abstract and article are here.