Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Publication Bias.

Tuesday, January 26, 2021

Publish or Be Ethical? 2 Studies of Publishing Pressure & Scientific Misconduct in Research

Paruzel-Czachura M, Baran L, & Spendel Z. 
Research Ethics. December 2020. 

Abstract

The paper reports two studies exploring the relationship between scholars’ self-reported publication pressure and their self-reported scientific misconduct in research. In Study 1 the participants (N = 423) were scholars representing various disciplines from one large university in Poland. In Study 2 the participants (N = 31) were exclusively members of management (e.g. deans and directors) from the same university. In Study 1 the most common reported form of scientific misconduct was honorary authorship. The majority of researchers (71%) reported that they had not violated ethical standards in the past; 3% admitted to scientific misconduct; 51% reported being aware of colleagues’ scientific misconduct. A small positive correlation between perceived publication pressure and intention to engage in scientific misconduct in the future was found. In Study 2 more than half of the management (52%) reported being aware of researchers’ dishonest practices, the most frequent of these being honorary authorship. As many as 71% of the participants reported observing publication pressure in their subordinates. The primary conclusions are: (1) most scholars are convinced of their morality and predict that they will behave morally in the future; (2) scientific misconduct, particularly minor offenses such as honorary authorship, is frequently observed both by researchers (particularly in their colleagues) and by their managers; (3) researchers experiencing publication pressure report a willingness to engage in scientific misconduct in the future.

Conclusion

Our findings suggest that the notion of “publish or be ethical?” may constitute a real dilemma for researchers. While only 3% of our sample admitted to having engaged in scientific misconduct, and 71% reported that they definitely had not violated ethical standards in the past, more than half (51%) reported seeing scientific misconduct among their colleagues. We did not find a correlation between unsatisfactory working conditions and scientific misconduct, but we did find evidence supporting the theory that perceived pressure to collect publication points is correlated with willingness to violate ethical standards in the future.
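The headline statistic here is a simple bivariate association between two self-report scales. As a minimal sketch of how such an estimate is computed (the Likert ranges, simulated responses, and variable names below are hypothetical illustrations, not the authors’ data or analysis code):

```python
# A toy illustration (not the authors' code): rank correlation between a
# simulated publication-pressure scale and a simulated willingness scale.
# Scale ranges, effect size, and noise level are all assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 423                                   # Study 1 sample size from the abstract
pressure = rng.integers(1, 8, size=n)     # hypothetical 1-7 Likert ratings
noise = rng.normal(0.0, 1.5, size=n)
willingness = np.clip(np.round(1 + 0.3 * pressure + noise), 1, 7)

rho, p = spearmanr(pressure, willingness)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

A rank correlation such as Spearman’s rho is a common choice for ordinal survey responses like these; the “small positive correlation” the abstract describes would correspond to a modest rho with a wide plausible range.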

Thursday, August 10, 2017

Predatory Journals Hit By ‘Star Wars’ Sting

By Neuroskeptic
discovermagazine.com
Originally published July 19, 2017

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory’ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.

Four journals fell for the sting. The American Journal of Medical and Biological Research (SciEP) accepted the paper, but asked for a $360 fee, which I didn’t pay. Amazingly, three other journals not only accepted but actually published the spoof: the International Journal of Molecular Biology: Open Access (MedCrave), the Austin Journal of Pharmacology and Therapeutics (Austin), and the American Research Journal of Biosciences (ARJ). I hadn’t expected this, as all those journals charge publication fees, but I never paid them a penny.

The blog post is here.

Friday, September 9, 2016

Additional Questions about the Applicability of “False Memory” Research

Kathryn Becker-Blease and Jennifer J. Freyd
Applied Cognitive Psychology (2016)
DOI: 10.1002/acp.3266

Summary

Brewin and Andrews present a strong case that the results of studies on adults' false memories for childhood events yield small and variable effects of questionable practical significance. We discuss some fundamental limitations of the literature available for this review, highlighting key issues in the operationalization of the term 'false memory', publication bias, and additional variables that have been insufficiently researched. We discuss the implications of these findings in the real world. Ultimately, we conclude that more work is needed in all of these domains, and appreciate the efforts of these authors to further a careful and evidence-based discussion of the issues.

The article is here.

Sunday, March 6, 2016

The Unbearable Asymmetry of Bullshit

By Brian Earp
BMJ Blogs
Originally posted February 16, 2016

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

The blog post is here.

Monday, November 2, 2015

Many Antidepressant Studies Found Tainted by Pharma Company Influence

By Roni Jacobson
Scientific American
Originally published October 21, 2015

Here is an excerpt:

Almost 80 percent of meta-analyses in the review had some sort of industry tie, either through sponsorship, which the authors defined as direct industry funding of the study, or conflicts of interest, defined as any situation in which one or more authors were either industry employees or independent researchers receiving any type of industry support (including speaking fees and research grants). Especially troubling, the study showed about 7 percent of researchers had undisclosed conflicts of interest. “There’s a certain pecking order of papers,” says Erick Turner, a professor of psychiatry at Oregon Health & Science University who was not associated with the research. “Meta-analyses are at the top of the evidence pyramid.” Turner was “very concerned” by the results but did not find them surprising. “Industry influence is just massive. What’s really new is the level of attention people are now paying to it.”

The researchers considered all meta-analyses of randomized controlled trials for all approved antidepressants, including selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, atypical antidepressants, monoamine oxidase inhibitors, and others, published between 2007 and March 2014.

The entire article is here.

Sunday, April 5, 2015

Compliance with Results Reporting at ClinicalTrials.gov

By Monique L. Anderson and others
N Engl J Med 2015; 372:1031-1039
March 12, 2015
DOI: 10.1056/NEJMsa1409364

Here are two excerpts:

The human experimentation that is conducted in clinical trials creates ethical obligations to make research findings publicly available. However, there are numerous historical examples of potentially harmful data being withheld from public scrutiny and selective publication of trial results. In 2000, Congress authorized the creation of the ClinicalTrials.gov registry to provide information about and access to clinical trials for persons with serious medical conditions. In 2007, Section 801 of the Food and Drug Administration Amendments Act (FDAAA) expanded this mandate by requiring sponsors of applicable clinical trials to register and report basic summary results at ClinicalTrials.gov. Such trials generally include all non–phase 1 interventional trials of drugs, medical devices, or biologics that were initiated after September 27, 2007 (or before that date but still ongoing as of December 26, 2007) and that have at least one U.S. research site or are conducted under an investigational-new-drug application or an investigational-device exemption. The FDAAA also mandates that trial results be reported by the sponsor within 1 year after the completion of data collection for the prespecified primary outcome (the primary completion date) or within 1 year after the date of early termination, unless legally acceptable reasons for the delay are evident.
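For readers who want the reporting rule in operational terms, here is a minimal sketch of the applicability and deadline logic the excerpt summarizes, under simplifying assumptions; the function and parameter names are hypothetical, and real applicability determinations involve statutory nuances not modeled here:

```python
# A minimal sketch of the FDAAA Section 801 rules summarized above,
# written as predicates. This illustrates the paragraph's summary,
# not the statute itself; names and fields are hypothetical.
from datetime import date, timedelta
from typing import Optional

def is_applicable_trial(is_phase1: bool, initiated: date,
                        ongoing_on_2007_12_26: bool, has_us_site: bool,
                        under_ind_or_ide: bool) -> bool:
    """Non-phase-1 interventional trials initiated after 2007-09-27 (or
    still ongoing on 2007-12-26) with a U.S. site or run under an
    IND/IDE are generally covered."""
    timely = initiated > date(2007, 9, 27) or ongoing_on_2007_12_26
    return (not is_phase1) and timely and (has_us_site or under_ind_or_ide)

def results_due_by(primary_completion: date,
                   early_termination: Optional[date] = None) -> date:
    """Results are due within 1 year of the primary completion date, or
    of early termination, absent a legally acceptable reason for delay."""
    anchor = early_termination or primary_completion
    return anchor + timedelta(days=365)

# Example: a trial completing primary data collection on June 30, 2013
# would owe results by June 30, 2014.
print(results_due_by(date(2013, 6, 30)))
```

Under this reading, the timing condition and the site/IND conditions must hold jointly; the one-year clock then runs from whichever anchor date applies.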

(cut)

In conclusion, despite ethical mandates, statutory obligations, and considerable societal pressure, most trials that were funded by the NIH or other government or academic institutions and were subject to FDAAA provisions have yet to report results at ClinicalTrials.gov, whereas the medical-products industry has been more responsive to the legal mandate of the FDAAA. However, industry, the NIH, and other government and academic institutions all performed poorly with respect to ethical obligations for transparency.

The entire article is here.

Monday, February 23, 2015

On making the right choice: A meta-analysis and large-scale replication attempt of the unconscious thought advantage

M. R. Nieuwenstein, T. Wierenga, R. D. Morey, J. M. Wicherts, T. N. Blom, E.-J. Wagenmakers, and H. van Rijn
Judgment and Decision Making, Vol. 10, No. 1, January 2015, pp. 1-17

Abstract

Are difficult decisions best made after a momentary diversion of thought? Previous research addressing this important question has yielded dozens of experiments in which participants were asked to choose the best of several options (e.g., cars or apartments) either after conscious deliberation, or after a momentary diversion of thought induced by an unrelated task. The results of these studies were mixed. Some found that participants who had first performed the unrelated task were more likely to choose the best option, whereas others found no evidence for this so-called unconscious thought advantage (UTA). The current study examined two accounts of this inconsistency in previous findings. According to the reliability account, the UTA does not exist and previous reports of this effect concern nothing but spurious effects obtained with an unreliable paradigm. In contrast, the moderator account proposes that the UTA is a real effect that occurs only when certain conditions are met in the choice task. To test these accounts, we conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the UTA. Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
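The meta-analytic point, that earlier positive UTA reports were confined to underpowered studies, can be made concrete with a power calculation. A minimal sketch, assuming a modest true effect of d = 0.3 (an illustrative value, not one taken from the paper):

```python
# A minimal sketch of the power argument: small per-group samples have
# little chance of detecting a modest effect, so scattered positive
# findings in small studies are consistent with noise. The effect size
# d = 0.3 and the group sizes below are illustrative assumptions.
import numpy as np
from scipy.stats import nct, t

def two_sample_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Power of a two-sided, two-sample t-test for standardized effect d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)   # noncentrality parameter, equal n
    t_crit = t.ppf(1 - alpha / 2, df)
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

for n in (20, 50, 200):
    print(f"n per group = {n:>3}: power = {two_sample_power(0.3, n):.2f}")
```

With 20 participants per group, power for a modest effect is well under 20%, whereas a large-scale design like the N = 399 replication has a realistic chance of detecting the effect if it exists.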

The entire article is here.

Wednesday, March 28, 2012

Publication Bias Mars Psychiatric Drug Literature

By John Gever, Senior Editor
MedPage Today
Originally Published March 23, 2012

Several negative studies of second-generation antipsychotic drugs were never published, leading to an exaggerated portrayal of the agents' effectiveness in the scientific literature, researchers said.

Of 24 registration trials involving eight products submitted to the FDA, four went unpublished in medical journals -- three of which found that the study drug's efficacy was equivalent to placebo or inferior to an active comparator, according to Erick H. Turner, MD, of Oregon Health and Science University in Portland, and colleagues.

Moreover, five of the 20 published trials "showed some evidence of outcome reporting bias," the researchers wrote online in PLoS Medicine.

Turner and colleagues also noted that the scale of the publication bias was relatively modest -- exaggerating the drugs' effectiveness relative to placebo or active comparators by a nonsignificant 8%. The weighted-average effect size in the published trials was 0.47 (95% CI 0.40 to 0.54), which declined only to 0.44 (95% CI 0.37 to 0.50) when the unpublished trials were included.
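The pooled figures reflect standard inverse-variance weighting, under which a handful of small null trials shifts the combined estimate only modestly. A minimal sketch of that arithmetic, with hypothetical trial-level values chosen only to mirror the reported 0.47-to-0.44 change:

```python
# A minimal sketch of inverse-variance weighting. The 20 published and
# 4 unpublished trial values below are hypothetical, chosen so the pooled
# estimate moves from about 0.47 to about 0.44 as the article describes.
import numpy as np

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) mean effect size."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w))

pub_es, pub_var = [0.47] * 20, [0.02] * 20     # published trials (toy values)
unpub_es, unpub_var = [0.05] * 4, [0.05] * 4   # smaller, near-null unpublished

print(f"published only: {pooled_effect(pub_es, pub_var):.2f}")
print(f"all 24 trials:  {pooled_effect(pub_es + unpub_es, pub_var + unpub_var):.2f}")
```

Because the unpublished trials carry larger variances and thus smaller weights, including them pulls the pooled estimate down only slightly, which is the sense in which the bias here was "relatively modest."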


The research paper, “Publication Bias in Antipsychotic Trials,” is here.