Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Peer Review.

Tuesday, January 2, 2024

Three Ways to Tell If Research Is Bunk

Arthur C. Brooks
The Atlantic
Originally posted 30 November 2023

Here is an excerpt:

I follow three basic rules.

1. If it seems too good to be true, it probably is.

Over the past few years, three social scientists—Uri Simonsohn, Leif Nelson, and Joseph Simmons—have become famous for their sleuthing to uncover false or faked research results. To make the point that many apparently “legitimate” findings are untrustworthy, they tortured one particular data set until it showed the obviously impossible result that listening to the Beatles song “When I’m Sixty-Four” could literally make you younger.

So if a behavioral result is extremely unusual, I’m suspicious. If it is implausible or runs contrary to common sense, I steer clear of the finding entirely because the risk that it is false is too great. I like to subject behavioral science to what I call the “grandparent test”: Imagine describing the result to your worldly-wise older relative, and getting their response. (“Hey, Grandma, I found a cool new study showing that infidelity leads to happier marriages. What do you think?”)

2. Let ideas age a bit.

I tend to trust a sweet spot for how recent a particular research finding is. A study published more than 20 years ago is usually too old to reflect current social circumstances. But if a finding is too new, it may have so far escaped sufficient scrutiny—and been neither replicated nor shredded by other scholars. Occasionally, a brand-new paper strikes me as so well executed and sensible that it is worth citing to make a point, and I use it, but I am generally more comfortable with new-ish studies that are part of a broader pattern of results in an area I am studying. I keep a file (my “wine cellar”) of very recent studies that I trust but that I want to age a bit before using for a column.

3. Useful beats clever.

The perverse incentive is not limited to the academy. A lot of science journalism values novelty over utility, reporting on studies that turn out to be more likely to fail when someone tries to replicate them. As well as leading to confusion, this misunderstands the point of behavioral science, which is to provide not edutainment but insights that can improve well-being.

I rarely write a column because I find an interesting study. Instead, I come across an interesting topic or idea and write about that. Then I go looking for answers based on a variety of research and evidence. That gives me a bias—for useful studies over clever ones.

Beyond checking the methods, data, and design of studies, I feel that these three rules work pretty well in a world of imperfect research. In fact, they go beyond how I do my work; they actually help guide how I live.

In life, we’re constantly beset by fads and hacks—new ways to act and think and be, shortcuts to the things we want. Whether in politics, love, faith, or fitness, the equivalent of some hot new study with counterintuitive findings is always demanding that we throw out the old ways and accept the latest wisdom.


Here is my summary:

This article offers three rules of thumb for spotting unreliable research. First, if a finding seems too good to be true, it probably is: results that are implausible or run contrary to common sense carry too great a risk of being false. Second, let ideas age a bit: a brand-new finding may not yet have been replicated or scrutinized, while a decades-old one may no longer reflect current social circumstances, so the most trustworthy studies are newer ones that fit a broader pattern of results. Third, useful beats clever: favor research that can improve well-being over novel, counterintuitive findings, which are the ones most likely to fail replication.

Wednesday, August 29, 2018

The ethics of computer science: this researcher has a controversial proposal

Elizabeth Gibney
www.nature.com
Originally published July 26, 2018

In the midst of growing public concern over artificial intelligence (AI), privacy and the use of data, Brent Hecht has a controversial proposal: the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers, or risk rejection.

Hecht, a computer scientist, chairs the Future of Computing Academy (FCA), a group of young leaders in the field that pitched the policy in March. Without such measures, he says, computer scientists will blindly develop products without considering their impacts, and the field risks joining oil and tobacco as industries whose researchers history judges unfavourably.

The FCA is part of the Association for Computing Machinery (ACM) in New York City, the world’s largest scientific-computing society. It, too, is making changes to encourage researchers to consider societal impacts: on 17 July, it published an updated version of its ethics code, last redrafted in 1992. The guidelines call on researchers to be alert to how their work can influence society, take steps to protect privacy and continually reassess technologies whose impact will change over time, such as those based in machine learning.

The rest is here.

Thursday, August 16, 2018

Peer Review is Not Scientific

E Price
medium.com
Originally published June 18, 2018

Here are two excerpts:

The first thing I want all lovers of science to know is this: peer-reviewers are not paid. When you are contacted by a journal editor and asked to conduct a review, there is no discussion of payment, because no payment is available. Ever. Furthermore, peer reviewing is not associated in any direct way with the actual job of being a professor or researcher. The person asking you to conduct a peer review is not your supervisor or the chair of your department, in nearly any circumstance. Your employer does not keep track of how many peer reviews you conduct and reward you appropriately.

Instead, you’re asked by journal editors, via email, on a voluntary basis. And it’s up to you, as a busy faculty member, graduate student, post-doc, or adjunct, to decide whether to say yes or not.

The process is typically anonymized, and tends to be relatively thankless — no one except the editor who has asked you to conduct the review will know that you were involved in the process. There is no quota of reviews a faculty member is expected to provide. Providing a review cannot really be placed on your resume or CV in any meaningful way.

(cut)

The level of scrutiny that an article is subjected to all comes down to chance. If you’re assigned a reviewer who created a theory that opposes your own theory, your work is likely to be picked apart. The reviewer will look very closely for flaws and take issue with everything that they can. This is not inherently a bad thing — research should be closely reviewed — but it’s not unbiased either.

The information is here.

Monday, October 30, 2017

Human Gene Editing Marches On

bioethics.net
Originally published October 6, 2017

Here is an excerpt:

In all three cases, the main biologic approach, and the main ethical issues, are the same. The main differences were which genes were being edited, and how the embryos were obtained.

This prompted Nature to run an editorial to say that it is “time to take stock” of the ethics of this research. Read the editorial here. The key points: This is important work that should be undertaken thoughtfully. Accordingly, donors of any embryos or cells should be fully informed of the planned research. Only as many embryos should be created as are necessary to do the research. Work on embryos should be preceded by work on pluripotent, or “reprogrammed,” stem cells, and if questions can be fully answered by work with those cells, then it may not be necessary to repeat the studies on whole, intact human embryos, and if that is not necessary, perhaps it should not be done. Finally, everything should be peer reviewed.

I agree that editing work in non-totipotent cells should be at all times favored over work on intact embryos, but if one holds that an embryo is a human being that should have the benefits of protections afforded human research subjects, then Nature’s ethical principles are rather thin, little more than an extension of animal use provisions for studies in which early humans are the raw materials for the development of new medical treatments.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically. Peer reviewers play a central and critical part in the peer review process, but may come to the role without any guidance and be unaware of their ethical obligations. Journals have an obligation to provide transparent policies for peer review, and reviewers have an obligation to conduct reviews in an ethical and accountable manner. Clear communication between the journal and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard cases from its members related to peer review issues and bases these guidelines, in part, on the collective experience and wisdom of the COPE Forum participants. It is hoped they will provide helpful guidance to researchers, be a reference for editors and publishers in guiding their reviewers, and act as an educational resource for institutions in training their students and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript submissions to journals, but can also include reviews for other platforms and apply to public commenting that can occur pre- or post-publication. Reviews of other materials such as preprints, grants, books, conference proceeding submissions, registered reports (preregistered protocols), or data will have a similar underlying ethical framework, but the process will vary depending on the source material and the type of review requested. The model of peer review will also influence elements of the process.

The guidelines are here.

Friday, January 6, 2017

‘Dear plagiarist’: A scientist calls out his double-crosser

By Adam Marcus and Ivan Oransky
STAT News
Originally published December 12, 2016

It’s a researcher’s worst nightmare: Pour five years, and at least 4,000 hours, of sweat and tears into a study, only to have the work stolen from you — by someone who was entrusted to confidentially review the manuscript.

But unlike many sordid tales of academia, this one is being made public. Dr. Michael Dansinger, of Tufts Medical Center, has taken to print to excoriate a group of researchers in Italy who stole his data and published it as their own.

Writing in the prestigious Annals of Internal Medicine — which unwittingly facilitated the episode by farming the paper out for review and then rejecting it — Dansinger calls out the scientists who published their nearly identical version in the somewhat less prestigious EXCLI Journal.

The article is here.

Friday, January 8, 2016

Peer-Review Fraud — Hacking the Scientific Publication Process

Charlotte J. Haug
N Engl J Med 373;25, nejm.org, December 17, 2015

Here is an excerpt:

How is it possible to fake peer review? Moon, who studies medicinal plants, had set up a simple procedure. He gave journals recommendations for peer reviewers for his manuscripts, providing them with names and email addresses. But these addresses were ones he created, so the requests to review went directly to him or his colleagues. Not surprisingly, the editor would be sent favorable reviews, sometimes within hours after the reviewing requests had been sent out. The fallout from Moon’s confession: 28 articles in various journals published by Informa were retracted, and one editor resigned.

Peter Chen, who was an engineer at Taiwan’s National Pingtung University of Education at the time, developed a more sophisticated scheme: he constructed a “peer review and citation ring” in which he used 130 bogus e-mail addresses and fabricated identities to generate fake reviews. An editor at one of the journals published by Sage Publications became suspicious, sparking a lengthy and comprehensive investigation, which resulted in the retraction of 60 articles in July 2014.

The article is here. 

Thursday, October 1, 2015

Peer review: a flawed process at the heart of science and journals

By Richard Smith
J R Soc Med. 2006 Apr; 99(4): 178–182.
doi:  10.1258/jrsm.99.4.178

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new `disease', female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). `But,' the news editor wanted to know, `was this paper peer reviewed?'. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)

The entire article is here.

Monday, April 27, 2015

Science’s Big Scandal

Even legitimate publishers are faking peer review.

By Charles Seife
Slate.com
Originally published April 1, 2015

Here is an excerpt:

When something at the core of scientific publishing begins to rot, the smell of corruption quickly spreads to all areas of science. This is because the act of publishing a scientific finding is an essential part of the practice of science itself. You want a job? Tenure? A promotion? A juicy grant? You need to have a list of peer-reviewed publications, for publications are the coin of the scientific realm.

This coin has worth because of a long-standing social contract between scientists and publishers. Scientists hand over their work to a publication for free, and even sometimes pay a fee of several hundred to several thousand dollars for the privilege. What’s more, scientists often feel duty-bound to vet their colleagues’ work for little or no compensation when a publication asks them to. In return, the publications promise a thorough review process that establishes that a published article has some degree of scientific merit. Just like modern coinage, most of scholarly publications’ value resides in a stamp of approval from a trustworthy body.

The entire article is here.

Monday, October 20, 2014

Anonymous peer-review comments may spark legal battle

By Kelly Servick
Science Insider
Originally posted September 22, 2014

The power of anonymous comments—and the liability of those who make them—is at the heart of a possible legal battle embroiling PubPeer, an online forum launched in October 2012 for anonymous, post-publication peer review. A researcher who claims that comments on PubPeer caused him to lose a tenured faculty job offer now intends to press legal charges against the person or people behind these posts—provided he can uncover their identities, his lawyer says.

The issue first came to light in August, when PubPeer’s (anonymous) moderators announced that the site had received a “legal threat.” Today, they revealed that the scientist involved is Fazlul Sarkar, a cancer researcher at Wayne State University in Detroit, Michigan. Sarkar, an author on more than 500 papers and principal investigator for more than $1,227,000 in active grants from the U.S. National Institutes of Health, has, like many scientists, had his work scrutinized on PubPeer. More than 50 papers on which he is an author have received at least one comment from PubPeer users, many of whom point out potential inconsistencies in the papers’ figures, such as perceived similarities between images that are supposed to depict different experiments.

The entire article is here.

Friday, June 20, 2014

Want to Change Academic Publishing? Just Say No

By Hugh Gusterson
The Chronicle of Higher Education
Originally published September 23, 2012

Here is an excerpt:

When I look at the work I do as an academic social scientist and the remuneration I receive, I see a pattern that makes little sense. This is especially the case with regard to publishing. If I review a book for a newspaper or evaluate a book for a university press, I get paid, but if I referee an article for a journal, I do not. If I publish a book, I get royalties. If I publish an opinion piece in the newspaper, I get a couple of hundred dollars. Once a magazine paid me $5,000 for an article.

But I get paid nothing directly for the most difficult, time-consuming writing I do: peer-reviewed academic articles. In fact a journal that owned the copyright to one of my articles made me pay $400 for permission to reprint my own writing in a book of my essays.

The entire article is here.

Tuesday, October 22, 2013

Who's Afraid of Peer Review?

By John Bohannon
Science 4 October 2013:
Vol. 342 no. 6154 pp. 60-65
DOI: 10.1126/science.342.6154.60

On 4 July, good news arrived in the inbox of Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara. It was the official letter of acceptance for a paper he had submitted 2 months earlier to the Journal of Natural Pharmaceuticals, describing the anticancer properties of a chemical that Cobange had extracted from a lichen.

In fact, it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

I know because I wrote the paper.

The entire story is here.

Thursday, October 25, 2012

7 More Cancer Scientists Quit Texas Institute Over Grants

By The Associated Press
Originally published on October 13, 2012

At least seven more scientists have resigned in protest from Texas’ embattled $3 billion cancer-fighting program, claiming that the agency in charge of it is charting a “politically driven” path that puts commercial interests before science.

The Cancer Prevention and Research Institute of Texas, created with the backing of Gov. Rick Perry and the cyclist Lance Armstrong, a cancer survivor, has awarded nearly $700 million in grants since 2009; only the National Institutes of Health offers a bigger pot of cancer-research money.

Tuesday, March 13, 2012

Will Patient Safety Initiatives Harm Physicians?

By Brian S. Kern, Esquire
Medscape Today News
Originally published on March 12, 2012

Peer review, the patient safety method designed to identify ineffective, unethical, or impaired physicians, can help improve the delivery of medical care, provide risk-management lessons, and lead to improved policies and procedures. At the same time, some doctors and hospital administrators have expressed concern that peer review produces fodder for civil or criminal lawsuits against physicians and healthcare institutions.

The body of law on patient safety initiatives and their level of confidentiality has evolved considerably. Historically, case law, recognizing the importance of peer-review procedures -- and the need to keep them confidential -- has protected self-critical analysis and other forms of internal investigation.

For example, in Christy v. Salem (2004), a New Jersey appellate court addressed whether a hospital's peer-review committee report was discoverable in a medical malpractice case. In declining to "adopt the privilege of self-critical analysis as a full privilege," the court chose to rely on a "case-by-case balancing approach" and essentially held that facts contained within a report are subject to legal discovery, but "evaluative and deliberative materials" are not.

Shortly thereafter, the Garden State adopted the New Jersey Patient Safety Act (NJ PSA), which in large part codified Christy v. Salem. The measure was tested early when, during the discovery phase of a medical malpractice trial against an obstetrician, a plaintiff's attorney sought hospital reports related to patient safety. The defense objected, asserting that the information was privileged and thus legally protected against disclosure.

The entire story is here.