Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Scientific Misconduct.

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Sunday, June 25, 2023

Harvard Business School Professor Francesca Gino Accused of Committing Data Fraud

Rahem D. Hamid
Crimson Staff Writer
Originally published 24 June 23

Here is an excerpt:

But in a post on June 17, Data Colada wrote that they found evidence of additional data fabrication in that study in a separate experiment that Gino was responsible for.

Harvard has also been internally investigating “a series of papers” for more than a year, according to the Chronicle of Higher Education. Data Colada wrote last week that the University’s internal report may be around 1,200 pages.

The professors added that Harvard has requested that three other papers co-authored by Gino — which Data Colada flagged — also be retracted and that the 2012 paper’s retraction be amended to include Gino’s fabrications.

Last week, Bazerman told the Chronicle of Higher Education that he was informed by Harvard that the experiments he co-authored contained additional fraudulent data.

Bazerman called the evidence presented to him by the University “compelling,” but he denied to the Chronicle that he was at all involved with the data manipulation.

According to Data Colada, Gino was “the only author involved in the data collection and analysis” of the experiment in question.

“To the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in question,” the professors wrote.

In their second post on Tuesday, the investigators wrote that a 2015 study co-authored by Gino also contains manipulations to prove the paper’s hypothesis.

Observations in the paper, the three wrote, “were altered to produce the desired effect.”

“And if these observations were altered, then it is reasonable to suspect that other observations were altered as well,” they added.


Science is part of a healthy society:
  • Scientific research relies on the integrity of researchers. When researchers fabricate or falsify data, they undermine the trust that is necessary for scientific progress.
  • Data fraud can have serious consequences. It can lead to the publication of false or misleading findings, which can have a negative impact on public policy, business decisions, and other areas.

Wednesday, July 27, 2022

Blots on a Field? (A modern story of unethical research related to Alzheimer's)

Charles Piller
Science Magazine
Originally posted 21 July 22

Here is an excerpt:

A 6-month investigation by Science provided strong support for Schrag’s suspicions and raised questions about Lesné’s research. A leading independent image analyst and several top Alzheimer’s researchers—including George Perry of the University of Texas, San Antonio, and John Forsayeth of the University of California, San Francisco (UCSF)—reviewed most of Schrag’s findings at Science’s request. They concurred with his overall conclusions, which cast doubt on hundreds of images, including more than 70 in Lesné’s papers. Some look like “shockingly blatant” examples of image tampering, says Donna Wilcock, an Alzheimer’s expert at the University of Kentucky.

The authors “appeared to have composed figures by piecing together parts of photos from different experiments,” says Elisabeth Bik, a molecular biologist and well-known forensic image consultant. “The obtained experimental results might not have been the desired results, and that data might have been changed to … better fit a hypothesis.”

Early this year, Schrag raised his doubts with NIH and journals including Nature; two, including Nature last week, have published expressions of concern about papers by Lesné. Schrag’s work, done independently of Vanderbilt and its medical center, implies millions of federal dollars may have been misspent on the research—and much more on related efforts. Some Alzheimer’s experts now suspect Lesné’s studies have misdirected Alzheimer’s research for 16 years.

“The immediate, obvious damage is wasted NIH funding and wasted thinking in the field because people are using these results as a starting point for their own experiments,” says Stanford University neuroscientist Thomas Südhof, a Nobel laureate and expert on Alzheimer’s and related conditions.

Lesné did not respond to requests for comment. A UMN spokesperson says the university is reviewing complaints about his work.

To Schrag, the two disputed threads of Aβ research raise far-reaching questions about scientific integrity in the struggle to understand and cure Alzheimer’s. Some adherents of the amyloid hypothesis are too uncritical of work that seems to support it, he says. “Even if misconduct is rare, false ideas inserted into key nodes in our body of scientific knowledge can warp our understanding.”

(cut)

The paper provided an “important boost” to the amyloid and toxic oligomer hypotheses when they faced rising doubts, Südhof says. “Proponents loved it, because it seemed to be an independent validation of what they have been proposing for a long time.”

“That was a really big finding that kind of turned the field on its head,” partly because of Ashe’s impeccable imprimatur, Wilcock says. “It drove a lot of other investigators to … go looking for these [heavier] oligomer species.”

As Ashe’s star burned more brightly, Lesné’s rose. He joined UMN with his own NIH-funded lab in 2009. Aβ*56 remained a primary research focus. Megan Larson, who worked as a junior scientist for Lesné and is now a product manager at Bio-Techne, a biosciences supply company, calls him passionate, hardworking, and charismatic. She and others in the lab often ran experiments and produced Western blots, Larson says, but in their papers together, Lesné prepared all the images for publication.

Tuesday, January 26, 2021

Publish or Be Ethical? 2 Studies of Publishing Pressure & Scientific Misconduct in Research

Paruzel-Czachura M, Baran L, & Spendel Z. 
Research Ethics. December 2020. 

Abstract

The paper reports two studies exploring the relationship between scholars’ self-reported publication pressure and their self-reported scientific misconduct in research. In Study 1 the participants (N = 423) were scholars representing various disciplines from one big university in Poland. In Study 2 the participants (N = 31) were exclusively members of the management, such as dean, director, etc. from the same university. In Study 1 the most common reported form of scientific misconduct was honorary authorship. The majority of researchers (71%) reported that they had not violated ethical standards in the past; 3% admitted to scientific misconduct; 51% reported being aware of colleagues’ scientific misconduct. A small positive correlation between perceived publication pressure and intention to engage in scientific misconduct in the future was found. In Study 2 more than half of the management (52%) reported being aware of researchers’ dishonest practices, the most frequent one of these being honorary authorship. As many as 71% of the participants reported observing publication pressure in their subordinates. The primary conclusions are: (1) most scholars are convinced of their morality and predict that they will behave morally in the future; (2) scientific misconduct, particularly minor offenses such as honorary authorship, is frequently observed both by researchers (particularly in their colleagues) and by their managers; (3) researchers experiencing publication pressure report a willingness to engage in scientific misconduct in the future.

Conclusion

Our findings suggest that the notion of “publish or be ethical?” may constitute a real dilemma for researchers. Although only 3% of our sample admitted to having engaged in scientific misconduct, 71% reported that they definitely had not violated ethical standards in the past. Furthermore, more than half (51%) reported seeing scientific misconduct among their colleagues. We did not find a correlation between unsatisfactory work conditions and scientific misconduct, but we did find evidence to support the theory that perceived pressure to collect points is correlated with willingness to exceed ethical standards in the future.

Saturday, November 23, 2019

Is this “one of the worst scientific scandals of all time”?

Stephen Fleischfresser
cosmosmagazine.com
Originally posted 21 October 2019

Here is an excerpt:

Another study on the efficacy of psychotherapy in preventing cancer showed 100% of treated subjects did not die of cancer in the following 13 years, compared to 32% of an untreated control group.

Perhaps the most alarming results were connected to Eysenck and Grossarth-Maticek’s notion of ‘bibliotherapy’, which consisted of, as Eysenck put it, “a written pamphlet outlining the principles of behaviour therapy as applied to better, more autonomous living, and avoidance of stress.”

This was coupled with five hours of discussion, aimed both at reorienting a patient’s personality away from the cancer-prone and toward a healthier disposition. The results of this study, according to Pelosi, were that “128 of the 600 (21%) controls died of cancer over 13 years compared with 27 of 600 (4.5%) treated subjects.

"Such results are otherwise unheard of in the entire history of medical science.” There were similarly spectacular results concerning various forms of heart disease too.

These decidedly improbable findings led to a blizzard of critical scrutiny through the 90s: Eysenck and Grossarth-Maticek’s work was attacked for its methodology, statistical treatment and ethics.

One researcher who attempted a sympathetic review of the work, in cooperation with the pair, found, says Pelosi, “unequivocal evidence of manipulation of data sheets,” from the Heidelberg cohort, as well as numerous patient questionnaires with identical responses.

An attempt at replicating some of their results concerning heart disease provided cold comfort, indicating that the personality type association with coronary illness was non-existent for all but one of the types.

A slightly modified replication of Eysenck and Grossarth-Maticek’s research on personality and cancer fared no better, with the author, Manfred Amelang, writing “I know of no other area of research in which the change from an interview to a carefully constructed questionnaire measuring the same construct leads to a change from near-perfect prediction to near-zero prediction.”

The info is here.

Friday, May 17, 2019

Scientific Misconduct in Psychology: A Systematic Review of Prevalence Estimates and New Empirical Data

Johannes Stricker & Armin Günther
Zeitschrift für Psychologie
Published online: March 29, 2019

Abstract

Spectacular cases of scientific misconduct have contributed to concerns about the validity of published results in psychology. In our systematic review, we identified 16 studies reporting prevalence estimates of scientific misconduct and questionable research practices (QRPs) in psychological research. Estimates from these studies varied due to differences in methods and scope. Unlike other disciplines, there was no reliable lower bound prevalence estimate of scientific misconduct based on identified cases available for psychology. Thus, we conducted an additional empirical investigation on the basis of retractions in the database PsycINFO. Our analyses showed that 0.82 per 10,000 journal articles in psychology were retracted due to scientific misconduct. Between the late 1990s and 2012, there was a steep increase. Articles retracted due to scientific misconduct were identified in 20 out of 22 PsycINFO subfields. These results show that measures aiming to reduce scientific misconduct should be promoted equally across all psychological subfields.

The research is here.


Monday, March 27, 2017

US Researchers Found Guilty of Misconduct Collectively Awarded $101 Million

Joshua A. Krisch
The Scientist
February 27, 2017

Researchers found guilty of scientific misconduct by the US Department of Health and Human Services (HHS) went on to collectively receive $101 million from the National Institutes of Health (NIH), according to a study published this month (February 1) in the Journal of Empirical Research on Human Research Ethics. The authors also found that 47.2 percent of the researchers found guilty of misconduct whom they examined continue to publish studies.

The article is here.

The research is here.

Tuesday, November 25, 2014

Fabricating and plagiarising: when researchers lie

By Mark Israel
The Conversation
Originally published November 5, 2014

Here is an excerpt:

Systematic research into the causes of scientific misconduct is scarce. However, occasionally committees of investigation and research organisations have offered some comment. Some see the researcher as a “bad apple”. A researcher’s own ambition, vanity, desire for recognition and fame, and the prospect for personal gain may lead to behaviour that crosses the limits of what is admissible. Others point to the culture that may prevail in certain disciplines or research groups (“bad barrel”).

Again others identify the creation of a research environment overwhelmed by corrupting pressures (“bad barrel maker”). Many academics are under increasing pressure to publish – and to do so in English irrespective of their competence in that language – as their nation or institution seeks to establish or defend its placing in international research rankings.

The entire article is here.

Tuesday, January 1, 2013

CLEANING UP SCIENCE

BY GARY MARCUS
The New Yorker
Originally published December 24, 2012


A lot of scientists have been busted recently for making up data and fudging statistics. One case involves a Harvard professor who I once knew and worked with; another a Dutch social psychologist who made up results by the bushel. Medicine, too, has seen a rash of scientific foul play; perhaps most notably, the dubious idea that vaccines could cause autism appears to have been a hoax perpetrated by a scientific cheat. A blog called RetractionWatch publishes depressing notices, almost daily. One recent post mentioned that a peer-review site had been hacked; others detail misconduct in dentistry, cancer research, and neuroscience. And that’s just in the last week.

Even if cases of scientific fraud and misconduct were simply ignored, my field (and several other fields of science, including medicine) would still be in turmoil. One recent examination of fifty-three medical studies found that further research was unable to replicate forty-seven of them. All too often, scientists muck about with pilot studies, and keep tweaking something until they get the result they were hoping to achieve. Unfortunately, each fresh effort increases the risk of getting the right result for the wrong reason, and winding up with a spurious vision of something that doesn’t turn out to be scientifically robust, like a cancer drug that seems to work in trials but fails to work in the real world.

How on Earth are we going to do better? Here are six suggestions, drawn mainly from a just-published special issue of the journal Perspectives on Psychological Science. Two dozen articles offer valuable lessons not only for psychology, but for all consumers and producers of experimental science, from physics to neuroscience to medicine.

Restructure the incentives in science. For many reasons, science has become a race for the swift, but not necessarily the careful. Grants, tenure, and publishing all depend on flashy, surprising results. It is difficult to publish a study that merely replicates a predecessor, and it’s difficult to get tenure (or grants, or a first faculty job) without publications in elite journals. From the time a young scientist starts a Ph.D. to the time they’re up for tenure is typically thirteen years (or more), at the end of which the no-longer-young apprentice might find him or herself out of a job. It is perhaps, in hindsight, no small wonder that some wind up cutting corners. Instead of, for example, rewarding scientists largely for the number of papers they publish—which credits quick, sloppy results that might not be reliable—we might reward scientists to a greater degree for producing solid, trustworthy research that other people are able to successfully replicate and then extend.

The entire article is here.

Monday, July 23, 2012

Uncertainty shrouds psychologist's resignation

Lawrence Sanna departed the University of Michigan amid questions over his work from ‘data detective’ Uri Simonsohn.

By Ed Yong
Nature
Originally published July 12, 2012

Uri Simonsohn, the researcher who flagged up questionable data in studies by social psychologist Dirk Smeesters, has revealed the name of a second social psychologist whose data he believes to be suspiciously perfect.

That researcher is Lawrence Sanna, whose former employer, the University of Michigan in Ann Arbor, tells Simonsohn that he resigned his professorship there at the end of May. The reasons for Sanna's resignation are not known, but it followed questions from Simonsohn and a review by Sanna’s previous institution, the University of North Carolina in Chapel Hill (UNC). According to the editor of the Journal of Experimental Social Psychology, Sanna has also asked that three of his papers be retracted from the journal.

In both Smeesters’ and Sanna’s work, odd statistical patterns in the data raised concerns with Simonsohn, at the University of Pennsylvania in Philadelphia. But the similarity between the cases ends there. Smeesters’ resignation was announced on 25 June by his institution, Erasmus University Rotterdam in the Netherlands, which undertook a review and concluded that two of his papers should be retracted. Sanna’s resignation, by contrast, remains mysterious: UNC did not release the results of its review, and the University of Michigan will not explain why Sanna resigned.

The entire story is here.

Wednesday, July 11, 2012

Fraud-Detection Tool Could Shake Up Psychology

By Martin Enserink
ScienceInsider
Originally published July 3, 2012

The most startling thing about the latest scandal to hit social psychology isn’t the alleged violation of scientific ethics itself, scientists say, or the fact that it happened in the Netherlands, the home of fallen research star and serial fraudster Diederik Stapel, whose case shook the field to its core less than a year ago. Instead, what fascinates them most is how the new case, which led to the resignation of psychologist Dirk Smeesters of Erasmus University Rotterdam and the requested retraction of two of his papers by his school, came to light: through an unpublished statistical method to detect data fraud.

The technique was developed by Uri Simonsohn, a social psychologist at the Wharton School of the University of Pennsylvania, who tells Science that he has also notified a U.S. university of a psychology paper his method flagged.

That paper’s main author, too, has been investigated and has resigned, he says. As Science went to press, Simonsohn said he planned to reveal details about his method, and both cases, as early as this week.

If it proves valid, Simonsohn’s technique might find other possible cases of misconduct lurking in the vast body of scientific literature. “There’s a lot of interest in this,” says Brian Nosek of the University of Virginia in Charlottesville, who recently launched an examination of replicability in social psychology findings.


There are other stories about Diederik Stapel on this site.

A New Record for Retractions? (Part 2)

By Dennis Normile
ScienceInsider
Originally published on July 2, 2012

An investigating committee in Japan has concluded that a Japanese anesthesiologist, Yoshitaka Fujii, fabricated a whopping 172 papers over the past 19 years. Among other problems, the panel, set up by the Japanese Society of Anesthesiologists, could find no records of patients and no evidence medication was ever administered.

"It is as if someone sat at a desk and wrote a novel about a research idea," the committee wrote in a 29 June summary report posted in Japanese on the society's Web site.

The fabrications could produce a record number of retractions by a single author if the journals, as seems likely, decide to retract the papers. ScienceInsider was unable to reach Fujii, who had asked the society not to provide the media with his contact information.

The entire story is here.

Thursday, September 8, 2011

Fraud in a Labcoat


By Gareth Cook
The Boston Globe

MARC HAUSER has plenty of company when it comes to scientific misconduct.  (See our prior blog post.)

Hauser, you’ll recall, had built a brilliant career at Harvard. He directed a primate lab and published a long list of scientific papers on topics like the cognitive nature of morality, and the similarities between human and animal behavior. He was a popular teacher, and author of the hit book “Moral Minds.” And then it turned out that he was taking liberties with his scientific data. One paper was retracted, others were corrected, and, earlier this month, he left the university.

Also this month, federal authorities announced that a cancer researcher at the Boston University School of Medicine was inventing data. Two papers have been retracted. The scientist, and I use the term loosely, was shown the door.

These two cases are part of a remarkable flood of scientific retractions. Between 2001 and 2010, the number of retractions increased more than 15-fold, according to a recent investigation by the Wall Street Journal. There were 22 retractions in 2001, and 339 last year, according to the Journal, over a period of time when the number of publications increased by only 44 percent.

It would seem a grim development, this sudden scourge of epic sloppiness and outright fraud in the halls of science. But it’s actually news we should all welcome: We are not witnessing an explosion of misconduct, but a new openness about it.

There are some forces, including easy access to image manipulation software like Photoshop, that are making it easier to fake results. But the problem has festered for decades, and now, finally, science is beginning to get serious about dealing with it.

The most spectacular recent case of scientific fraud came out of South Korea. In early 2004, researchers there announced that they had cloned a human cell, earning front-page headlines around the world, and tantalizing the public with the prospect of future disease treatments. Invitations to collaborate poured in from top biologists. The South Korean government ensured that the lead scientist, Hwang Woo-suk, had every resource at his disposal. He was a national hero.

The rest of story can be found here.

Wednesday, August 24, 2011

Retractions Of Scientific Studies Are Surging

By Ed Silverman
http://www.pharmalot.com/

Over the past decade, the number of medical journals that have issued retractions has climbed precipitously. Since 2001, the overall number of papers that were published in research journals increased 44 percent, but at the same time, the number of papers that were retracted climbed more than 15-fold, according to The Wall Street Journal, citing data from Thomson Reuters.

Put another way, there were just 22 retraction notices that appeared in journals 10 years ago, but 139 were published in 2006 and by last year, the number reached 339. Through July of this year, there were a total of 210 retractions, according to Thomson Reuters Web of Science, which maintains an index of 11,600 peer-reviewed journals.

Meanwhile, retractions related to fraud rose more than sevenfold between 2004 and 2009, exceeding a twofold rise traced to mistakes, according to an analysis published in the Journal of Medical Ethics. After studying 742 papers that were withdrawn from 2000 to 2010, the analysis found that 73.5 percent were retracted simply for error, but 26.6 percent were retracted for fraud. Ominously, 31.8 percent of retracted papers were not noted as retracted (read the abstract).

The conclusion? Either there is more fraud, or there is more policing. Ivan Oransky, the executive editor of Reuters Health and a co-founder of the Retraction Watch blog that began recently in response to the spate of retractions, writes us that the simple use of eyeballs and software that can detect plagiarism has made it possible to root out bad papers.

He also notes, however, that there are more journals, which explains why there are more papers, in general, being published. “So the question is whether there have been more retractions per paper published,” Oransky writes, and then points to this chart to note that there were, indeed, many more.

“That’s really no surprise, given the increasing numbers of eyeballs on studies, and the introduction of plagiarism detection software. It’s unclear whether the actual amount of misconduct and legitimate error has grown; it may just be that we’re picking up on more of it,” he continues. “What makes it difficult to tell is a problem we often see at Retraction Watch: Opaque and unhelpful retraction notices saying only ‘this study was withdrawn by the authors.’ How does that make for transparent science? We think journals can do a lot better, by demanding that authors and institutions come clean about what went wrong.”
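A quick back-of-the-envelope check of Oransky’s point about retractions per paper, using only the figures quoted above (22 retraction notices a decade ago, 339 last year, and roughly 44 percent growth in the number of papers published), is sketched below in Python. The 1.44 growth factor is an assumption carried over from the article rather than an exact publication count:

    # Figures quoted above (WSJ / Thomson Reuters): retraction notices
    retractions_then = 22      # ten years ago
    retractions_now = 339      # last year
    publication_growth = 1.44  # papers published grew ~44% over the same period (assumed factor)

    # Normalise today's retraction count to the earlier publication volume,
    # then compare with the earlier count to estimate the per-paper increase.
    per_paper_increase = (retractions_now / publication_growth) / retractions_then
    print(f"Retractions per paper rose roughly {per_paper_increase:.1f}-fold")  # ~10.7

On those numbers, even after accounting for the growth of the literature, the per-paper retraction rate is roughly an order of magnitude higher than a decade earlier, which is consistent with Oransky’s reading of the chart.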

And why is there more fraud? As the Wall Street Journal notes, there is a lot to be gained - by both researchers and journal editors - from publishing influential papers. “The stakes are so high,” The Lancet editor Richard Horton tells the Journal. “A single paper in Lancet and you get your chair and you get your money. It’s your passport to success.”

The entire story can be read here.

Friday, July 29, 2011

Harvard Psychologist Resigns

The Chronicle of Higher Education
by Tom Bartlett

Marc D. Hauser, the Harvard psychologist found responsible for eight counts of scientific misconduct by the university, has resigned, ending speculation about whether the embattled professor would return to campus this fall.

In a letter dated July 7, Mr. Hauser wrote to Michael D. Smith, Harvard's dean of the Faculty of Arts and Sciences, that he was resigning effective August 1 because he had "some exciting opportunities in the private sector" and that he had been involved in some "extremely interesting and rewarding work focusing on the educational needs of at-risk teenagers."

The letter states that he may return to teaching and research "in the years to come." It does not mention the scandal that damaged his once-stellar reputation and stunned his colleagues in the field.

Last August, The Boston Globe reported that a university investigation had found Mr. Hauser guilty of misconduct, though the nature of that misconduct remained murky. The picture became somewhat clearer after Mr. Smith, the Harvard dean, sent a letter to faculty members saying that Mr. Hauser was "solely responsible" for eight instances of wrongdoing involving three published and five unpublished studies.

An internal document provided last August to The Chronicle by a former research assistant in Mr. Hauser's laboratory revealed how members of the lab believed Mr. Hauser was reporting faulty data and included e-mails demonstrating how he had pushed back when they had brought problems to his attention. Several lab members alerted the university's ombudsman, setting in motion an investigation that would lead to the seizure of computers and documents from Mr. Hauser's laboratory in the fall of 2007.

Read the entire article here.