Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Research Integrity.

Sunday, September 29, 2024

Whistleblowing in science: this physician faced ostracization after standing up to pharma

Sara Reardon
nature.com
Originally posted 20 Aug 24

The image of a lone scientist standing up for integrity against a pharmaceutical giant seems romantic and compelling. But to haematologist Nancy Olivieri, who went public when the company sponsoring her drug trial for a genetic blood disorder tried to suppress data about harmful side effects, the experience was as unglamorous as it was damaging and isolating. “There’s a lot of people who fight for justice in research integrity and against the pharmaceutical industry, but very few people know what it’s like to take on the hospital administrators” too, she says.

Now, after more than 30 years of ostracization by colleagues, several job losses and more than 20 lawsuits — some of which are ongoing — Olivieri is still amazed that what she saw as efforts to protect her patients could have proved so controversial, and that so few people took her side. Last year, she won the John Maddox Prize, a partnership between the London-based charity Sense about Science and Nature, which recognizes “researchers who stand up and speak out for science” and who achieve changes amid hostility. “It’s absolutely astounding to me that you could become famous as a physician for saying, ‘I think there might be a complication here,’” she says. “There was a lot of really good work that we could have done that we wasted a lot of years not doing because of all this.”

Olivieri didn’t set out to be a troublemaker. As a young researcher at the University of Toronto (UT), Canada, in the 1980s, she worked with children with thalassaemia — a blood condition that prevents the body from making enough oxygen-carrying haemoglobin, and that causes a fatal build-up of iron in the organs if left untreated. She worked her way up to become head of the sickle-cell-disease programme at the city’s Hospital for Sick Children (SickKids). In 1989, she started a clinical trial at SickKids to test a drug called deferiprone that traps iron in the blood. The hospital eventually brought in a pharmaceutical company called Apotex, based in Toronto, Canada, to co-sponsor the study as part of regulatory requirements.


Here are some thoughts:

The case of Nancy Olivieri, a haematologist who blew the whistle on a pharmaceutical company's attempts to suppress data about harmful side effects of a drug, highlights the challenges and consequences faced by researchers who speak out against industry and institutional pressures. Olivieri's experience demonstrates how institutions can turn against researchers who challenge industry interests, leading to isolation, ostracization, and damage to their careers. Despite the risks, Olivieri's story emphasizes the crucial role of allies and support networks in helping whistle-blowers navigate the challenges they face.

The case also underscores the importance of maintaining research integrity and transparency, even in the face of industry pressure. Olivieri's experience shows that prioritizing patient safety and well-being over industry interests is critical, and institutions must be held accountable for their actions. Additionally, the significant emotional toll that whistle-blowing can take on individuals, including anxiety, isolation, and disillusionment, must be acknowledged.

To address these issues, policy reforms are necessary to protect researchers from retaliation and ensure that they can speak out without fear of retribution. Industry transparency is also essential to minimize conflicts of interest. Furthermore, institutions and professional organizations must establish support networks for researchers who speak out against wrongdoing.

Sunday, September 22, 2024

The staggering death toll of scientific lies

Kelsey Piper
vox.com
Originally posted 23 Aug 24

Here is an excerpt:

The question of whether research fraud should be a crime

In some cases, research misconduct may be hard to distinguish from carelessness.

If a researcher fails to apply the appropriate statistical correction for multiple hypothesis testing, they will probably get some spurious results. In some cases, researchers are heavily incentivized to be careless in these ways by an academic culture that puts non-null results above all else (that is, rewarding researchers for finding an effect even if it is not a methodologically sound one, while being unwilling to publish sound research if it finds no effect).
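The point about multiple hypothesis testing can be made concrete with a small simulation (not from the article; the sample sizes and thresholds here are illustrative). When 100 truly null hypotheses are each tested at a 5% significance level, a handful come back "significant" by chance alone; a Bonferroni correction, which divides the threshold by the number of tests, removes most of these spurious hits.

```python
# Illustrative sketch: spurious "significant" results from uncorrected
# multiple hypothesis testing, versus a Bonferroni correction.
import math
import random

random.seed(0)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for a z-test of the sample mean against mu0."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # 2 * (1 - Phi(|z|)) expressed via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

# 100 hypotheses, all truly null: every sample is drawn from N(0, 1),
# so any "effect" a test finds is a false positive.
m = 100
p_values = [z_test_p([random.gauss(0, 1) for _ in range(30)])
            for _ in range(m)]

alpha = 0.05
naive_hits = sum(p < alpha for p in p_values)           # ~5 expected by chance
bonferroni_hits = sum(p < alpha / m for p in p_values)  # corrected threshold

print(f"uncorrected 'significant' results: {naive_hits}")
print(f"Bonferroni-corrected results:      {bonferroni_hits}")
```

Every "discovery" the uncorrected analysis reports here is spurious by construction, which is exactly the failure mode the paragraph above describes: a lab that skips the correction can honestly believe it has found an effect.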

But I’d argue it’s a bad idea to prosecute such behavior. It would produce a serious chilling effect on research, and likely make the scientific process slower and more legalistic — which also results in more deaths that could be avoided if science moved more freely.

So the conversation about whether to criminalize research fraud tends to focus on the most clear-cut cases: intentional falsification of data. Elisabeth Bik, a scientific researcher who studies fraud, made a name for herself by demonstrating that photographs of test results in many medical journals were clearly altered. That’s not the kind of thing that can be an innocent mistake, so it represents something of a baseline for how often manipulated data is published.

While technically some scientific fraud could fall under existing statutes that prohibit lying on, say, a grant application, in practice scientific fraud is more or less never prosecuted. Poldermans eventually lost his job in 2011, but most of his papers weren’t even retracted, and he faced no further consequences.


Here are some thoughts:

The case of Don Poldermans, a cardiologist whose falsified data contributed to thousands of deaths, highlights the severe consequences of scientific misconduct. This instance demonstrates how fraudulent research can have devastating effects on patients' lives. That his research remained widely accepted and largely unretracted even after his data was exposed as fake raises serious concerns about accountability and oversight within the scientific community.

The current consequences for scientific fraud are often inadequate, allowing perpetrators to go unpunished or face minimal penalties. This lack of accountability creates an environment where misconduct can thrive, putting lives at risk. In Poldermans' case, he lost his job but faced no further consequences, despite the severity of his actions.

Prosecution or external oversight could provide the necessary accountability and shift incentives to address misconduct. However, prosecution is a blunt tool and may not be the best solution. Independent scientific review boards could also be effective in addressing scientific fraud. Ultimately, building institutions within the scientific community to police misconduct has had limited success, suggesting a need for external institutions to play a role.

The need for accountability and consequences for scientific fraud cannot be overstated. It is essential to prevent harm and ensure the integrity of research. By implementing measures to address misconduct, we can protect patients and maintain trust in the scientific community. The Poldermans case serves as a stark reminder of the importance of addressing scientific fraud and ensuring accountability.

Friday, July 12, 2024

Why Scientific Fraud Is Suddenly Everywhere

Kevin T. Dugan
New York Magazine
Originally posted 21 May 24

Junk science has been forcing a reckoning among scientific and medical researchers for the past year, leading to thousands of retracted papers. Last year, Stanford president Marc Tessier-Lavigne resigned amid reporting that some of his most high-profile work on Alzheimer’s disease was at best inaccurate. (A probe commissioned by the university’s board of trustees later exonerated him of manipulating the data).

But the problems around credible science appear to be getting worse. Last week, scientific publisher Wiley decided to shutter 19 scientific journals after retracting 11,300 sham papers. There is a large-scale industry of so-called “paper mills” that sell fictive research, sometimes written by artificial intelligence, to researchers who then publish it in peer-reviewed journals — which are sometimes edited by people who have been placed there by those sham groups. Among the institutions exposing such practices is Retraction Watch, a 14-year-old organization co-founded by journalists Ivan Oransky and Adam Marcus. I spoke with Oransky about why there has been a surge in fake research and whether fraud accusations against the presidents of Harvard and Stanford are actually good for academia.

I’ll start by saying that paper mills are not the problem; they are a symptom of the actual problem. Adam Marcus, my co-founder, had broken a really big and frightening story about a painkiller involving scientific fraud, which led to dozens of retractions. That’s what got us interested in that. There were all these retractions, far more than we thought but far fewer than there are now. Now, they’re hiding in plain sight.


Here are some thoughts:

Recent headlines might suggest a surge in scientific misconduct. However, it's more likely that increased awareness and stricter scrutiny are uncovering existing issues. From an ethical standpoint, the pressure to publish groundbreaking research can create a challenging environment. Publication pressure, coupled with the human tendency towards confirmation bias, can incentivize researchers to take unethical shortcuts that align data with their hypotheses. This can have a ripple effect, potentially undermining the entire scientific process.

Fortunately, the heightened focus on research integrity presents an opportunity for positive change. Initiatives promoting open science practices, such as data sharing and robust replication studies, can foster greater transparency. Furthermore, cultivating a culture that rewards ethical research conduct and whistleblowing, even in the absence of earth-shattering results, is crucial. Science thrives on self-correction. By acknowledging these challenges and implementing solutions, the scientific community can safeguard the integrity of research and ensure continued progress.

Sunday, April 12, 2020

On the Willingness to Report and the Consequences of Reporting Research Misconduct: The Role of Power Relations.

Horbach, S.P.J.M., et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00202-8

Abstract

While attention to research integrity has been growing over the past decades, the processes of signalling and denouncing cases of research misconduct remain largely unstudied. In this article, we develop a theoretically and empirically informed understanding of the causes and consequences of reporting research misconduct in terms of power relations. We study the reporting process based on a multinational survey at eight European universities (N = 1126). Using qualitative data that witnesses of research misconduct or of questionable research practices provided, we aim to examine actors’ rationales for reporting and not reporting misconduct, how they report it and the perceived consequences of reporting. In particular we study how research seniority, the temporality of work appointments, and gender could impact the likelihood of cases being reported and of reporting leading to constructive organisational changes. Our findings suggest that these aspects of power relations play a role in the reporting of research misconduct. Our analysis contributes to a better understanding of research misconduct in an academic context. Specifically, we elucidate the processes that affect researchers’ ability and willingness to report research misconduct, and the likelihood of universities taking action. Based on our findings, we outline specific propositions that future research can test as well as provide recommendations for policy improvement.

From the Conclusion:

We also find that contested forms of misconduct (e.g. authorship, cherry picking of data and fabrication of data) are less likely to be reported than more clear-cut instances of misconduct (e.g. plagiarism, text recycling and falsification of data). The respondents mention that minor misbehaviour is not considered worth reporting, or express doubts about the effectiveness of reporting a case when the witnessed behaviour does not explicitly transgress norms, such as with many of the QRPs. Concern about reporting’s negative consequences, such as harm to career opportunities or organisational reputations, is always taken into consideration.

Secondly, we have theorised the relationship between power differences and researchers’ willingness to report—in particular the role of seniority, work appointments and gender. We have derived a list of seven propositions that we believe warrant testing and refinement in future studies using a larger sample to help with further theory building about power differences and research misconduct.
