Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Fake Data.

Friday, May 10, 2024

Generative artificial intelligence and scientific publishing: urgent questions, difficult answers

J. Bagenal
The Lancet
March 06, 2024

Abstract

Azeem Azhar describes, in Exponential: Order and Chaos in an Age of Accelerating Technology, how human society finds it hard to imagine or process exponential growth and change and is repeatedly caught out by this phenomenon. Whether it is the exponential spread of a virus or the exponential spread of a new technology, such as the smartphone, people consistently underestimate its impact. Azhar argues that an exponential gap has developed between technological progress and the pace at which institutions are evolving to deal with that progress. This is the case in scientific publishing with generative artificial intelligence (AI) and large language models (LLMs). There is guidance on the use of generative AI from organisations such as the International Committee of Medical Journal Editors, but across scholarly publishing such guidance is inconsistent. For example, one study of the 100 top global academic publishers and scientific journals found only 24% of academic publishers had guidance on the use of generative AI, whereas 87% of scientific journals provided such guidance. Among those with guidance, 75% of publishers and 43% of journals had specific criteria for the disclosure of use of generative AI. In their book The Coming Wave, Mustafa Suleyman, co-founder and CEO of Inflection AI, and writer Michael Bhaskar warn that society is unprepared for the changes that AI will bring. They describe a person's or group's reluctance to confront difficult, uncertain change as the “pessimism aversion trap”. For journal editors and scientific publishers today, this is a dangerous trap to fall into. All the signs about generative AI in scientific publishing suggest things are not going to be OK.


From behind the paywall.

In 2023, Springer Nature became the first scientific publisher to create a new academic book by empowering authors to use generative AI. Researchers have shown that scientists found it difficult to distinguish between a human-generated scientific abstract and one created by generative AI. Noam Chomsky has argued that generative AI undermines education and is nothing more than high-tech plagiarism, and many feel similarly about AI models trained on work without upholding copyright. Plagiarism is a problem in scientific publishing, but those concerned with research integrity are also considering a post-plagiarism world, in which hybrid human-AI writing becomes the norm and differentiating between the two becomes pointless. In the ideal scenario, human creativity is enhanced, language barriers disappear, and humans relinquish control but not responsibility. But that ideal is far from guaranteed, and there are two urgent questions for scientific publishing.

First, how can scientific publishers and journal editors assure themselves that the research they are seeing is real? Researchers have used generative AI to create convincing fake clinical trial datasets to support a false scientific hypothesis; the fabrication could only be identified when the raw data were scrutinised in detail by an expert. Paper mills (nefarious businesses that generate poor or fake scientific studies and sell authorship) are a huge problem and contribute to the escalating number of research articles retracted by scientific publishers. The battle thus far has been between paper mills becoming more sophisticated in their fabrication and their manipulation of the editorial process, and scientific publishers trying to find ways to detect and prevent these practices. Generative AI will turbocharge that race, but it might also break the paper-mill business model: rogue academics who use generative AI to fabricate datasets will not need to pay a paper mill and will generate sham papers themselves. Fake studies will surge exponentially, and nobody is doing enough to stop this inevitability.
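
Part of the problem is that fabricated datasets require detailed expert scrutiny to catch. There are, however, simple automated screens that editors can run on reported summary statistics before a manuscript ever reaches a reviewer. As a hedged illustration — a technique not mentioned in the Lancet piece — the sketch below implements the GRIM test, which checks whether a reported mean is arithmetically possible for a given number of integer-valued responses:

```python
# A minimal sketch of the GRIM test, one simple screen for impossible
# summary statistics. This illustrates automated fraud screening in
# general; it is not the expert raw-data scrutiny described above.

def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `mean` (reported to `decimals` places) can arise
    from n integer-valued scores."""
    implied_total = mean * n
    # The true total must be an integer near the implied sum; test whether
    # either neighbouring integer reproduces the reported (rounded) mean.
    for candidate in (int(implied_total), int(implied_total) + 1):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

print(grim_consistent(3.49, 25))  # False: no 25 integer scores average 3.49
print(grim_consistent(3.48, 25))  # True: a total of 87 gives 87/25 == 3.48
```

Checks like this cannot prove fraud; they only flag manuscripts whose reported statistics merit the closer scrutiny the piece calls for.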

Tuesday, December 10, 2019

AI Deemed 'Too Dangerous To Release' Makes It Out Into The World

Andrew Griffin
independent.co.uk
Originally posted November 8, 2019

Here is an excerpt:

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At that time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has since released progressively more complex versions, and has now made the full version available.
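
To give a sense of how low the barrier to use became once the weights were public, here is a minimal sketch of sampling from a released GPT-2 checkpoint. It assumes the Hugging Face transformers library and its published "gpt2" model identifier, neither of which is part of the original article:

```python
# Minimal sketch: generating text from the publicly released GPT-2 weights.
# Assumes the Hugging Face `transformers` library; "gpt2" is its identifier
# for the original 124-million-parameter release ("gpt2-xl" is the full model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The scientists reported that", max_new_tokens=40)
print(result[0]["generated_text"])
```

The point is not the specific library but the accessibility: a few lines suffice to produce fluent synthetic text once a model of this kind is released.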

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

OpenAI hopes that the release can partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said there were a variety of ways that malicious people could misuse the programme. The generated text could be used to create misleading news articles, impersonate other people, automatically create abusive or fake content for social media, or spam people – along with a variety of possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.

The info is here.

Friday, April 19, 2019

Duke agrees to pay $112.5 million to settle allegation it fraudulently obtained federal research funding

Seth Thomas Gulledge
Triangle Business Journal
Originally posted March 25, 2019

Duke University has agreed to pay $112.5 million to settle a suit with the federal government over allegations the university submitted false research reports to receive federal research dollars.

This week, the university reached a settlement over allegations brought forward by whistleblower Joseph Thomas – a former Duke employee – who alleged that during his time working as a lab research analyst in the pulmonary, asthma and critical care division of Duke University Health Systems, the clinical research coordinator, Erin Potts-Kant, manipulated and falsified studies to receive grant funding.

The case also contends that the university and its office of research support, upon discovering the fraud, knowingly concealed it from the government.

According to court documents, Duke was accused of submitting claims to the National Institutes of Health (NIH) and the Environmental Protection Agency (EPA) between 2006 and 2018 that contained "false or fabricated data", causing the two agencies to pay out grant funds they "otherwise would not have." Those fraudulent submissions, the case claims, netted the university nearly $200 million in federal research funding.

“Taxpayers expect and deserve that federal grant dollars will be used efficiently and honestly. Individuals and institutions that receive research funding from the federal government must be scrupulous in conducting research for the common good and rigorous in rooting out fraud,” said Matthew Martin, U.S. attorney for the Middle District of North Carolina in a statement announcing the settlement. “May this serve as a lesson that the use of false or fabricated data in grant applications or reports is completely unacceptable.”

The info is here.

Thursday, April 4, 2019

I’m a Journalist. Apparently, I’m Also One of America’s “Top Doctors.”

Marshall Allen
Propublica.org
Originally posted Feb. 28, 2019

Here is an excerpt:

And now, for reasons still unclear, Top Doctor Awards had chosen me — and I was almost perfectly the wrong person to pick. I’ve spent the last 13 years reporting on health care, a good chunk of it examining how our health care system measures the quality of doctors. Medicine is complex, and there’s no simple way of saying some doctors are better than others. Truly assessing the performance of doctors, from their diagnostic or surgical outcomes to the satisfaction of their patients, is challenging work. And yet, for-profit companies churn out lists of “Super” or “Top” or “Best” physicians all the time, displaying them in magazine ads, online listings or via shiny plaques or promotional videos the companies produce for an added fee.

On my call with Anne from Top Doctors, the conversation took a surreal turn.

“It says you work for a company called ProPublica,” she said, blithely. At least she had that right.

I responded that I did and that I was actually a journalist, not a doctor. Is that going to be a problem? I asked. Or can you still give me the “Top Doctor” award?

There was a pause. Clearly, I had thrown a baffling curve into her script. She quickly regrouped. “Yes,” she decided, I could have the award.

Anne’s bonus, I thought, must be volume based.

Then we got down to business. The honor came with a customized plaque, with my choice of cherry wood with gold trim or black with chrome trim. I mulled over which vibe better fit my unique brand of medicine: the more traditional cherry or the more modern black?

The info is here.

Saturday, March 23, 2019

The Fake Sex Doctor Who Conned the Media Into Publicizing His Bizarre Research on Suicide, Butt-Fisting, and Bestiality

Jennings Brown
www.gizmodo.com
Originally published March 1, 2019

Here is an excerpt:

Despite Sendler’s claims that he is a doctor, and despite the stethoscope in his headshot, he is not a licensed doctor of medicine in the U.S. Two employees of the Harvard Medical School registrar confirmed to me that Sendler was never enrolled and never received an MD from the medical school. A Harvard spokesperson told me Sendler never received a PhD or any degree from Harvard University.

“I got into Harvard Medical School for MD, PhD, and Masters degree combined,” Sendler told me. I asked if he was able to get a PhD in sexual behavior from Harvard Medical School (Harvard Medical School does not provide any sexual health focuses) and he said “Yes. Yes,” without hesitation, then doubled down: “I assume that there’s still some kind of sense of wonder on campus [about me]. Because I can see it when I go and visit [Harvard], that people are like, ‘Wow you had the balls, because no one else did that,’” presumably referring to his academic path.

Sendler told me one of his mentors when he was at Harvard Medical School was Yi Zhang, a professor of genetics at the school. Sendler said Zhang didn’t believe in him when he was studying at Harvard. But, Sendler said, he met with Zhang in Boston just a month prior to our interview. And Zhang was now impressed by Sendler’s accomplishments.

Sendler said Zhang told him in January, “Congrats. You did what you felt was right... Turns out, wow, you have way more power in research now than I do. And I’m just very proud of you, because I have people that I really put a lot of effort, after you left, into making them the best and they didn’t turn out that well.”

The info is here.

This is a fairly bizarre story and worth the long read.

Wednesday, September 14, 2016

Report: Gardens employer of Pulse nightclub shooter fined $150k

Lulu Ramadan
PalmBeachPost.com
Originally posted September 10, 2016

The Palm Beach Gardens-based security company that employed the Orlando nightclub shooter Omar Mateen was ordered to pay “the largest fine issued in history” of the Florida Department of Agriculture and Consumer Services for falsely reporting psychological testing information, the Orlando Sentinel reports.

G4S Secure Solutions was issued the $151,400 fine Friday, after the department found that the psychologist listed on a form that allowed Mateen to carry a weapon was not practicing as a screener. A total of 1,514 forms submitted between 2006 and 2016 erroneously listed psychologist Carol Nudelman’s name.

The form that allowed Mateen to carry a gun as a security guard was dated Sept. 6, 2007, nearly two years after Nudelman had retired.

The article is here.

Wednesday, July 27, 2016

Research fraud: the temptation to lie – and the challenges of regulation

Ian Freckelton
The Conversation
Originally published July 5, 2016

Most scientists and medical researchers behave ethically. In recent years, however, a number of high-profile scandals in which researchers were exposed as having falsified their data have raised the question of how we should deal with research fraud.

There is little scholarship on this subject that crosses disciplines and engages with the broader phenomenon of unethical behaviour within the domain of research.

This is partly because disciplines tend to operate in their silos and because universities, in which researchers are often employed, tend to minimise adverse publicity.

When scandals erupt, embarrassment in a particular field is experienced for a short while – and researchers may leave their university. But few articles are published in scholarly journals about how the research fraud was perpetrated; how it went unnoticed for a significant period of time; and how prevalent the issue is.

The article is here.

Thursday, July 7, 2016

Secrets and lies: Faked data and lack of transparency plague global drug manufacturing

By Kelly Crowe
CBC News 
Originally posted: June 10, 2016

Here is an excerpt:

In another case, when the FDA responded to complaints from U.S. manufacturers about impurities in raw ingredients from a Chinese company and asked to see the data, inspectors discovered it had been deleted and the audit trail disabled.

Two companies on Health Canada's watch list have been caught falsifying the source of their active pharmaceutical ingredient. Both claimed to have made the raw material, but actually purchased it from somewhere else.

There's tragic proof that data integrity matters. In 2008, 19 people in the U.S. died and hundreds more were sickened by a contaminated blood thinner made from a raw material the FDA believes had been tampered with at its source in China.

The article is here.

Wednesday, September 24, 2014

Linguistic Traces of a Scientific Fraud: The Case of Diederik Stapel

By David Markowitz and Jeffrey Hancock
Published: August 25, 2014
DOI: 10.1371/journal.pone.0105937

Abstract

When scientists report false data, does their writing style reflect their deception? In this study, we investigated the linguistic patterns of fraudulent (N = 24; 170,008 words) and genuine publications (N = 25; 189,705 words) first-authored by social psychologist Diederik Stapel. The analysis revealed that Stapel's fraudulent papers contained linguistic changes in science-related discourse dimensions, including more terms pertaining to methods, investigation, and certainty than his genuine papers. His writing style also matched patterns in other deceptive language, including fewer adjectives in fraudulent publications relative to genuine publications. Using differences in language dimensions, we were able to classify Stapel's publications with above-chance accuracy. Beyond these discourse dimensions, Stapel included fewer co-authors when reporting fake data than genuine data, although other evidentiary claims (e.g., number of references and experiments) did not differ across the two article types. This research supports recent findings that language cues vary systematically with deception, and that deception can be revealed in fraudulent scientific discourse.
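
The pipeline the abstract describes — measuring category-term frequencies in each paper and classifying papers from those frequencies — can be sketched as follows. This is a hedged illustration only: the word lists are hypothetical stand-ins for the discourse dimensions the authors measured, and logistic regression is an assumed classifier, not necessarily their method.

```python
# Illustrative sketch of frequency-based deception classification, in the
# spirit of the study. Category word lists are hypothetical stand-ins for
# the paper's discourse dimensions; the classifier choice is an assumption.
import re
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

CATEGORIES = {
    "methods":       {"sample", "procedure", "measure", "task", "design"},
    "investigation": {"examine", "investigate", "test", "analyze"},
    "certainty":     {"clearly", "always", "never", "certainly"},
}

def features(text: str) -> list[float]:
    """Relative frequency of each category's terms in one paper."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [sum(w in terms for w in words) / n for terms in CATEGORIES.values()]

def mean_cv_accuracy(texts: list[str], labels: list[int]) -> float:
    """Cross-validated accuracy separating fraudulent (1) from genuine (0)
    papers; the relevant benchmark is the ~0.5 chance baseline."""
    X = [features(t) for t in texts]
    return cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
```

With only 49 papers, accuracy modestly above 0.5 under cross-validation is the meaningful result, which is what the abstract's "above-chance" claim amounts to.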

The entire article is here.