Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Fabrication.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222–1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas connect psychology not only to machine learning but also to political science, education, communication, and the other fields considering the impact of bias and misinformation on population-level beliefs.

Starting in early childhood, people form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable. For example, children learned better from an agent who asserted knowledgeability in the domain than from one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human error would be an inappropriate baseline for judging AI because of fundamental differences between the exchanges people have with generative AI and those they have with other people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with neither uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater belief distortion than human inputs do.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality — and humanlike intelligence or emergent sentience — into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can help address the problem of AI-induced belief distortion:

Transparency: AI models should be transparent about their biases and limitations, so that people understand what the models can and cannot do and evaluate their outputs more critically.

Education: People should be educated about the potential for AI models to distort beliefs, so that they are more aware of the risks and approach model-generated information more critically.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Thursday, December 10, 2020

Psychologist’s paper retracted after Dutch national body affirms misconduct findings

Adam Marcus
Retraction Watch
Originally posted November 23, 2020

A cognitive psychologist in Germany has lost one of two papers slated for retraction after her former institution found her guilty of misconduct. 

In a 2019 report, Leiden University found that Lorenza Colzato, now of TU Dresden, had failed to obtain ethics approval for some of her studies, manipulated her data, and fabricated results in grant applications. Although the institution did not identify Colzato by name, Retraction Watch confirmed her identity.

The Leiden report — the conclusions of which were affirmed by the Netherlands Board on Research Integrity last month — called for the retraction of two papers by Colzato and her co-authors, three of whom acted as whistleblowers in the case. As the trio told us in an interview last December: 
We worked with the accused for many years, during which we observed and felt forced to get involved in several bad research practices. These practices would range from small to large violations. Since early on we were aware that this was not OK or normal, and so we tried to stand up to this person early on.

However, we very quickly learned that complaining could only lead to nasty situations such as long and prolonged criticism at a professional and personal level. … But seeing this behavior recurring and steadily escalating, and seeing other people in the situations we had been in, led us to feel like we could no longer stay silent. We had become more independent (despite still working in the same department), and felt like we had to ‘break’ that system. About one year ago, we brought the issues to the attention of the scientific Director of our Institute, who took our story seriously from the beginning. Upon evaluating the evidence, together we decided the Director would file a complaint. Out of fear for retaliation, we initially did not join as formal complainants but eventually gathered the courage to join the complaint and disclose our role.

The now-retracted paper, which Colzato wrote with Laura Steenbergen — a former colleague at Leiden and one of the eventual whistleblowers — was titled “Overweight and cognitive performance: High body mass index is associated with impairment in reactive control during task switching.” It appeared in 2017 in Frontiers in Nutrition.

Friday, April 19, 2019

Duke agrees to pay $112.5 million to settle allegation it fraudulently obtained federal research funding

Seth Thomas Gulledge
Triangle Business Journal
Originally posted March 25, 2019

Duke University has agreed to pay $112.5 million to settle a suit with the federal government over allegations the university submitted false research reports to receive federal research dollars.

This week, the university reached a settlement over allegations brought forward by whistleblower Joseph Thomas – a former Duke employee – who alleged that, during his time working as a lab research analyst in the pulmonary, asthma and critical care division of Duke University Health System, clinical research coordinator Erin Potts-Kant manipulated and falsified studies to receive grant funding.

The case also contends that the university and its office of research support, upon discovering the fraud, knowingly concealed it from the government.

According to court documents, Duke was accused of submitting claims to the National Institutes of Health (NIH) and the Environmental Protection Agency (EPA) between 2006 and 2018 that contained "false or fabricated data," causing the two agencies to pay out grant funds they "otherwise would not have." Those fraudulent submissions, the case claims, netted the university nearly $200 million in federal research funding.

“Taxpayers expect and deserve that federal grant dollars will be used efficiently and honestly. Individuals and institutions that receive research funding from the federal government must be scrupulous in conducting research for the common good and rigorous in rooting out fraud,” said Matthew Martin, U.S. attorney for the Middle District of North Carolina in a statement announcing the settlement. “May this serve as a lesson that the use of false or fabricated data in grant applications or reports is completely unacceptable.”

The info is here.

Tuesday, January 17, 2017

Fake news invades science and science journalism as well as politics

By Ivan Oransky and Adam Marcus
STAT News
Originally published December 30, 2016

Here is an excerpt:

Science itself never falls victim to this sort of distortion, though, does it? We wish that were so. Take, for example, a conspiracy theory about cloud trails from jet planes that was published in a peer-reviewed journal. How about a study linking vaccines to autism long after such a connection had been thoroughly debunked? That one was published in a public health journal. Or this fake news whopper: HIV doesn’t cause AIDS. A peer-reviewed paper made that claim until it was retracted.

We could go on. But as you’ve gathered by now, science in its current state isn’t exactly keeping us safe from bogus research. Predatory publishers continue to churn out papers for a price, with minimal peer review — or very often no peer review — to vet the results. Unscrupulous researchers use those and other soft spots in the scientific publishing system to get away with presenting wild theories or cooking their data.

Journalists who don’t fact-check deserve criticism, whether the topic is politics, entertainment, or science. But the real trouble with fake news is when there’s a kernel of truth in the pile of garbage. That’s especially problematic in science: scientists continue to dress up weak findings in flashy clothes, all the better to publish with. Then their universities often bolster this flimsy work with frothy press releases that journalists fall for.

The article is here.

Friday, January 22, 2016

'We Didn't Lie,' Volkswagen CEO Says Of Emissions Scandal

Sonari Glinton
NPR.org
Published January 11, 2016

Here is an excerpt:

NPR: You said this was a technical problem, but the American people feel this is not a technical problem, this is an ethical problem that's deep inside the company. How do you change that perception in the U.S.?

Matthias Mueller: Frankly spoken, it was a technical problem. We made a default, we had a ... not the right interpretation of the American law. And we had some targets for our technical engineers, and they solved this problem and reached targets with some software solutions which haven't been compatible to the American law. That is the thing. And the other question you mentioned — it was an ethical problem? I cannot understand why you say that.

NPR: Because Volkswagen, in the U.S., intentionally lied to EPA regulators when they asked them about the problem before it came to light.

Mueller: We didn't lie. We didn't understand the question first. And then we worked since 2014 to solve the problem. And we did it together and it was a default of VW that it needed such a long time.

The entire interview is here.