Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.
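The article doesn't spell out the mechanics of these attacks, but the textbook example is the fast gradient sign method (FGSM): nudge every input pixel a tiny amount in whichever direction most increases the classifier's loss. Here is a minimal PyTorch sketch; the model, labels, and epsilon budget are illustrative assumptions, not anything from the piece:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Fast gradient sign method: a one-step adversarial perturbation.

    x: batch of images scaled to [0, 1]; y: true labels;
    eps: maximum per-pixel change. Returns images that look
    unchanged to a human but can flip the model's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # how wrong is the model now?
    loss.backward()                          # gradient of loss w.r.t. pixels
    # Step each pixel by +/- eps toward higher loss; stay in valid range.
    return (x + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

A budget of a few hundredths per pixel is typically imperceptible to people, which is exactly why the failures read as the system "perceiving things that aren't there."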

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend against or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including papers from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.
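The trio's paper describes one such adaptive technique, backward-pass differentiable approximation (BPDA), and the idea is easy to sketch: several of the broken defenses relied on a non-differentiable preprocessing step that blocks gradient-based attacks, and BPDA simply substitutes a smooth stand-in (often the identity) on the backward pass. A sketch under assumptions of my own, with a toy quantization "defense" standing in for the real ones:

```python
import torch
import torch.nn.functional as F

class QuantizeSTE(torch.autograd.Function):
    """A toy gradient-masking 'defense': round inputs to 8 levels.
    Rounding has zero gradient almost everywhere, so a naive attack stalls.
    BPDA trick: on the backward pass, pretend the step was the identity."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x * 7.0) / 7.0  # non-differentiable preprocessing

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                     # identity gradient approximation

def adaptive_fgsm(model, x, y, eps=0.03):
    """FGSM run *through* the defense via the straight-through estimate."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(QuantizeSTE.apply(x_adv))  # attack the defended pipeline
    F.cross_entropy(logits, y).backward()
    return (x + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

The lesson is that hiding gradients is not the same as removing the vulnerability; an attacker who knows how the defense works can route around it.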

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”

The article is here.

Saturday, February 18, 2017

A Crime in the Cancer Lab

Theodora Ross
The New York Times
Originally published January 28, 2017

Here is an excerpt:

We have all read about incidents of scientific misconduct; in recent years, a number of manuscripts based on fake research have been retracted. But they usually involved scientists who cut corners or fabricated data, not deliberate sabotage. The poisoned flasks were a first for me. Falsified data is a crime against scientific truth. This was personal.

I turned to my colleagues to ask how to respond, and to my surprise, they all said the same thing: my student, Heather Ames, was probably sabotaging herself.

Their reasoning? She wanted an excuse for why things weren't working in her experiments. Competition and the pressure to get results quickly are ever-present in the world of biomedical research, so it's not out of the question that a young scientist might succumb to the stress.

The article is here.