Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, December 1, 2021

‘Yeah, we’re spooked’: AI starting to have big real-world impact

Nicola K. Davis
The Guardian
Originally posted 29 October 2021

Here is an excerpt:

One concern, said Professor Stuart Russell, is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. “It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

Have AI researchers become spooked by their own success? “Yeah, I think we are increasingly spooked,” Russell said.

“It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms,” he said, noting that the experts always stressed the idea was theoretical. “And then it happened and they weren’t ready for it.”

The use of AI in military applications – such as small anti-personnel weapons – is of particular concern, he said. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city,” said Russell.

Russell believes the future for AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans – rather like a butler – on any decision. But the idea is complex, not least because different people have different – and sometimes conflicting – preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted around the world.