Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has spent decades conducting research that has helped shape our understanding of how people make choices when faced with uncertainty.

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.
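[To make the "one or two important cues" idea concrete, here is a toy sketch in the spirit of the "take-the-best" style of decision rule that came out of Gigerenzer's research group. The cue names, their ordering, and the data are invented for illustration; this is not code from the interview or from his lab.]

```python
# Toy sketch of a fast-and-frugal, single-cue decision rule.
# Cue names, cue ordering, and data below are made up for illustration.

def take_the_best(option_a, option_b, cues):
    """Compare two options on cues ordered from most to least informative.
    Decide on the first cue that discriminates; ignore all remaining cues."""
    for cue in cues:
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return "A" if a > b else "B"
    return "tie"  # no cue discriminates between the options

# Hypothetical example: judging which of two cities is larger from a few cues.
city_a = {"has_major_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 0, "is_capital": 0, "has_university": 1}

# Cues ranked by assumed validity, most informative first.
cue_order = ["has_major_airport", "is_capital", "has_university"]

print(take_the_best(city_a, city_b, cue_order))  # -> "A"
```

[In an unstable, noisy environment, a rule like this can hold its own against a heavily parameterized model precisely because it ignores cues whose apparent signal would not generalize.]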

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.
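[As a minimal sketch of what "many layers, but still calculating machines" means: a deep network's forward pass is just repeated matrix arithmetic applied layer after layer. The layer sizes, weights, and input below are made up; this is an illustration, not any particular model.]

```python
import numpy as np

# Minimal sketch: a "deep" network's forward pass is repeated matrix arithmetic.
rng = np.random.default_rng(0)

def forward(x, weights):
    for W in weights:
        x = np.maximum(0, W @ x)  # matrix multiply, then a simple threshold; repeat per layer
    return x

# Three layers of made-up weights acting on a made-up input vector.
layers = [rng.normal(size=(4, 8)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))]
print(forward(rng.normal(size=8), layers))
```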