Rob Curran
Dallas Morning News
Originally posted Feb. 20, 2026
The next task for AI firms is figuring out how their chatbots work. It might sound like they have put the $500 billion nuclear-powered cart before the horse. But the giant leap forward in generative AI in the 2020s took software engineers by surprise and has left them wondering how the chatbots do what they do, even as their employers go all-in on the technology.
Some of the most outlandish prophecies about AI's power are coming true almost as soon as techno-philosophers finish making them. It's now almost commonplace for people to fall in love with avatars on their phones. Nobody thinks twice about devoting 6% of national power generation to running these bots' data center brains. And recently, an entrepreneur named Matt Schlicht launched an entire social network exclusively for AI agents; it is now dominated by self-reflecting techno-philosopher bots, some of which have invented a religion: Crustafarianism.
But the whole AI project is in many ways still in beta testing. We know what the bots do but not how they do it.
'Difficult to understand'
AI doesn't work like traditional software because its output is creative, not rules-bound. If word-processing software renders an "&" every time you type a "g," engineers find the faulty code and correct the glitch. As with a mousetrap, every moving part in a traditional software program has a known function, so engineers can tweak each cog in the works to adjust the output.
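To make that contrast concrete, here is a minimal sketch (purely illustrative; every name and number is invented rather than drawn from any real product). The first function is rules-bound: if "g" ever renders as "&," the bug sits in plain sight in the substitution table. The second is a toy generative sampler: its behavior lives entirely in numeric weights that encode no legible rule.

```python
import math
import random

# Traditional software: every behavior traces to a legible rule.
# If "g" renders as "&", the glitch is right here, findable and fixable.
SUBSTITUTIONS = {"g": "&"}  # <- the faulty "code" an engineer can correct

def render(text: str) -> str:
    """Deterministic and rules-bound: same input, same output, every time."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)

# A toy generative model: behavior is encoded in weights, not rules.
# These four numbers stand in for billions of trained parameters;
# nothing in them says *why* one continuation outranks another.
VOCAB = ["the", "cat", "sat", "mat"]
WEIGHTS = [0.31, -1.7, 2.05, 0.44]  # invented "learned" scores

def sample_next_word(rng: random.Random) -> str:
    """Stochastic: sample a word from a softmax over opaque scores."""
    exps = [math.exp(w) for w in WEIGHTS]
    probs = [e / sum(exps) for e in exps]
    r, cumulative = rng.random(), 0.0
    for word, p in zip(VOCAB, probs):
        cumulative += p
        if r < cumulative:
            return word
    return VOCAB[-1]

print(render("go"))                       # "&o" -- the bug is inspectable
print(sample_next_word(random.Random()))  # varies from run to run
```

Fixing the first program means editing one legible rule. "Fixing" the second means retraining, which shifts all of the numbers at once with no guarantee anyone can say what changed.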
Chatbots are harder to improve (for example, the internet is not unanimous on whether GPT-5 is an improvement over GPT-4). Why? Because nobody understands how generative AI chatbots work. Software engineers understand the data and code that go in, and we can all see the output that comes out. But nobody understands how the parts of the AI mousetrap fit together, industry leaders say.
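That inputs-visible, outputs-visible, middle-opaque predicament can itself be sketched in a few lines. Below is a toy two-layer network; the weights are invented for illustration and stand in for the billions a real chatbot carries. The input and the output are fully observable, and we can even print the hidden layer, but the numbers in the middle explain nothing on their own.

```python
# A toy two-layer network with pretend "trained" weights. The inputs and
# outputs are perfectly observable; the middle is just unlabeled numbers.
def relu(x: float) -> float:
    return max(0.0, x)

# Weights the way training leaves them: no names, no rules, no comments.
W1 = [[0.8, -1.2], [0.5, 0.9], [-0.3, 1.1]]  # input (2) -> hidden (3)
W2 = [1.4, -0.7, 0.6]                        # hidden (3) -> output (1)

def forward(x: list[float]) -> float:
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    print("hidden activations:", hidden)  # visible, yet meaningless to us
    return sum(w * h for w, h in zip(W2, hidden))

print("output:", forward([1.0, 0.5]))
```

Scale those three hidden values up to billions and the mousetrap metaphor breaks down exactly as the article describes: every cog is visible, but nobody can say what any individual cog is for.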
Here are some thoughts:
Rob Curran highlights a striking paradox at the heart of modern AI: the technology has advanced at a breathtaking pace, yet even its creators don't fully understand how it works.
Unlike traditional software, AI's creative output can't be traced back to specific lines of code, leaving engineers unable to reliably diagnose or improve it. Anthropic's CEO Dario Amodei acknowledged this gap, calling for an "MRI of AI" to solve the interpretability problem, while other industry figures have sounded more alarming warnings about the technology's risks. Curran's broader point is that even as AI remains deeply mysterious, the race to make it more powerful shows no signs of slowing down.
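For readers wondering what an "MRI of AI" might look like in practice, one entry-level tool in interpretability research is a linear probe: a simple classifier trained to read a human concept out of a model's hidden activations. The sketch below is a crude stand-in using synthetic activations with a planted concept direction; real probing runs on actual model internals, not made-up vectors like these.

```python
import random

rng = random.Random(0)
DIM = 4
# A planted "concept direction" (invented): real models may or may not
# represent a concept this cleanly along a single direction.
CONCEPT = [0.9, -0.3, 0.7, 0.1]

def fake_activation(has_concept: bool) -> list[float]:
    """Stand-in for one input's hidden-layer activations."""
    noise = [rng.gauss(0.0, 0.5) for _ in range(DIM)]
    return [n + c for n, c in zip(noise, CONCEPT)] if has_concept else noise

# Difference-of-means probe: estimate the concept direction by
# subtracting the average "without" activation from the average "with".
pos = [fake_activation(True) for _ in range(200)]
neg = [fake_activation(False) for _ in range(200)]
mean = lambda vecs, i: sum(v[i] for v in vecs) / len(vecs)
direction = [mean(pos, i) - mean(neg, i) for i in range(DIM)]

# Score each activation against the recovered direction and classify
# against the midpoint between the two class averages.
score = lambda v: sum(d * x for d, x in zip(direction, v))
mid = (sum(map(score, pos)) / len(pos) + sum(map(score, neg)) / len(neg)) / 2
hits = sum(score(v) > mid for v in pos) + sum(score(v) <= mid for v in neg)
print(f"probe accuracy: {hits / (len(pos) + len(neg)):.0%}")
```

A probe that scores well is evidence the network encodes the concept along a recoverable direction; finding, naming, and validating such structure at the scale of a frontier model is the unsolved part that the "MRI" metaphor gestures at.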
