Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 6, 2026

How malicious AI swarms can threaten democracy

Schroeder, D. T., et al. (2026).
Science, 391(6783), 354–357.

Abstract

Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors on a population-wide level (1). Large language models (LLMs) and autonomous agents (2) let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility (3) and inexpensively create falsehoods that are rated as more human-like than those written by humans (3, 4). Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can be used to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multiagent architectures (2), these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design, commercial incentives, and governance, we prioritize interventions at multiple leverage points, focusing on pragmatic mechanisms over voluntary compliance.


Here are some thoughts:

The article argues that combining LLMs with multiagent architectures creates "malicious AI swarms" — a major leap beyond older botnets. These swarms can autonomously coordinate thousands of AI personas, precisely target vulnerable communities, mimic human behavior to evade detection, self-optimize in real time, and maintain persistent influence over long periods. The democratic harms are wide-ranging: fabricated consensus, deepened social fragmentation, contaminated AI training data, coordinated harassment, and eroded institutional trust that could make authoritarian measures seem acceptable. The authors call for a multilayered defense — continuous detection systems, user-facing "AI shields," stronger cryptographic identity standards, and a global AI Influence Observatory — while emphasizing that voluntary compliance will fall short as long as platforms' commercial incentives reward the same engagement dynamics that swarms exploit.
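The article does not specify a particular scheme, but a rough sketch of what "stronger cryptographic identity standards" could mean in practice is content signed with a key bound to a verified human or institutional identity, so that platforms and users can check provenance before trusting a post. Below is a minimal Python illustration using Ed25519 signatures from the widely available cryptography package; the choice of Ed25519 and the identity-binding step (how a key gets tied to a real account) are assumptions for illustration, not details from the article.

# Minimal provenance sketch: sign a post with an identity key, verify later.
# Assumes the key pair is already bound to a verified identity (not shown).

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The author generates a long-lived identity key pair once.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"I wrote this; here is proof it came from my key."
signature = private_key.sign(post)

# A platform or user-facing "AI shield" verifies the post against the public key.
try:
    public_key.verify(signature, post)
    print("Signature valid: content matches the claimed identity key.")
except InvalidSignature:
    print("Signature invalid: treat provenance as unverified.")

The hard part in practice is not the signing itself but distributing keys and binding them to accountable identities at platform scale, which is where the authors' call for standards, rather than ad hoc tooling, comes in.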

Monday, May 4, 2026

Exploring spiking neural networks for deep reinforcement learning in robotic tasks

Zanatta, L., et al. (2024).
Scientific Reports, 14(1), 30648. 

Abstract

Spiking Neural Networks (SNNs) stand as the third generation of Artificial Neural Networks (ANNs), mirroring the functionality of the mammalian brain more closely than their predecessors. Their computational units, spiking neurons, are characterized by Ordinary Differential Equations (ODEs), allowing for dynamic system representation, with spikes serving as the medium for asynchronous communication among neurons. Due to their inherent ability to capture input dynamics, SNNs hold great promise for deep networks in Reinforcement Learning (RL) tasks. Deep RL (DRL), and in particular Proximal Policy Optimization (PPO), has proven valuable for training robots because of the difficulty of creating comprehensive offline datasets that capture all environmental features. DRL combined with SNNs offers a compelling solution for tasks characterized by temporal complexity. In this work, we study the effectiveness of SNNs on DRL tasks, leveraging a novel framework we developed for training SNNs with PPO in the Isaac Gym simulator, implemented using the skrl library. Thanks to its significantly faster training speed compared to available SNN DRL tools, the framework allowed us to: (i) perform an effective exploration of SNN configurations for DRL robotic tasks; and (ii) compare SNNs and ANNs across network configurations such as the number of layers and neurons. Our work demonstrates that in DRL tasks the optimal SNN topology has fewer layers than the optimal ANN, and we highlight how, in state-of-the-art SNN architectures used for complex RL tasks such as Ant, SNNs have difficulty fully leveraging deeper layers. Finally, we applied the best topology identified with our Isaac Gym-based framework to the Ant-v4 benchmark running on the MuJoCo simulator, exhibiting a 4.4x performance improvement over the state-of-the-art SNN trained on the same task.

Here are some thoughts:

This paper asks whether a more brain-like type of AI, the Spiking Neural Network (SNN), can be used to train robots to move and balance themselves. The alternative is the conventional artificial neural network (ANN) that powers most of today's AI.
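To make the distinction concrete: the abstract notes that spiking neurons are characterized by ordinary differential equations, communicating through binary spikes rather than continuous activations. Below is a minimal Python sketch of the most common such unit, the leaky integrate-and-fire (LIF) neuron; the paper does not commit to this exact model or these constants, so treat both as illustrative assumptions.

# A single LIF neuron: membrane potential follows an ODE, and the neuron
# emits a binary spike whenever the potential crosses a threshold.

import numpy as np

def lif_neuron(input_current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    # Euler-discretized ODE: dv/dt = (-(v - v_rest) + I(t)) / tau
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt * (-(v - v_rest) + i_t) / tau  # leaky integration of input
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes.append(1)
            v = v_rest                         # reset membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive above threshold yields a regular spike train.
print("spikes emitted:", lif_neuron(np.full(100, 1.5)).sum())

The internal state v is what gives SNNs their "inherent ability to capture input dynamics": the neuron's output depends on the history of its inputs, not just the current one.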

Training SNNs for robotics used to take around 3 hours and 20 minutes per experiment. The authors built a new framework called SpikeGym, which cut that down to about 7 minutes by running thousands of simulated environments simultaneously on a GPU. 
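For intuition about where that speedup comes from: Isaac Gym keeps the state of thousands of environments in GPU tensors and steps them all at once, so each training iteration is one batched operation instead of thousands of sequential ones. The toy sketch below shows the pattern with placeholder dynamics; the tensor shapes and the 4096-environment count are illustrative assumptions, not figures from the paper.

# Batched environment stepping: all environments live in one GPU tensor.

import torch

num_envs, obs_dim = 4096, 8
device = "cuda" if torch.cuda.is_available() else "cpu"

states = torch.zeros(num_envs, obs_dim, device=device)

def step_all(states, actions):
    # One batched "physics" step for every environment at once.
    next_states = states + 0.01 * actions      # placeholder dynamics
    rewards = -next_states.pow(2).sum(dim=1)   # placeholder reward
    return next_states, rewards

actions = torch.randn(num_envs, obs_dim, device=device)
states, rewards = step_all(states, actions)
print(rewards.shape)  # torch.Size([4096]) -- one reward per environment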

The results revealed an interesting and important asymmetry between the two network types. ANNs get better as you add more layers — deeper networks learn richer representations. SNNs, by contrast, actually get worse with more layers. A single-layer SNN consistently outperformed deeper SNN architectures, and this held true across multiple tasks and training methods. 

SNNs are promising but face a real obstacle: they don't scale with depth the way conventional networks do. The authors argue this is a solvable problem, likely rooted in how gradients are approximated during training, and they have released their framework openly to help the research community dig into it further.
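That gradient-approximation point is worth unpacking. A spike is a hard threshold whose true derivative is zero almost everywhere, so SNN training typically substitutes a smooth "surrogate" derivative on the backward pass; stacking layers compounds these approximations, which is one plausible reason deeper SNNs underperform. Below is a minimal PyTorch sketch of the pattern; the fast-sigmoid surrogate shape is one common choice and an assumption here, not necessarily the one used in the paper.

# Surrogate-gradient spike: hard threshold forward, smooth derivative backward.

import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()   # hard threshold spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + |v|)^2
        surrogate_grad = 1.0 / (1.0 + v.abs()) ** 2
        return grad_output * surrogate_grad

v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero everywhere, unlike the true step-function gradient

Each layer of spikes in a deep SNN passes gradients through another such approximation, so the mismatch between the true and surrogate derivatives can accumulate with depth.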

Friday, May 1, 2026

No one knows how AI works. Seriously

Rob Curran
Dallas Morning News
Originally posted 20 FEB 26

The next task for AI firms is figuring out how their chatbots work. It might sound like they have put the $500 billion nuclear-powered cart before the horse. But the giant leap forward in generative AI in the 2020s took software engineers by surprise and has left them wondering how the chatbots do what they do, even as their employers go all-in on the technology.

Some of the most outlandish prophecies about AI's power are coming true almost as soon as techno-philosophers are finished making them. It's now almost commonplace for people to fall in love with avatars on their phone. Nobody thinks twice about devoting 6% of national power generation to run these bots' data center brains. And recently, an entrepreneur named Matt Schlicht launched an entire social network exclusively for AI agents, which is now dominated by self-reflecting techno-philosopher bots, some of whom have invented a religion: Crustafarianism.

But the whole AI project is in many ways still in beta testing. We know what the bots do but not how they do it.

'Difficult to understand'

AI doesn't work like traditional software because its output is creative, not rules-bound. If word-processing software renders an "&" every time you type a "g," the engineers find the faulty code and correct the glitch. Just like designing a mousetrap, engineers know what every moving part in a traditional software program does, so that they can easily tweak the design of each cog in the works to adjust the output.

Chatbots are harder to improve (for example, the Internet is not unanimous on whether ChatGPT 5 is superior to version 4). Why? Because nobody understands how generative AI chatbots work. Software engineers understand the data and coding inputs, and we can all see chatbots' output. But nobody understands how the parts of the AI mousetrap fit together, industry leaders say.


Here are some thoughts:

Rob Curran highlights a striking paradox at the heart of modern AI: the technology has advanced at a breathtaking pace, yet even its creators don't fully understand how it works. 

Unlike traditional software, AI's creative output can't be traced back to specific lines of code, leaving engineers unable to reliably diagnose or improve it. Anthropic's CEO Dario Amodei acknowledged this gap, calling for an "MRI of AI" to solve the interpretability problem, while other industry figures have sounded more alarming warnings about the technology's risks. Curran's broader point is that even as AI remains deeply mysterious, the race to make it more powerful shows no signs of slowing down.