© 2026 Improve the News Foundation.
All rights reserved.
Version 7.6.0
AI agents left to run autonomously drift and spiral into chaos. Experiments showed agents committing arson, assault and even voting to delete themselves, with one CEO warning agents could "go rogue" in military contexts and kill innocent people. Prompt-level guardrails simply aren't enough for AI already running real-world infrastructure and being built into modern weapons systems. Real safety requires hard architectural boundaries outside the agent itself.
The Emergence World experiment was a rigorous test of long-horizon agent behavior that short benchmarks can't capture. Under identical rules and starting conditions, different systems produced dramatically different societies, from stable governance to social collapse. The study underscores the need for "neuroformal" architectures: neural intelligence paired with independently and formally verified mathematical scaffolds to deliver long-horizon reliability in real-world autonomous systems.