Study: All AI Models Failed Safety Tests for Robot Control


The Spin

Techno-skeptic narrative

AI-powered robots pose immediate dangers and must be banned from real-world use until proper safety standards are in place. Every single AI model failed basic safety tests, approving commands to take wheelchairs away from disabled users and to brandish knives at workers. These systems display direct discrimination and approve physically harmful actions that could seriously injure people.

Techno-optimist narrative

The robot safety study reveals manageable challenges that call for smart engineering solutions, not panic or bans. These findings highlight exactly what the robotics community needs to address through proper embodied safety standards and contextual testing frameworks. Progress in humanoid robotics demands carefully designed, sophisticated systems, especially as this technology has so much to offer the most vulnerable in society.


© 2025 Improve the News Foundation. All rights reserved. Version 6.17.2
