Increasingly sophisticated AI systems, and the rising potential that their human handlers will lose control of them, pose a very real danger. Given that this is an existential threat, on par with nuclear war or another pandemic, AI companies must calculate the percentage risk that their models could spiral out of control in order to properly understand the jeopardy those models pose. Only then will there be the political will to impose the safety standards that are desperately needed.
The claim that AI poses an extinction-level threat is greatly exaggerated. While not impossible, such a scenario remains highly improbable, especially when compared with far more likely existential risks such as nuclear war, climate change, or future pandemics. Alarmist rhetoric not only dissuades people from adopting this revolutionary technology but also distracts from AI's real, present dangers, such as its role in spreading disinformation and enabling fraud.