Hundreds of AI researchers and tech leaders signed a statement that 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'
Verified: 30 May 2025 · Last updated: 30 May 2025 · Region: international
In May 2023, the Center for AI Safety (CAIS) published a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was signed by over 350 leading AI researchers, executives, and public figures, including:
- Sam Altman (CEO, OpenAI)
- Demis Hassabis (CEO, Google DeepMind)
- Dario Amodei (CEO, Anthropic)
- Geoffrey Hinton (Turing Award winner, “Godfather of AI”)
- Yoshua Bengio (Turing Award winner)
This represents a remarkable consensus among the people who understand the technology best. The signatories are not outsiders speculating; they are the people building these systems.
Tags: existential risk · expert consensus · AI safety