The UK AI Safety Institute (AISI) operates on a voluntary basis: it has no statutory authority to compel AI companies to submit models for testing, nor any power to enforce safety standards.
Verified: 1 March 2026 · Last updated: 1 March 2026 · Jurisdiction: UK
The AI Safety Institute (AISI), established following the 2023 AI Safety Summit at Bletchley Park, conducts pre-deployment testing of frontier AI models. This testing is entirely voluntary, however: companies choose whether to participate.
AISI has no statutory powers to:
- Compel companies to submit models for evaluation
- Block the release of models deemed unsafe
- Impose penalties for non-compliance with safety standards
- Set binding safety thresholds
As a result, the UK’s primary AI safety body depends on the goodwill of the very companies it is supposed to oversee.
Tags: AISI · enforcement · UK policy