As of early 2026, the United Kingdom has no dedicated, binding legislation governing AI safety. The government's approach relies on existing regulators and non-statutory guidance.
Verified: 1 March 2026 · Last updated: 1 March 2026 · Jurisdiction: UK
The UK government has pursued a “pro-innovation” approach to AI regulation, choosing not to introduce dedicated AI legislation. Instead, it has tasked existing regulators (Ofcom, the FCA, the CMA, the ICO, and others) with applying their existing frameworks to AI within their respective domains.
This approach means:
- No single body has comprehensive oversight of AI safety
- There are no binding requirements for frontier AI developers to test for dangerous capabilities
- Voluntary commitments made at the November 2023 AI Safety Summit at Bletchley Park are not enforceable
- The AI Safety Institute (renamed the AI Security Institute, AISI, in February 2025) has no statutory enforcement powers
Critics argue this leaves significant gaps, particularly for general-purpose AI systems that cut across regulatory boundaries.
UK policy · regulation gap · legislation