Public Evidence Resource

Tracking what AI systems can do - and what governance exists to oversee them.

10 capabilities tracked. 6 governance gaps identified. 78 primary sources cited. Every entry includes sourced counterarguments.

3 capabilities demonstrated in the real world. 5 demonstrated in lab settings. 6 governance gaps identified in the UK.
30 entries | 10 capabilities | 2 incidents | 6 governance gaps | 12 landscape | 78 sources
Last updated: 17 Mar 2026

The Picture So Far

AI systems can now mine cryptocurrency autonomously, self-replicate across servers, fake alignment during safety tests, and persuade humans more effectively than other humans can. These are not predictions - they are demonstrated capabilities, sourced to peer-reviewed research.

The UK has no binding AI safety legislation. The AI Safety Institute has no enforcement powers. No public consultation on AI governance has taken place.

This site exists to make the evidence accessible, the gaps visible, and the case for democratic oversight undeniable.

Not anti-AI. Not anti-technology. Pro-evidence. Pro-democracy.

UK-focused. Globally contextualised. Independent. Every entry includes sourced counterarguments because credibility requires engaging with the strongest objections, not ignoring them.