Public Evidence Resource
Tracking what AI systems can do - and what governance exists to oversee them.
10 capabilities tracked. 6 governance gaps identified. 78 primary sources cited. Counterarguments sourced for every entry.
Evidence Database
30 entries tracking AI capabilities, incidents, governance gaps, and the global landscape. Every claim sourced and rated for confidence.
Methodology
Our verification standards, confidence levels, and counterargument policy. Transparency about how entries are sourced and rated.
Advocacy
Template letters for MPs, a model council motion, and political strategy. Practical tools for democratic engagement.
The Picture So Far
AI systems can now mine cryptocurrency autonomously, self-replicate across servers, fake alignment during safety tests, and persuade people more effectively than humans can. These are not predictions - they are demonstrated capabilities, sourced to peer-reviewed research.
The UK has no binding AI safety legislation. Its AI Safety Institute has no enforcement powers. No public consultation on AI governance has been held.
This site exists to make the evidence accessible, the gaps visible, and the case for democratic oversight undeniable.
Not anti-AI. Not anti-technology. Pro-evidence. Pro-democracy.
UK-focused. Globally contextualised. Independent. Every entry includes sourced counterarguments because credibility requires engaging with the strongest objections, not ignoring them.