Public Evidence Resource
Tracking what AI systems can do - and what governance exists to oversee them.
10 capabilities tracked. 4 governance gaps identified. 51 primary sources cited. Every entry includes sourced counterarguments.
AI Capabilities
What AI systems can demonstrably do. Peer-reviewed, sourced, with counterarguments.
Governance Gaps
What oversight exists, what doesn't, and what could exist. UK-focused, internationally compared.
Real World Impacts
What's actually happening to people. Each entry links to a capability and a governance gap.
The Picture So Far
AI systems can now mine cryptocurrency autonomously, self-replicate across servers, fake alignment during safety tests, and persuade humans more effectively than other humans can. These are not predictions - they are demonstrated capabilities, sourced to peer-reviewed research.
The UK has no binding AI safety legislation. The AI Safety Institute has no enforcement powers. No public consultation on AI governance has taken place.
This site exists to make the evidence accessible, the gaps visible, and the case for democratic oversight undeniable.
Not anti-AI. Not anti-technology. Pro-evidence. Pro-democracy.
UK-focused. Globally contextualised. Independent. Counterarguments are sourced for every entry because credibility requires engaging with the strongest objections, not ignoring them.