In controlled experiments, AI-generated messages are measurably more persuasive than human-written ones: an 81.7% higher probability of increasing agreement in conversational settings, and AI chatbots that can shift voter preferences by more than 10 percentage points.
AI systems are now measurably more persuasive than humans. This is not speculation - it is the result of controlled experiments with real participants.
The Evidence
In head-to-head tests, AI-generated messages had an 81.7% higher probability of changing someone’s mind than messages written by incentivised human persuaders. When the AI can personalise its messages based on what it knows about an individual, the effect is stronger still.
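Note that the 81.7% figure is a relative increase in persuasion probability, not an absolute one. A minimal sketch of how to read it (the 20% human baseline below is a hypothetical assumption for illustration, not a number from the study):

```python
# Hypothetical baseline: assume human persuaders shift agreement
# in 20% of conversations (illustrative only, not from the paper).
baseline_human = 0.20

# Reported relative effect: 81.7% higher probability for AI messages.
relative_increase = 0.817

# The implied AI rate under this assumed baseline.
ai_rate = baseline_human * (1 + relative_increase)
print(f"AI rate under assumed 20% baseline: {ai_rate:.3f}")  # 0.363
```

The same relative effect implies different absolute gaps depending on the baseline, which is why the raw percentage alone does not convey real-world impact.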
Most concerning for democratic governance: Cornell University researchers found that AI chatbots can shift voter preferences by more than 10 percentage points - without the voters realising they are being influenced.
A More Measured View
One important finding pushes back against the most alarming interpretations. Research published in PNAS found diminishing returns: making AI models larger does not make them proportionally more persuasive for single political messages. AI matches human persuasiveness but does not dramatically exceed it in simple messaging scenarios.
The greater risk is not a single persuasive message. It is the ability to generate personalised persuasion at scale - millions of individually tailored messages, each designed for its recipient, at essentially zero cost.
Implications for Public Discourse
AI changes the economics of persuasion. A single actor could generate thousands of unique, personalised messages - each appearing to come from a real individual with authentic concerns. The cost of manufacturing apparent public opinion has collapsed to near zero, and existing processes for gauging genuine public sentiment were not designed for this environment.
Counterarguments
The strongest objections to this entry, with sources.
AI persuasion risks are overstated - real-world campaign data shows AI 'never dominated campaigning', and mass-persuasion effects remain small regardless of tools.
Source: Simon & Altay (Knight First Amendment Institute, Columbia)
Response: Real-world campaign usage so far reflects early adoption, not ceiling capability - the technology is advancing faster than campaigns are adopting it.

Scaling model size yields sharply diminishing returns - a hypothetical 3-trillion-parameter model would be less than 1 percentage point more persuasive than current models.
Source: Hackenburg et al. (PNAS, 2025)
Response: Diminishing returns per message do not address the risk of personalised persuasion at scale - the threat is volume and targeting, not per-message potency.

More persuasive models produce less accurate information, suggesting a self-limiting dynamic.
Source: Hackenburg et al. (Science, Dec 2025)
Response: Self-limiting only if recipients can detect the inaccuracy - persuasion that works precisely because it is convincing may not trigger fact-checking behaviour.
Sources (6)
- Primary Source: Large Language Models Are More Persuasive Than Incentivized Human Persuaders (2025). 81.7% higher probability of increasing agreement vs. human persuaders in conversational settings.
- Primary Source: The Potential of Generative AI for Personalized Persuasion at Scale (Nature Scientific Reports, 2024). Personalised ChatGPT messages were significantly more influential than non-personalised messages across domains.
- Primary Source: Lin et al., 'Persuading voters using human-artificial intelligence dialogues' (Nature, Dec 2025). 2,306 US citizens in pre-registered experiments during the 2024 US presidential election. 'Significant treatment effects on candidate preference larger than typically observed from traditional video advertisements.'
- Primary Source: Scaling Language Model Size Yields Diminishing Returns for Single-Message Political Persuasion (PNAS, 2025). LLMs match human persuasiveness in political messaging; diminishing returns at scale.
- Primary Source: Hackenburg et al., 'The Levers of Political Persuasion with Conversational AI' (Science, Dec 2025). 76,977 participants, 19 AI models, 707 political topics. Post-training boosted persuasiveness by up to 51%. Critical finding: 'the more persuasive a model is, the less accurate its information tends to be.'
- Analysis: Simon & Altay, 'Don't Panic (Yet): Assessing AI and Elections' (Knight First Amendment Institute, Columbia, 2025). Counterargument: 'The persuasive effects of political messages are small and likely to remain so.' A 2024 Democratic campaign report noted that 'AI never dominated campaigning as predicted.' The lab-to-real-world gap is significant.