AI political persuasion papers in Nature+Science

David Rand

Dec 4, 2025, 7:01:02 PM
to Human Cooperation Lab
Congrats all!


Persuading Voters using Human-AI Dialogues

Hause Lin, Gabriela Czarnek, Benjamin Lewis, Joshua P. White, Adam J. Berinsky,
Thomas Costello, Gordon Pennycook*, & David G. Rand*

*Corresponding authors: gordon.p...@cornell.edu, dg...@cornell.edu

There is great public concern regarding the potential use of generative AI for political persuasion and the resulting impacts on elections and democracy. We inform these concerns using preregistered experiments to assess large language models’ ability to influence voters’ attitudes. In the context of the 2024 U.S. presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election, we randomly assigned participants to have a conversation with an AI model that advocated for one of the top two candidates. We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video ads. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies used by the models suggests they persuade with relevant facts and evidence, rather than employing sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AIs advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.



The levers of political persuasion with conversational artificial intelligence

Kobi Hackenburg*†, Ben M. Tappin*†, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand*, Christopher Summerfield*

*Corresponding authors: kobi.ha...@oii.ox.ac.uk (K.H.); b.ta...@lse.ac.uk (B.M.T.); dg...@cornell.edu (D.G.R.); christopher...@psy.ox.ac.uk (C.S.)

There are widespread fears that conversational artificial intelligence (AI) could soon exert unprecedented influence over human beliefs. In this work, in three large-scale experiments (N = 76,977 participants), we deployed 19 large language models (LLMs)—including some post-trained explicitly for persuasion—to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. We show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods—which boosted persuasiveness by as much as 51% and 27%, respectively—than from personalization or increasing model scale, which had smaller effects. We further show that these methods increased persuasion by exploiting LLMs’ ability to rapidly access and strategically deploy information and that, notably, where they increased AI persuasiveness, they also systematically decreased factual accuracy.

--
David G. Rand (he/him)
Information Science, Marketing, and Psychology
Cornell University