Good afternoon everyone,
Thomas Renault will present next week (10/6 1:30-2:30pm ET; see blurb below).
Lab meeting will be hybrid: on Zoom (https://mit.zoom.us/j/91655448125) and in person in the conference room on the 15th floor of E94 (1579; 245 First Street).
Best regards,
Antonio
---
Title: Is This True? Exploring LLM-Powered Fact-Checking in Online Discussion
Authors: Thomas Renault, Mohsen Mosleh, David Rand
Abstract: We study the use of large language model (LLM) bots for fact-checking on social media by analyzing a large-scale dataset of user interactions on X. We find that fact-checking requests account for over 5 percent of all messages sent to LLM bots, with most of these requests centered on real-time political and geopolitical events. Rising use of LLM-based fact-checking correlates with reduced reliance on Community Notes, suggesting that it not only complements but can also substitute for other fact-verification mechanisms. While users affiliated with both major U.S. political parties engage with LLM fact-checking at similar rates, we find that posts from Republican-affiliated users are significantly more likely to be flagged as false, even when falsity is defined in real time by an automated system. Finally, although we observe a high level of internal consistency across LLM responses, we outline ongoing efforts to evaluate the factual accuracy of LLM-generated responses.