The proliferation and dissemination of misleading or inaccurate content in the digital ecosystem are among the most significant challenges facing contemporary society. The growing volume of content generated and disseminated online, by both humans and Artificial Intelligence (AI) applications, makes this scenario increasingly complex and multifaceted.
In recent years, research has shown that disinformation cannot be interpreted exclusively as a problem of false content: it is a phenomenon that emerges from the interaction among social dynamics, the architectures of digital platforms, models of information dissemination, and mechanisms of collective attention.
In this context, generative AI tools can contribute to the production and circulation of misleading content, amplifying narratives, existing biases, and polarization dynamics. At the same time, deeper questions are emerging about the epistemic implications of these systems: the ability of language models to produce highly plausible content can alter the processes through which individuals and institutions evaluate, interpret, and assign credibility to online information. Recent studies have described this transformation as a possible transition towards forms of "epistemia", in which linguistic plausibility tends progressively to replace traditional processes of epistemic verification.
At the same time, AI can offer useful tools both for identifying problematic content and for analyzing online information dynamics (for example, by identifying dissemination patterns, inconsistencies in content, or possible sources of disinformation), thus supporting the verification, analysis, and understanding of digital information phenomena. Given this dual role of AI, as both a possible amplifying factor and a tool for analysis and countermeasures, the objective of this workshop is to explore, from a critical and interdisciplinary perspective, the role of such technologies in the different phases of the generation, verification, and dissemination of online content.
The topics of interest for the workshop include:
Algorithms and methods for the identification of misinformation and disinformation;
Identification of fabricated or manipulated (multimodal) content, including deepfakes, synthetic audio, and automatically generated texts;
Qualitative and quantitative studies on disinformation phenomena;
Detection of disinformation campaigns;
AI practices and tools supporting fact-checking activities;
Characterization, detection, and analysis of bots;
Analysis and characterization of communities in social networks, including the dynamics of echo chamber formation and the spread of conspiracy theories;
Recommendation systems and their impact on the spread of disinformation;
Ethical and legal aspects related to disinformation.
Scientific contributions (in either Italian or English) must be submitted through the following portal: submission. If the number of submissions exceeds the time available in the workshop program, presentations from the same research group may be combined, subject to the authors' agreement. At least one registration is required for each accepted paper.
All accepted contributions will be published on the conference website under a CC BY 4.0 license. Contributions written in English and consisting of at least 5 pages will be published in CEUR-WS (Scopus-indexed), subject to the authors’ consent.
The review process is single-blind and evaluates the relevance of each submission to the workshop. Accepted submissions will be presented as short talks on June 18, 2026.
Format: minimum length 2 pages, maximum 6 pages (including references), using the CEUR-WS template.
Submission deadline: May 9, 2026.
Matteo Cinelli, Sapienza University of Rome
Gabriella Pasi, University of Milano-Bicocca
Walter Quattrociocchi, Sapienza University of Rome
Marco Viviani, University of Milano-Bicocca