CFP for the DHOW-MiLLA: Joint Workshop on Diffusion of Harmful Content on Online Web and Countering Misinformation in the Age of LLMs and Agents


Thomas Mandl

Jan 7, 2026, 4:47:50 PM
to fire...@googlegroups.com

CFP for the DHOW-MiLLA: Joint Workshop on Diffusion of Harmful Content on Online Web and Countering Misinformation in the Age of LLMs and Agents


Submission deadline: January 10, 2026 AOE

Workshop website: https://dhow-workshop.github.io/2026/

Co-located with WWW 2026

Dubai, UAE, April 13-14, 2026


Workshop Description

With the advancement of digital technologies and devices, online content is easier to access than ever, and harmful content spreads alongside it. Such content appears on many platforms and in multiple languages, and the topic is broad, spanning several research directions; from the user's perspective, however, all of it causes harm. Research has typically treated these phenomena in isolation, for example misinformation or hate speech studied on a single platform, in a single language, or on a single issue. This fragmentation allows spreaders of harmful content to switch platforms and languages to reach their audience. Harm is also not limited to social media but extends to news media, where spreaders share harmful content in posts, news articles, comments, and hyperlinks. There is therefore a need to study harmful content across platforms, languages, modalities, and topics. This workshop will bring research on harmful content under one umbrella so that work on different topics (hate speech, misinformation, disinformation, self-harm, offensive content, etc.) can yield novel methods and recommendations for users, leveraging text analysis together with image, audio, and video recognition to detect harmful content in diverse formats. The workshop will also cover ongoing issues such as wars and the elections of 2025.


We believe this workshop will provide a unique opportunity for researchers and practitioners to exchange ideas, share the latest developments, and collaborate on addressing the challenges associated with harmful content spread across the Web. We expect the workshop to generate insights and discussions that advance the field of societal artificial intelligence (AI) toward a safer internet. In addition to attracting high-quality research contributions, one of the workshop's aims is to mobilise researchers working in related areas to form a community.


Submission Topics

  • Studying different types of harmful content 

  • Improving Factual Reliability in LLMs 

  • Computational fact-checking & misinformation detection 

  • Role of Generative AI in mitigating harmful content 

  • Harassment, bullying, and hate speech detection 

  • Explainable AI for harmful content analysis 

  • Agentic AI Systems and Misinformation 

  • Detection methods for LLM/VLM-generated text, audio, and imagery 

  • Deepfake and Synthetic Media 

  • Ethical & Societal Implications of AI in Content Moderation 

  • Qualitative and quantitative studies on harmful content

  • Psychological effects of harmful content, including mental health impacts 

  • Approaches for data collection and annotation of harmful content using large multimodal models 

  • User studies on the effects of harmful content on people 

  • Human-AI Collaboration and Defenses


Submissions

- Submission Instructions: https://dhow-workshop.github.io/2026/#call

- Submission Link: https://openreview.net/group?id=ACM.org/TheWebConf/2026/Workshop/DHOW-MiLLA


Important Dates

Submission deadline: extended to January 7, 2026

Notification of acceptance: January 26, 2026

Camera-ready papers due: February 2, 2026

Workshop date: April 13-14, 2026


Workshop organizers

  • Thomas Mandl, University of Hildesheim, Germany 

  • Haiming Liu, University of Southampton, UK 

  • Gautam Kishore Shahi, University of Duisburg-Essen, Germany 

  • Amit Kumar Jaiswal, Indian Institute of Technology (BHU) Varanasi, India 

  • Durgesh Nandini, University of Bayreuth, Germany 

  • Luis-Daniel Ibáñez, University of Southampton, UK 

  • Junichi Suga, Fujitsu Research, Japan 

  • Dai Yamamoto, Fujitsu Research, Japan 

  • Rahul Mishra, Fujitsu Research, India 

  • Rajakrishnan P Rajkumar, IIIT Hyderabad, India 

  • Sagar Uprety, University College London, UK 

  • Bornali Phukon, University of Illinois Urbana Champaign, USA 

  • Sujit Kumar, Nanyang Technological University, Singapore


Thomas Mandl

Feb 20, 2026, 2:25:13 PM
to fire...@googlegroups.com

📬 🌐  **Call for Papers: DHOW 2026 – Diffusion of Harmful Content on the Web**  


📍 *Co-located with WebSci 2026 | Braunschweig, Germany | May 26–29, 2026*  


✨ We’re excited to announce the extended CFP for the DHOW 2026 workshop!
Join us in tackling one of the most pressing challenges of our digital age: the **spread of harmful content across platforms, languages, and formats**.  

🔍 Why this matters:
With the rise of AI, social media, and global connectivity, harmful content — from misinformation 📰 and hate speech 🔥 to deepfakes 🎭 and self-harm triggers 🛑 — spreads faster than ever.  
But here’s the catch: researchers often study these issues in isolation — one platform, one language, one type.  
👉 This leads to **"harmful content hopping"** — spreaders move to evade detection.  

🎯 Our mission?
Bring together diverse research under one umbrella 🤝 to:  
- Study **cross-platform, multi-lingual, multimodal** harmful content  
- Combine **text, image, audio, and video analysis** 🖼️🔊🎥  
- Explore the role of **Generative AI** 🤖 and **Explainable AI** 🧠 in detection & defense  
- Understand **psychological impacts** 🧠 and **user experiences** 👥  
- Address urgent topics like **elections, war, and disinformation** in 2025 🌍  

📊 We’re looking for scientific contributions on:
- 📊 Analysis of hate speech, misinformation, disinformation, self-harm, offensive content  
- ✅ Computational fact-checking & AI-driven detection  
- 🤖 Role of Generative AI in mitigating harm  
- 🎭 Deepfakes & their societal impact  
- 🌍 Multi-lingual & cross-platform detection (e.g., spam, bots, trolls)  
- 🧪 Qualitative & quantitative studies on mental health effects  
- 📥 LLM-assisted data collection & annotation  
- 👥 User studies & human-AI collaboration in defense  
- 🧩 Explainable AI for transparency & trust  

📅 Important Dates:
- 📅 Submission deadline: **March 15, 2026 (AOE)**  
- ✅ Notification of acceptance: **March 29, 2026**  
- 📤 Camera-ready due: **April 2, 2026**  
- 🎉 Workshop: **May 26–29, 2026**  

🔗 Submit your work here:
👉 [Submission Portal (OpenReview)](https://openreview.net/group?id=acmmm.org/WebSci/2026/Workshop/DHOW)  
🌐 [Workshop Website](https://dhow-workshop.github.io/2026/)  

👥 We’re building a community!
This workshop is more than a venue — it’s a **collaborative space** for researchers, practitioners, and policymakers to share insights, spark innovation, and shape a safer, more responsible internet 🌱.  

👨‍💻 Organizing Team:
- Thomas Mandl (University of Hildesheim, Germany)  
- Haiming Liu (University of Southampton, UK)  
- Gautam Kishore Shahi (University of Duisburg-Essen, Germany)  
- Amit Kumar Jaiswal (IIT BHU Varanasi, India)  
- Durgesh Nandini (University of Bayreuth, Germany)  
- Luis-Daniel Ibáñez (University of Southampton, UK)  
 

#DHOW2026 #WebSci2026 #HarmfulContent #AIforGood #SocietalAI #DigitalSafety #ResearchCommunity

Thomas Mandl

Feb 21, 2026, 5:41:23 AM
to fire...@googlegroups.com

🌐  **Call for Papers: DHOW 2026 – Diffusion of Harmful Content on the Web**  https://dhow-workshop.github.io/2026/2  

🌐 [Workshop Website](https://dhow-workshop.github.io/2026/2)