Cross-Cultural Misogynistic Meme Detection Grand Challenge (CC-MMD) 2026
Held in conjunction with the 28th ACM International Conference on Multimodal Interaction (ICMI 2026), Napoli, Italy | October 5–9, 2026
Online misogyny is increasingly evolving into complex multimodal formats. Memes—the fusion of text and imagery—often leverage humor, irony, and cultural shorthand to mask harmful ideologies. However, what is perceived as misogynistic is rarely universal; it is deeply rooted in local norms, language nuances, and social symbolism.
The CC-MMD Grand Challenge benchmarks the next generation of culturally robust multimodal systems. We focus on binary misogyny classification across three distinct regions: Indian, Chinese, and Western (English) contexts. This challenge moves beyond single-pool testing to evaluate how well AI generalizes across specific cultural partitions, ensuring that moderation systems are inclusive and reliable for a global digital population.
Participants are invited to develop multimodal models that can navigate:
Implicit Meaning: Detecting harm when neither the text nor the image is explicitly toxic in isolation.
Cultural Grounding: Interpreting symbols and slang unique to Indian, Chinese, and Western contexts.
Task & Data
Task: Binary classification (Misogynistic vs. Non-Misogynistic) of multimodal memes.
Dataset: A systematic, cross-culturally annotated dataset featuring multilingual text and diverse imagery.
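To make the task concrete, here is a minimal late-fusion sketch of the kind of system the challenge invites: embed the meme's text and image separately, concatenate the embeddings, and score the result with a binary classifier head. Everything below is illustrative, not an official baseline; in a real entry, the random stand-in vectors would come from pretrained encoders (e.g. a multilingual text model and a vision backbone), and the head would be trained on the released data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(text_emb, image_emb):
    """Late fusion: concatenate the two modality embeddings."""
    return np.concatenate([text_emb, image_emb])

def predict_misogyny(features, weights, bias):
    """Sigmoid over a linear score -> probability of 'Misogynistic'."""
    score = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))

# Stand-in embeddings (assumed 512-d per modality; real systems would
# obtain these from pretrained text and image encoders).
text_emb = rng.standard_normal(512)
image_emb = rng.standard_normal(512)

features = fuse(text_emb, image_emb)
weights = rng.standard_normal(1024) * 0.01  # untrained toy weights
prob = predict_misogyny(features, weights, bias=0.0)
label = "Misogynistic" if prob >= 0.5 else "Non-Misogynistic"
print(f"p(misogynistic) = {prob:.3f} -> {label}")
```

Late fusion is only one design choice; the cultural-grounding aspect of the task may favor architectures where text and image interact earlier, so the two modalities can jointly resolve implicit meaning.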
Important Dates
Task Announcement: February 27, 2026
Release of Training Data: February 27, 2026
Release of Test Data: April 1, 2026
Run Submission Deadline: April 20, 2026
Results Declared: May 5, 2026
Paper Submission Deadline: June 10, 2026
Peer Review Notification: July 8, 2026
Camera-ready Paper Due: July 23, 2026