<<Apologies for cross-posting>>
Dear Community,
We are pleased to announce the call for papers for "The Third MetaFood Workshop" at CVPR 2026. See full details below.
---------------------------------------------------------------
3rd MetaFood Workshop
Held in conjunction with CVPR 2026
June 3–4, 2026, Denver, Colorado, USA
----------------------------------------------------------------
CALL FOR PAPERS [DEADLINE EXTENSION]
The food domain presents uniquely complex visual and physical characteristics: food appearance deforms, mixes, and changes through manipulation and consumption. It thus provides an ideal test bed for powerful computer vision and deep learning algorithms. The MetaFood Workshop (MTF) 2026 invites the CVPR community to explore food data analysis and interaction as a new frontier for embodied perception, video generation, and physics-aware modeling.
Understanding how food interacts with tools and humans enables fine-grained video reasoning, not only for estimating how much is eaten, but also for revealing intricate multi-material dynamics during cooking and eating. By bridging embodied AI, dynamic 3D reconstruction, vision-language reasoning, and generative modeling, MetaFood 2026 aims to advance physically grounded, fine-grained understanding and synthesis of food in motion.
MetaFood’26 will encompass a broad range of topics, including but not limited to:
- Embodied and causal understanding of food manipulation and consumption
- Physics-informed understanding and 3D reconstruction of deformable, fragile, and multi-material food
- Temporal modeling of food transformations and continuous state estimation (e.g., cooking or eating)
- Vision–language reasoning, in-context learning, and retrieval-augmented generation for food
- Multimodal learning across images, videos, audio, and structured/unstructured text
- Self-supervised, continual, semi-supervised, and weakly supervised learning for in-the-wild food data
- Uncertainty modeling and learning from noisy or ambiguous labels
- Food portion, volume, and nutrition estimation
- Food image and video generation using generative AI
- 2D/3D classification, detection, and segmentation of food items and ingredients
We welcome original research that presents novel applications, innovative algorithms, or critical analyses addressing these challenges. Join us at the MetaFood Workshop to explore how computer vision can revolutionize food understanding and contribute to solving real-world challenges in food computing.
OpenReview registration deadline: March 13, 2026 (11:59 PM AoE)
Paper submission deadline: March 15, 2026 (11:59 PM AoE)
Supplementary material deadline: March 15, 2026 (11:59 PM AoE)
Acceptance notification: March 25, 2026
OpenReview submission website:
The Best Paper will be selected in mid-April and will receive a complimentary full conference registration, generously donated by Prof. Dima Damen, University of Bristol.
We look forward to your contributions.
Regards,
Third MetaFood Workshop - Organizing Team