[Call for Papers] FOCUS ECCV 2024 Workshop on FOundation models Creators meet USers


tomm...@gmail.com

Jun 13, 2024, 12:47:03 PM
to Machine Learning News
On behalf of the Workshop Organizers

Dear colleagues,

We are excited to inform you that the FOCUS: Foundation Models Creators meet Users workshop will be held on September 30th, 2024, at ECCV 2024 in Milan, Italy.

This workshop presents an excellent opportunity for those engaged in integrating cutting-edge foundation models into robotics pipelines and applications. It will be a platform to present your work, exchange ideas, and discuss future trends.
Please find more details about the workshop and important deadlines below.

Summary
- Workshop Website: https://focus-workshop.github.io/
- CMT Submission Page: https://cmt3.research.microsoft.com/FOCUS2024
- Submission Opening: June 15, 2024
- Submission Deadline: July 15, 2024 (11:59 p.m. CET)
- Author Notification: July 31, 2024
- Camera-Ready Deadline: August 15, 2024

Workshop Overview
Over the past few years, the field of Artificial Intelligence has witnessed significant growth, largely fueled by the development of large-scale machine learning models, also called foundation models. Such general-purpose solutions often reveal capabilities that go beyond what their creators originally envisioned, motivating users to adopt these models for a broad spectrum of applications.
The goal of this workshop is to identify and discuss strategies to assess both positive and negative (possibly unexpected) behaviors in the development and use of foundation models.
Particular attention will be given to applications that diverge significantly from the scenarios encountered during the training phase of foundation models. These include application-specific visual understanding, uncertainty evaluation, goal-conditioned reasoning, human habit learning, task and motion planning, scene navigation, vision-based manipulation, etc.
Our purpose is to foster an open discussion between foundation model creators and users, analyzing the most pressing open questions for the two communities and encouraging new, fruitful collaborations.

Call for Papers
The workshop calls for submissions on topics including, but not limited to:
  • New vision-and-language applications
  • Supervised vs. unsupervised foundation models and downstream tasks
  • Zero-shot, few-shot, continual, and lifelong learning of foundation models
  • Open set, out-of-distribution detection and uncertainty estimation
  • Perceptual reasoning and decision making: alignment with human intents and modeling
  • Prompt and visual instruction tuning
  • Novel evaluation schemes and benchmarks
  • Task-specific vs general-purpose models
  • Robustness and generalization
  • Interpretability and explainability
  • Ethics and bias in prompting
Submission Guidelines
Papers should be submitted at:
https://cmt3.research.microsoft.com/FOCUS2024

We accept two types of submission:

Full papers: must present original research, not published elsewhere, and follow the ECCV main conference policies and format, with a maximum length of 14 pages (extra pages containing only references are allowed). Accepted full papers will be included in the ECCV 2024 Workshop proceedings. Supplemental materials are not allowed.

Short papers: previously or concurrently published works that could foster the workshop objectives. Short papers have a maximum length of 4 pages (extra pages containing only references are allowed) and will be presented without inclusion in the ECCV 2024 Workshop proceedings. Supplemental materials are not allowed.
The review process is double-blind and there is no rebuttal.

Speakers
Zsolt Kira, Professor, Georgia Tech
Ishan Misra, Director, Facebook AI Research
Hilde Kuehne, University of Bonn

Organizing Committee
Antonio Alliegro, Politecnico di Torino
Francesca Pistilli, Politecnico di Torino
Songyou Peng, ETH Zurich
Biplab Banerjee, IIT Bombay
Gabriela Csurka, Naver Labs Europe
Giuseppe Averta, Politecnico di Torino
