CFP: CVPR 2020 Workshop on Learning from Unlabeled Videos (LUV)

Yale Song

Mar 16, 2020, 4:26:32 PM
to Machine Learning News

========================================================

                                            Call for papers:

     CVPR 2020 Workshop on Learning from Unlabeled Videos (LUV)

                         https://sites.google.com/view/luv2020

https://cmt3.research.microsoft.com/LUV2020

========================================================


--------------

Overview

--------------


Deep neural networks trained on large numbers of labeled images have recently led to breakthroughs in computer vision. However, we have yet to see breakthroughs of a similar magnitude in the video domain. Why is this? Should we invest more in supervised learning, or do we need a different learning paradigm?


Unlike images, videos contain extra dimensions of information such as motion and sound. Recent approaches leverage these signals to tackle challenging tasks in an unsupervised/self-supervised setting, e.g., learning to predict representations of future time steps in a video (RGB frames, semantic segmentation maps, optical flow, camera motion, and the corresponding sound), learning spatio-temporal progression from image sequences, and learning audio-visual correspondences.


This workshop aims to promote comprehensive discussion around this emerging topic. We invite researchers to share their experience and knowledge in learning from unlabeled videos and to brainstorm bold new ideas that could generate the next breakthrough in computer vision.



----------

Topics

----------


We invite submissions of 2-4 page extended abstracts on topics related to:


- Unsupervised and self-supervised learning with unlabeled videos

- Video (future frame) prediction and generation

- Cross-modal self-supervision

- Sound prediction from video, and vice versa

- Unsupervised visual concept discovery from videos

- Unsupervised visual representation learning

- Learning from noisy web videos

- Learning from actively acquired videos



---------------------------------

Submission Instructions

---------------------------------


All submissions will be handled electronically via the workshop's CMT Website:

https://cmt3.research.microsoft.com/LUV2020


Papers are limited to four pages, including references. Please refer to the CVPR author guidelines for detailed formatting instructions:

http://cvpr2020.thecvf.com/submission/main-conference/author-guidelines


We accept papers that have recently been published elsewhere, that will be presented at CVPR 2020, or that have been submitted to ECCV 2020.


Accepted papers will not appear in any proceedings and will be considered non-archival. We will ask authors to post their papers on the workshop website.



-----------------------

Important Dates

-----------------------


(All deadlines are at 11:59 p.m. Pacific Standard Time on the listed dates.)


- Paper submission: Friday, April 10, 2020

- Notification to authors: Monday, May 11, 2020



-------------------------

Invited Speakers

-------------------------


(In alphabetical order)


- Alyosha Efros, UC Berkeley

- Ivan Laptev, INRIA

- Jitendra Malik, UC Berkeley / FAIR

- Ming-Yu Liu, NVIDIA Research

- Pierre Sermanet, Google Research

- More to be added



--------------------------------

Program Committee

--------------------------------


(In alphabetical order)


- Angjoo Kanazawa, UC Berkeley

- Anelia Angelova, Google Research

- Chen Sun, Google Research

- De-An Huang, Stanford University

- Hamed Pirsiavash, UMBC

- Jonghyun Choi, GIST / AI2

- Ruben Villegas, Adobe Research

- Xinlei Chen, FAIR

- Yannis Kalantidis, NAVER LABS Europe

- Yong Jae Lee, UC Davis

- Yusuf Aytar, DeepMind

- Ziwei Liu, CUHK



------------------

Organizers

------------------


- Yale Song, Microsoft Research

- Carl Vondrick, Columbia University

- Katerina Fragkiadaki, Carnegie Mellon University

- Honglak Lee, University of Michigan / Google Research

- Rahul Sukthankar, Google Research
