[ContinualAI Seminars]: "CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks"

Keiland Cooper

Oct 4, 2022, 2:07:25 AM
to Continual Learning & AI News
Hi All,

Join us this Thursday, 10-06-2022, at 15:30 UTC for the ContinualAI Seminar, where Tejas Srinivasan (University of Southern California) will present the paper:

Title: “CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks”
Link: https://arxiv.org/abs/2206.09059

Abstract: Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continually learning (CL) tasks as they arrive. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark to study the challenge of learning multimodal tasks in a CL setting, and to systematically evaluate how upstream continual learning can rapidly generalize to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.

- YouTube link:

- Microsoft Teams:

- YouTube recordings of the previous sessions:

Feel free to share this email with anyone interested and invite them to subscribe to this mailing list here: https://groups.google.com/g/continualai

Please also contact me if you would like to speak at one of the upcoming sessions!

Looking forward to seeing you all there!

All the best,
Keiland Cooper

University of California
ContinualAI Co-founder