This Thursday, 21-10-2021, at 5:30pm CEST, for the ContinualAI Seminar, Lucas Caccia (MILA; FAIR) will present the following paper:
Abstract: In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Previous work in this setting often tries to reduce catastrophic forgetting by limiting changes in the space of model parameters. In this work we instead focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones. Starting from a popular approach, experience replay, we consider metric-learning-based loss functions which, when adjusted to appropriately select negative samples, can effectively nudge the learned representations to be more robust to new future classes. We show that this selection of negatives is in fact critical for reducing representation drift of previously observed data. Motivated by this, we further introduce a simple adjustment to the standard cross-entropy loss used in prior experience replay that achieves a similar effect. Our approach directly improves the performance of experience replay in this setting, obtaining state-of-the-art results on several existing benchmarks in online continual learning, while remaining efficient in both memory and compute.
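To give a flavour of the cross-entropy adjustment the abstract alludes to: one way to keep new classes from disrupting the representations of old ones is to restrict the softmax to the classes actually present in the incoming batch, so the loss cannot push down the logits of absent, previously learned classes. The PyTorch sketch below illustrates that general idea only; it is not the paper's exact recipe, and the names `masked_cross_entropy` and `present_classes` are hypothetical.

```python
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits, labels, present_classes):
    # Additive mask: -inf on classes absent from the incoming batch,
    # 0 on present ones, so the softmax ignores absent classes and
    # their logits receive no gradient. (Illustrative sketch only.)
    mask = torch.full_like(logits, float("-inf"))
    mask[:, present_classes] = 0.0
    return F.cross_entropy(logits + mask, labels)

# Example: a batch containing only classes 2 and 3 out of 10.
logits = torch.randn(4, 10, requires_grad=True)
labels = torch.tensor([2, 3, 2, 3])
loss = masked_cross_entropy(logits, labels, [2, 3])
loss.backward()  # gradients on the masked class columns are zero
```

In a replay setting, one would typically combine such a masked loss on the incoming stream with an ordinary cross-entropy on the replayed buffer samples; the talk should make the precise formulation clear.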
Please feel free to contact me if you want to speak at one of the next sessions!
Looking forward to seeing you all there!
All the best,
University of California
ContinualAI Co-founding Board Member