[ContinualAI Reading group]: "Rehearsal revealed: The limits and merits of revisiting samples in continual learning"

Keiland Cooper

May 4, 2021, 7:42:48 AM
to Continual Learning & AI News
Hi All,

This Friday, 07-05-2021, at 5.30pm CEST, for the ContinualAI Reading Group, Eli Verwimp will present the paper:

"Rehearsal Revealed: The Limits and Merits of Revisiting Samples in Continual Learning"
Abstract: Learning from non-stationary data streams and overcoming catastrophic forgetting still poses a serious challenge for machine learning research. Rather than aiming to improve state-of-the-art, in this work we provide insight into the limits and merits of rehearsal, one of continual learning's most established methods. We hypothesize that models trained sequentially with rehearsal tend to stay in the same low-loss region after a task has finished, but are at risk of overfitting on its sample memory, hence harming generalization. We provide both conceptual and strong empirical evidence on three benchmarks for both behaviors, bringing novel insights into the dynamics of rehearsal and continual learning in general. Finally, we interpret important continual learning works in the light of our findings, allowing for a deeper understanding of their successes.
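For anyone new to the method under discussion: rehearsal (also called experience replay) interleaves a small memory of past samples with each new training batch, so the model keeps seeing old tasks while learning new ones. Below is a minimal PyTorch sketch of the idea; the toy model, buffer size, replay batch size, and reservoir replacement policy are all illustrative assumptions, not the paper's experimental setup.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model and hyperparameters; all values here are illustrative
# assumptions, not the paper's configuration.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

MEMORY_SIZE = 200   # fixed-capacity rehearsal buffer
REPLAY_BATCH = 16   # replayed samples mixed into each step
memory = []         # list of (x, y) pairs from past tasks
seen = 0            # total samples offered to the buffer

def store(x_batch, y_batch):
    # Reservoir sampling: every sample seen so far has equal
    # probability of residing in the bounded buffer.
    global seen
    for xi, yi in zip(x_batch, y_batch):
        seen += 1
        if len(memory) < MEMORY_SIZE:
            memory.append((xi, yi))
        else:
            j = random.randrange(seen)
            if j < MEMORY_SIZE:
                memory[j] = (xi, yi)

def train_step(x, y):
    # Mix the current batch with a random draw from memory,
    # then take an ordinary SGD step on the combined batch.
    if memory:
        replay = random.sample(memory, min(REPLAY_BATCH, len(memory)))
        x = torch.cat([x, torch.stack([xi for xi, _ in replay])])
        y = torch.cat([y, torch.stack([yi for _, yi in replay])])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Two synthetic "tasks" stand in for a non-stationary stream.
for task in range(2):
    for _ in range(100):
        x = torch.randn(32, 10)
        y = torch.randint(0, 2, (32,))
        train_step(x, y)
        store(x, y)

Note how the same small buffer is revisited at every step of every later task; this is exactly the overfitting risk on the sample memory that the abstract hypothesizes can harm generalization.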

The event will be moderated by: Vincenzo Lomonaco


---------------
- Eventbrite event (to save it in your calendar and get reminders): Click Here
- Microsoft Teams: Click Here to join
- YouTube recordings of the previous sessions: https://www.youtube.com/c/ContinualAI
---------------

Feel free to share this email with anyone interested and invite them to subscribe to this mailing list here: https://groups.google.com/g/continualai

Please also contact us if you want to speak at one of the next sessions!

Looking forward to seeing you all there!

All the best,
Keiland Cooper

University of California
ContinualAI Co-founding Board Member