ContinualAI Reading Group: Catastrophic Forgetting in Deep Graph Networks


Keiland Cooper

Apr 6, 2021, 6:45 AM
to Continual Learning & AI News
Hi All,

This Friday, 09-04-2021, at 5:30pm CEST, for the ContinualAI Reading Group, Federico Errica (University of Pisa) will present the paper:

"Catastrophic Forgetting in Deep Graph Networks"
Abstract: In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performance when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation of the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy so far, and that it also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.
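For anyone who would like a concrete picture of what a replay strategy looks like before Friday, here is a minimal sketch of experience replay for continual learning. It is illustrative only and not the paper's implementation; the reservoir buffer, its capacity, and the plain PyTorch classifier setup are my own assumptions.

```python
# Minimal sketch of experience replay for continual learning (illustrative only;
# not the implementation used in the paper).
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-style buffer holding a bounded sample of past (x, y) examples."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.data = []   # stored examples from previous tasks
        self.seen = 0    # total number of examples offered to the buffer

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: keep each seen example with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, x, y, buffer, replay_batch=32):
    """One optimization step on the current task, mixed with replayed examples."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if buffer.data:
        rx, ry = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(rx), ry)  # rehearse old tasks
    loss.backward()
    optimizer.step()
    # Store the current examples for future rehearsal.
    for xi, yi in zip(x, y):
        buffer.add(xi.detach(), yi.detach())
    return loss.item()
```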

The event will be moderated by: Vincenzo Lomonaco


---------------
- Eventbrite event (to save it in your calendar and get reminders): Click Here
- Microsoft Teams: click here to join 
- YouTube recordings of the previous sessions: https://www.youtube.com/c/ContinualAI
---------------

Feel free to share this email with anyone interested and invite them to subscribe to this mailing list here: https://groups.google.com/g/continualai

Please also contact me if you want to speak at one of the next sessions!

Looking forward to seeing you all there!

All the best,
Keiland Cooper

University of California
ContinualAI Co-founding Board Member