OxCSML seminar this week: Antonio Orvieto on Friday 2-3pm

Hai Dang Dau

Nov 28, 2023, 2:10:11 AM
to oxcsml...@googlegroups.com


---------- Forwarded message ---------
From: Yee Whye Teh <y.w...@stats.ox.ac.uk>
Date: Mon, Nov 27, 2023 at 11:24 PM
Subject: [ML] OxCSML seminar: Antonio Orvieto on Friday 2-3pm
To: m...@maillist.ox.ac.uk <m...@maillist.ox.ac.uk>, oxc...@googlegroups.com <oxc...@googlegroups.com>


This Friday, Antonio Orvieto will be visiting and giving the OxCSML seminar at 2-3pm. He would be happy to meet with people here in Oxford on Friday. Please let me know if you are interested in meeting him, and when you would be available.


OxCSML Seminar

Dec 1, Friday 2-3pm

Small lecture theatre, Department of Statistics, and on Zoom


Speaker: Antonio Orvieto, ELLIS Institute Tübingen


Title: Long-range reasoning on graphs without attention


Abstract: Graph neural networks based on iterative one-hop message passing have been shown to struggle in harnessing information from distant nodes effectively. Conversely, graph transformers allow each node to attend to all other nodes directly, but suffer from high computational complexity and have to rely on ad-hoc positional encodings to bake in the graph inductive bias. In this talk, we present a new architecture that reconciles these challenges. Our approach stems from recent breakthroughs in long-range modeling provided by deep state-space models on sequential data (S4, LRU, etc.). For a given target node, our model aggregates nodes at different distances and uses a parallelizable linear recurrent unit (LRU) over the chain of distances to provide a natural encoding of its neighborhood structure. With no need for positional encodings, we empirically show that our model's performance is competitive with that of state-of-the-art graph transformers on various benchmarks, at a drastically reduced computational complexity. In addition, we show that our model is theoretically more expressive than one-hop message-passing neural networks.
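
For those unfamiliar with the LRU idea, below is a minimal, illustrative sketch (not the speaker's code) of the recipe the abstract describes: group a target node's neighbours by hop distance, pool the features at each distance, and scan a recurrence over that distance sequence. The function names, shapes, mean-pooling choice, and the simplified real-valued diagonal recurrence (standing in for the complex-valued, parallelizable LRU scan) are all assumptions made purely for illustration.

# Sketch only: hop-distance aggregation followed by a simplified diagonal
# linear recurrence. Written from scratch for this announcement; it is not
# the architecture presented in the talk, just the general shape of the idea.
import numpy as np

def hops_from(adj, target, max_hops):
    """BFS from `target`; returns one array of node indices per hop distance."""
    n = adj.shape[0]
    dist = np.full(n, -1)
    dist[target] = 0
    frontier, d = [target], 0
    while frontier and d < max_hops:
        nxt = []
        for u in frontier:
            for v in np.nonzero(adj[u])[0]:
                if dist[v] == -1:
                    dist[v] = d + 1
                    nxt.append(v)
        frontier, d = nxt, d + 1
    return [np.nonzero(dist == k)[0] for k in range(max_hops + 1)]

def lru_node_embedding(adj, X, target, max_hops, lam, B, C):
    """Mean-pool features per hop distance, then run a diagonal linear
    recurrence over the distance sequence (farthest hops first, so the
    nearest neighbourhood is attenuated least)."""
    groups = hops_from(adj, target, max_hops)
    seq = [X[g].mean(axis=0) if len(g) else np.zeros(X.shape[1]) for g in groups]
    h = np.zeros(B.shape[0])
    for x in reversed(seq):      # sequential stand-in for the parallel scan
        h = lam * h + B @ x      # elementwise decay + input projection
    return C @ h                 # readout to the embedding dimension

# Tiny usage example on the 4-node path graph 0-1-2-3.
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
X = np.random.randn(4, 8)                       # node features
lam = np.full(16, 0.9)                          # per-channel decay in (0, 1)
B, C = np.random.randn(16, 8), np.random.randn(8, 16)
print(lru_node_embedding(adj, X, target=0, max_hops=3, lam=lam, B=B, C=C).shape)
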


Zoom: https://zoom.us/j/91238172693?pwd=dzhIQUx1MG9XZHk1R3QvbUZrYXRGUT09



