OxCSML seminar: Spencer Frei - Learning linear models in-context with transformers

Hai Dang Dau

Oct 5, 2023, 11:53:37 AM
to oxcsml...@googlegroups.com
Dear all,

Our first OxCSML seminar of the term takes place next week. Spencer Frei will talk to us about learning linear models in-context with transformers. Full details are below. Looking forward to seeing you there.

Kind regards,
Saif and Hai-Dang

==============
Time: 14.00 - 15.00 UK time
Date: Friday 13 Oct 2023
Place: Department of Statistics, University of Oxford. Room LG.03 (Small Lecture Theatre)
Zoom option: https://zoom.us/j/91237398500?pwd=OXJZUVJkcW4rNTd5YW8zbnVzVTBlUT09

Title: Learning linear models in-context with transformers

Abstract:
Attention-based neural network sequence models such as transformers have the capacity to act as supervised learning algorithms: They can take as input a sequence of labeled examples and output predictions for unlabeled test examples.  Indeed, recent work by Garg et al. has shown that when training GPT2 architectures over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares.  Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of in-context learning of linear predictors for a transformer with a single linear self-attention layer trained by gradient flow.  We show that despite the non-convexity of the underlying optimization problem, gradient flow with a random initialization finds a global minimum of the objective function.  Moreover, when given a prompt of labeled examples from a new linear prediction task, the trained transformer achieves small prediction error on unlabeled test examples.  We further characterize the behavior of the trained transformer under distribution shifts.  
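
For those new to this setting, here is a minimal numpy sketch (an illustration under stated assumptions, not the speaker's code) of the kind of estimator a single linear self-attention layer can implement on an in-context linear regression prompt: a prediction of the form yhat = x_q^T W (1/n) sum_i y_i x_i, where for isotropic Gaussian inputs the trained weight matrix W is, up to scaling, close to the identity, which we assume below.

import numpy as np

# Minimal sketch of in-context linear regression via a single linear
# self-attention layer (illustrative; W = identity is an assumption
# motivated by the isotropic-covariate case, not trained weights).
rng = np.random.default_rng(0)
d, n = 5, 200                     # feature dimension, prompt length

w_star = rng.normal(size=d)       # task vector for this prompt
X = rng.normal(size=(n, d))       # labeled in-context inputs
y = X @ w_star                    # noiseless labels y_i = <w*, x_i>
x_q = rng.normal(size=d)          # unlabeled query example

W = np.eye(d)                     # assumed weight matrix (see above)
yhat = x_q @ (W @ (X.T @ y) / n)  # one-gradient-step-style prediction

print(f"prediction {yhat:.3f} vs. target {x_q @ w_star:.3f}")

Because the empirical covariance (1/n) X^T X concentrates around the identity, this prediction is close to the true label; characterizing when and why gradient flow finds weights of this kind is the subject of the talk.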

Bio:
Spencer Frei is an Assistant Professor of Statistics at UC Davis.  His research is on the foundations of deep learning, including topics related to benign overfitting, implicit regularization, and large language models.  Prior to joining UC Davis, he was a postdoctoral fellow at UC Berkeley hosted by Peter Bartlett and Bin Yu.  He was named a Rising Star in Machine Learning by the University of Maryland in 2022 and was a co-organizer of the 2022 Deep Learning Theory Workshop and Summer School at the Simons Institute for the Theory of Computing.  He received his Ph.D. in Statistics from UCLA in 2021 under the co-supervision of Quanquan Gu and Ying Nian Wu.