Stanford MLSys Seminar Episode 49: Beidi Chen [Th, 1.35-2.30pm PT]


Karan Goel

Jan 5, 2022, 11:00:48 AM
to stanford-ml...@googlegroups.com
Hi everyone,

We're back with the forty-ninth episode of the MLSys Seminar on Thursday from 1.35-2.30pm PT. 

We'll be joined by Beidi Chen, who will talk about developing sparse techniques for training deep learning models. The format is a 30-minute talk followed by a 30-minute podcast-style discussion, where the live audience can ask questions. (A small illustrative code sketch of the butterfly idea is included after the abstract and bio below.)

Guests: Beidi Chen
Title: Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Abstract: Overparameterized neural networks generalize well but are expensive to train. Ideally, one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple and promising approach to achieve this, but there remain challenges as existing methods struggle with accuracy loss, slow training runtime, or difficulty in sparsifying all model components. The core problem is that searching for a sparsity mask over a discrete set of sparse matrices is difficult and expensive. To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices. As butterfly matrices are not hardware efficient, we propose simple variants of butterfly (block and flat) to take advantage of modern hardware. Our method (Pixelated Butterfly) uses a simple fixed sparsity pattern based on flat block butterfly and low-rank matrices to sparsify most network layers (e.g., attention, MLP). We empirically validate that Pixelated Butterfly is 3x faster than butterfly and speeds up training to achieve favorable accuracy-efficiency tradeoffs. On the ImageNet classification and WikiText-103 language modeling tasks, our sparse models train up to 2.5x faster than the dense MLP-Mixer, Vision Transformer, and GPT-2 medium with no drop in accuracy.
Bio: Beidi Chen is a postdoctoral scholar in the Department of Computer Science at Stanford University, working with Dr. Christopher Ré. Her research focuses on large-scale machine learning and deep learning. Specifically, she designs and optimizes randomized algorithms (algorithm-hardware co-design) to accelerate large machine learning systems for real-world problems. Prior to joining Stanford, she received her Ph.D. in the Department of Computer Science at Rice University, advised by Dr. Anshumali Shrivastava. She received a BS in EECS from UC Berkeley in 2015. She has held internships at Microsoft Research, NVIDIA Research, and Amazon AI. Her work has won Best Paper awards at LISA and IISA. She was selected as a Rising Star in EECS by MIT and UIUC.
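
For anyone who wants a concrete picture of the butterfly matrices the abstract refers to, here is a minimal NumPy sketch (my own illustration, not code from the paper): an n x n butterfly matrix is a product of log2(n) very sparse factors, each with only two nonzeros per row, so applying it costs O(n log n) rather than O(n^2). Roughly speaking, the block and flat variants discussed in the talk trade this fine-grained scalar pattern for larger blocks and fewer factors so the sparsity maps well onto modern hardware.

import numpy as np

def butterfly_factor_mask(n, stride):
    # Sparsity mask of one butterfly factor: row i may only touch
    # column i and its butterfly partner, column i XOR stride,
    # so each factor has just 2n nonzeros.
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, i] = True
        mask[i, i ^ stride] = True
    return mask

def random_butterfly_product(n, rng):
    # Product of log2(n) random butterfly factors (n must be a power of 2).
    out = np.eye(n)
    stride = 1
    while stride < n:
        factor = rng.standard_normal((n, n)) * butterfly_factor_mask(n, stride)
        out = factor @ out
        stride *= 2
    return out

rng = np.random.default_rng(0)
B = random_butterfly_product(8, rng)
# Each factor is extremely sparse, yet the product is generically dense;
# this expressiveness is why butterfly products can represent transforms
# like the FFT and Hadamard, and why they form a good superset to optimize over.
print(np.count_nonzero(B), "nonzeros out of", B.size)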

See you all there!

Best,
Karan

Karan Goel

Jan 6, 2022, 4:22:55 PM
to stanford-ml...@googlegroups.com
Reminder: this is in 10 minutes!