MLSys Seminar Episode 67: Tri Dao [Wed, 3:30-4:20pm PT]

Dan Fu
Jan 10, 2023, 11:01:32 AM
to stanford-ml...@googlegroups.com
Hi everyone,

After a few months' hiatus, the MLSys Seminar is coming back this quarter! We'll be running a special foundation models limited series, in partnership with CS 324. Our first episode (number sixty-seven) streams this Wednesday from 3:30-4:20pm PT.

We'll be hearing from Tri Dao, who will be talking about fast and memory-efficient attention with FlashAttention. The format is a 20-minute talk followed by a 30-minute podcast-style discussion, during which the live audience can ask questions.

Guest: Tri Dao
Title: FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Abstract: Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware -- accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3× speedup on GPT-2 (seq. length 1K), and 2.4× speedup on long-range arena (seq. length 1K-4K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
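If you'd like a feel for the tiling idea before the talk, here is a rough NumPy sketch of exact attention computed one key/value block at a time with an online (streaming) softmax, so the full N x N attention matrix is never materialized. This is an illustration only, not the FlashAttention CUDA kernel; the block size, function name, and the sanity check at the end are our own choices.

import numpy as np

def tiled_attention(Q, K, V, block_size=64):
    """Exact softmax(Q K^T / sqrt(d)) V, computed block-by-block over keys/values
    so the N x N attention matrix is never formed in full."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    row_max = np.full(N, -np.inf)   # running max of logits per query row
    row_sum = np.zeros(N)           # running softmax denominator per query row

    for start in range(0, N, block_size):
        Kb = K[start:start + block_size]   # one key block (think: tile in SRAM)
        Vb = V[start:start + block_size]
        S = (Q @ Kb.T) * scale             # logits for this block only

        new_max = np.maximum(row_max, S.max(axis=1))
        correction = np.exp(row_max - new_max)     # rescale old accumulators
        P = np.exp(S - new_max[:, None])           # block softmax numerator

        row_sum = row_sum * correction + P.sum(axis=1)
        out = out * correction[:, None] + P @ Vb
        row_max = new_max

    return out / row_sum[:, None]

# Sanity check against the naive quadratic-memory implementation.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
naive = np.exp((Q @ K.T) / np.sqrt(32))
naive = (naive / naive.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), naive)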

This work received the Best Paper Award at the Hardware-Aware Efficient Training Workshop at ICML 2022. Within just six months of its release, FlashAttention has become widely used at some of the largest research labs and companies.


Bio: Tri Dao is a PhD student in Computer Science at Stanford, co-advised by Christopher Ré and Stefano Ermon. He works at the interface of machine learning and systems, and his research interests include sequence models with long-range memory and structured matrices for compact deep learning models. His work received the ICML 2022 Outstanding Paper Runner-Up Award.

See you all there!

Best,
Dan

Dan Fu
Jan 11, 2023, 6:46:42 PM
to stanford-ml...@googlegroups.com

--

Dan Fu
PhD Candidate in Computer Science
Stanford University
da...@cs.stanford.edu / @realDanFu
www.danfu.org
