Stanford MLSys Seminar Episode 28: Richard Liaw [Th, 1-2pm PT]


Karan Goel

Jun 2, 2021, 11:01:25 AM
to stanford-ml...@googlegroups.com
Hi everyone,

We're back with the twenty-eighth episode of the MLSys Seminar on Thursday from 1-2pm PT. 

We'll be joined by Richard Liaw, who will talk about solving problems in distributed machine learning. The format is a 30-minute talk followed by a 30-minute podcast-style discussion, where the live audience can ask questions.

Guest: Richard Liaw
Title: Assorted Boring Problems in Distributed Machine Learning
Abstract: Much of the academic focus on “distributing/scaling up machine learning” treats it as synonymous with “training ever-larger supervised ML models like GPT-3 with more and more compute resources”. However, training is only a small part of the ML lifecycle. In this talk, I’ll focus on a couple of other machine learning problems that demand a large amount of compute resources, which may be a bit more “boring” but are equally (or arguably more!) important. I’ll cover a few problems that my collaborators and I have worked on, previously at UC Berkeley and now at Anyscale: abstractions for scalable reinforcement learning and building RLlib (ICML 18, ICLR 20), distributed hyperparameter tuning and dynamic resource allocation for hyperparameter tuning (SoCC 19, EuroSys 21), and Ray as a substrate for the next generation of ML platforms.
Bio: Richard Liaw is an engineer at Anyscale, where he leads a team building open-source machine learning libraries on top of Ray. He is on leave from the PhD program at UC Berkeley, where he worked in the RISELab, advised by Ion Stoica, Joseph Gonzalez, and Ken Goldberg. During his PhD, he was part of the Ray team, building scalable ML libraries on top of Ray.
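
If you'd like a concrete feel for this line of work before the talk, here is a minimal sketch of distributed hyperparameter tuning with Ray Tune, one of the libraries Richard works on. The objective function and search space below are made up purely for illustration, not taken from the talk:

# Illustrative only: a toy objective and search space.
from ray import tune

def objective(config):
    # Stand-in for a real training loop; reports a metric after each step.
    score = 0.0
    for step in range(10):
        score += config["lr"]  # pretend the learning rate drives the score
        tune.report(score=score)

# tune.run launches one trial per grid point, and Ray schedules the trials
# in parallel across whatever CPUs/GPUs are available.
analysis = tune.run(
    objective,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])},
)
print(analysis.get_best_config(metric="score", mode="max"))

Running this locally uses your machine's cores; pointed at a Ray cluster, the same script spreads trials across many machines, which is the kind of "boring but important" scaling problem the talk covers.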

See you all there!

Best,
Karan

Karan Goel

Jun 3, 2021, 3:52:33 PM
to stanford-ml...@googlegroups.com
Reminder: this is in 10 minutes!