MLSys Seminar Episodes 78 + 79: Yejin Choi, Jared Kaplan [Mon, Wed 3:30-4:20 pm PT]


Dan Fu

Mar 5, 2023, 8:33:29 PM
to stanford-ml...@googlegroups.com, cs-se...@lists.stanford.edu, ai-...@cs.stanford.edu, stanf...@googlegroups.com, dawn-i...@lists.stanford.edu
Hi everyone,

This week we'll have two episodes of the MLSys Seminar -- Monday 3:30-4:20pm PT, and Wednesday 3:30-4:20pm PT.

Monday will be Yejin Choi from UW, and Wednesday will be Jared Kaplan from Anthropic!

Livestream links:
Monday: https://www.youtube.com/watch?v=n4HakBqoCVg
Wednesday: https://www.youtube.com/watch?v=fqC3D-zNJUM
Talk details below!

Yejin Choi
Title: Common Sense: the Dark Matter of Language and Intelligence
Abstract: Scale appears to be the winning recipe on today's leaderboards. And yet, extreme-scale neural models are (un)surprisingly brittle and make errors that are often nonsensical and even counterintuitive. In this talk, I will argue for the importance of knowledge, especially commonsense knowledge, as well as inference-time reasoning algorithms, and demonstrate how smaller models developed in academia can still have an edge over larger industry-scale models, if powered with knowledge and/or reasoning algorithms.

Bio: Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2, overseeing the Mosaic project. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She is a MacArthur Fellow and a co-recipient of the NAACL Best Paper Award in 2022, the ICML Outstanding Paper Award in 2022, the ACL Test of Time Award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, the NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her Ph.D. in Computer Science at Cornell University and her BS in Computer Science and Engineering at Seoul National University in Korea.

Jared Kaplan
Title: AI Safety, RLHF, and Self-Supervision

Bio: Jared Kaplan is a co-founder of Anthropic and a professor at Johns Hopkins University. He spent the first 15 years of his career as a theoretical physicist before moving to work on AI, where his contributions include research on scaling laws in machine learning, GPT-3, Codex, and more recently AI safety work such as RLHF for helpful and harmless language assistants and Constitutional AI.

See you all there!

Best, Dan

Dan Fu

Mar 6, 2023, 6:21:40 PM
to stanford-ml...@googlegroups.com, cs-se...@lists.stanford.edu, ai-...@cs.stanford.edu, stanf...@googlegroups.com, dawn-i...@lists.stanford.edu
We're live with Yejin in 10 minutes!

Dan

Dan Fu

Mar 8, 2023, 6:20:10 PM
to stanford-ml...@googlegroups.com, cs-se...@lists.stanford.edu, ai-...@cs.stanford.edu, stanf...@googlegroups.com, dawn-i...@lists.stanford.edu
We're live with Jared in 10 minutes!

Dan

