Stanford MLSys Seminar Episode 54: Ellie Pavlick [Th, 1.35-2.30pm PT]

Karan Goel

Feb 9, 2022, 7:19:26 PM2/9/22
to stanford-ml...@googlegroups.com
Hi everyone,

We're back with the fifty-fourth episode of the MLSys Seminar on Thursday from 1.35-2.30pm PT. 

We'll be joined by Ellie Pavlick, who will talk about symbolic analyses of neural network representations. The format is a 30 minute talk followed by a 30 minute podcast-style discussion, where the live audience can ask questions.

Guests: Ellie Pavlick
Title: Implementing Symbols and Rules with Neural Networks
Abstract: Many aspects of human language and reasoning are well explained in terms of symbols and rules. However, state-of-the-art computational models are based on large neural networks which lack explicit symbolic representations of the type frequently used in cognitive theories. One response has been the development of neuro-symbolic models which introduce explicit representations of symbols into neural network architectures or loss functions. In terms of Marr's levels of analysis, such approaches achieve symbolic reasoning at the computational level ("what the system does and why") by introducing symbols and rules at the implementation and algorithmic levels. In this talk, I will consider an alternative: can neural networks (without any explicit symbolic components) nonetheless implement symbolic reasoning at the computational level? I will describe several diagnostic tests of "symbolic" and "rule-governed" behavior and use these tests to analyze neural models of visual and language processing. Our results show that on many counts, neural models appear to encode symbol-like concepts (e.g., conceptual representations that are abstract, systematic, and modular), but not perfectly so. Analysis of the failure cases reveals that future work is needed on methodological tools for analyzing neural networks, as well as refinement of models of hybrid neuro-symbolic reasoning in humans, in order to determine whether neural networks' deviations from the symbolic paradigm are a feature or a bug.
Bio: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab, and a Research Scientist at Google. Her research focuses on building computational models of language that are inspired by and/or informative of language processing in humans. Currently, her lab is investigating the inner workings of neural networks in order to "reverse engineer" the conceptual structures and reasoning strategies that these models use, as well as exploring the role of grounded (non-linguistic) signals for word and concept learning. Ellie's work is supported by DARPA, IARPA, NSF, and Google.

See you all there!

Best,
Karan

Karan Goel

Feb 10, 2022, 4:21:34 PM2/10/22
to stanford-ml...@googlegroups.com
Reminder: we're starting in 15 minutes!