I'm excited to introduce this monthly online
seminar on large language models for code.
We welcome Jacob Andreas from MIT as our first speaker (see the title and abstract below).
Register here for his talk on December 30 at 10 a.m. Eastern Time.
Best,
Nadav
---
Title: Learning to program by learning to read.
Abstract: In the age of deep networks, "learning" almost invariably means "learning from examples". Image classifiers are trained with large datasets of (labeled or unlabeled) images, machine translation systems with corpora of translated sentences, and robot policies with demonstrations. But when human learners acquire new concepts and skills, we often do so with richer supervision, especially in the form of language---we learn new concepts from exemplars accompanied by descriptions or definitions, and new skills from demonstrations accompanied by instructions. In natural language processing, recent years have seen a number of successful approaches to learning from task definitions and other forms of auxiliary language-based supervision. But these successes have been largely confined to tasks that also involve language as an input and an output. What will it take to make language-based training useful for other learning problems? In this talk, I'll present some recent results on using natural language to guide both search and library learning in inductive program synthesis, and discuss connections to the role of language in human concept learning.