[SING] 01 July 2021 10:00-11:00am: Jacob Andreas [MIT] / Implicit Representations of Meaning in Neural Language Models


Pan, Liangming

Jun 27, 2021, 10:52:52 PM
to si...@wing.comp.nus.edu.sg
Dear Singapore NLP interest groups: 

Here is the information for the sixth talk of the WING-NUS NLP Seminar, a series of invited talks this summer by rising stars in NLP. The seminar website is at https://wing-nus.github.io/nlp-seminar/. Please join us at the Zoom address below if you're interested. The talk and slides may be made available on the website afterwards, but please do join us live to avoid disappointment.

The talk is open to all, so feel free to circulate this announcement to others who might be interested.

WING-NUS NLP Seminar 2021 - Talk 6
   
Title: Implicit Representations of Meaning in Neural Language Models
Speaker: Jacob Andreas
Assistant Professor
EECS and CSAIL at Massachusetts Institute of Technology (MIT)

Date/Time: 01 July 2021, Thursday, 10:00 AM to 11:00 AM
Venue: Join Zoom Meeting
http://bit.ly/knmnyn-zoom-nus
ZOOM Room ID: 770 447 8736, PIN: 3244
Chaired by: A/P Min-Yen Kan, School of Computing
(ka...@comp.nus.edu.sg)
   
ABSTRACT:
Neural language models, which place probability distributions over sequences of words, produce vector representations of words and sentences that are useful for language processing tasks as diverse as machine translation, question answering, and image captioning. These models’ usefulness is partially explained by the fact that their representations robustly encode lexical and syntactic information. But the extent to which language model training also induces representations of meaning remains a topic of ongoing debate. I will describe recent work showing that language models—trained on text alone, without any kind of grounded supervision—build structured meaning representations that are used to simulate entities and situations as they evolve over the course of a discourse. These representations can be linearly decoded into logical representations of world state (e.g. discourse representation structures). They can also be directly manipulated to produce predictable changes in generated output. Together, these results suggest that (some) highly structured aspects of meaning can be recovered by relatively unstructured models trained on corpus data.
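For readers curious what "linearly decoded" means in practice, the result described above is in the spirit of a linear probing setup: hidden states are read out of a pretrained language model and a linear classifier is fit to predict some aspect of the described world state. The snippet below is only a rough, hypothetical sketch of that general idea (the model choice "gpt2", the toy discourses, and the entity property being probed are all illustrative assumptions, not the speaker's actual data or code).

```python
# Minimal sketch of a linear probe over LM hidden states (illustrative only;
# not the setup used in the work presented in this talk).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Toy discourses: is "the box" open (1) or closed (0) at the end?
texts = [
    ("You open the box.", 1),
    ("You close the box.", 0),
    ("You open the box, then close it.", 0),
    ("You close the box, then open it.", 1),
]

features, labels = [], []
with torch.no_grad():
    for text, label in texts:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state   # shape: (1, seq_len, dim)
        features.append(hidden[0, -1].numpy())       # final-token representation
        labels.append(label)

# If a linear classifier like this generalizes to held-out discourses,
# the entity's state is (approximately) linearly decodable from the LM.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))
```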

BIO-DATA:
Jacob Andreas is the X Consortium Assistant Professor at MIT. His research focuses on building intelligent systems that can communicate effectively using language and learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has been the recipient of a Sony Faculty Innovation Award, an MIT Kolokotrones teaching award, and paper awards at NAACL and ICML.


Please contact me if you have any questions. Contact info: 

Liangming Pan (潘亮铭)
Web Information Retrieval / Natural Language Processing Group (WING)
National University of Singapore

Thanks very much. Looking forward to your participation.

Thanks, 
Liangming


