WING NLP Seminar: Yi Tay (Google AI) / Minh-Thang Luong (Google Brain) // The curious case of self-training: from vision to language and beyond


Min Yen KAN

Feb 24, 2022, 1:01:48 AM
to Singapore NLP Group, wing...@gmail.com
Dear all:

My group will be hosting two senior NLP research scientists from Google next week.  It will be a hybrid event, so we hope those of you in Singapore will consider coming to NUS in person for the event (sorry, we are not catering though...).  Please do register if you are attending physically; the event will be held at LT15 (Directions: https://www.streetdirectory.com/api/map/world.cgi?level=13&lon=103.7733560000000000&lat=1.2955005280000000)

Register for physical attendance here (takes only 1 minute; just asks for name and email):


For online-only attendance, there is no need to register; please connect via:
11am-12n Talk 1: Yi Tay / Transformer Memory as a Differentiable Search Index
12n-1pm Talk 2: Minh-Thang Luong / The curious case of self-training: from vision to language and beyond

Speaker: Yi Tay
Title: Transformer Memory as a Differentiable Search Index

In this talk, I will discuss our latest work from Google AI, the "differentiable search index" (DSI). DSI demonstrates that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. DSI is a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.
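
For the curious, the core recipe is easy to sketch. Below is a minimal, illustrative Python sketch of the DSI idea using a Hugging Face T5 checkpoint; the model size, toy corpus, docid scheme, and training loop are assumptions made for illustration, not the paper's actual configuration.

from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Indexing: memorize each document under an arbitrary string docid.
indexing_pairs = [
    ("the cat sat on the mat", "doc_001"),
    ("transformers are sequence-to-sequence models", "doc_002"),
]
# Retrieval supervision: queries paired with their relevant docids.
retrieval_pairs = [("what did the cat sit on", "doc_001")]

model.train()
for text, docid in indexing_pairs + retrieval_pairs:
    batch = tokenizer(text, return_tensors="pt")
    labels = tokenizer(docid, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # ordinary seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: "retrieval" is just generating a docid string from the query.
# (A real system would constrain decoding to valid docids, e.g. with a trie.)
model.eval()
query = tokenizer("what did the cat sit on", return_tensors="pt")
pred = model.generate(**query, max_new_tokens=8)
print(tokenizer.decode(pred[0], skip_special_tokens=True))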

Bio: Yi Tay is a Senior Research Scientist and Tech Lead at Google AI. Yi is mainly an ML/NLP researcher with a keen focus on Transformer models. His research has earned him the ICLR 2021 Best Paper Award and Best Paper Award runner-ups at WSDM 2020 and WSDM 2021. He also sometimes serves as an Area Chair or Senior PC member for top-tier conferences. Before joining Google, Yi earned his PhD from NTU Singapore, where he also won the best thesis award. To date, Yi has published quite a lot of papers but is now more interested in retweets than peer-reviewed papers. Homepage: https://vanzytay.github.io/

Speaker: Minh-Thang Luong 
Title: The curious case of self-training: from vision to language and beyond

Abstract: In this talk, I will discuss the story of a classic semi-supervised learning approach, self-training, which has lately been quite successful. The talk starts with NoisyStudent, a simple self-training method that advanced the state of the art in vision at the time and yielded surprising improvements on robustness benchmarks. I'll then transition to NLP to talk about STraTA, an approach that combines self-training with task augmentation to achieve strong results in few-shot NLP settings, where only a handful of training examples are available.
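
To make the recipe concrete, here is a minimal, illustrative self-training loop in Python using scikit-learn; the classifier, confidence threshold, and round count are assumptions for illustration. NoisyStudent and STraTA build on this basic loop with large neural models, noise injection, and task augmentation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=3, threshold=0.9):
    """Iteratively grow the training set with confident pseudo-labels."""
    X, y = X_labeled, y_labeled
    for _ in range(rounds):
        # The teacher is fit on gold labels plus pseudo-labels so far.
        teacher = LogisticRegression(max_iter=1000).fit(X, y)
        probs = teacher.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = teacher.classes_[probs.argmax(axis=1)]
        X = np.vstack([X_labeled, X_unlabeled[confident]])
        y = np.concatenate([y_labeled, pseudo[confident]])
    # The final model trained on gold + pseudo-labels plays the "student".
    # NoisyStudent additionally injects noise (dropout, data augmentation)
    # and often uses an equal-or-larger student than the teacher.
    return LogisticRegression(max_iter=1000).fit(X, y)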

Bio: Thang Luong is currently a Staff Research Scientist at Google Brain. He obtained his PhD in Computer Science from Stanford University, where he pioneered the development of neural machine translation at both Google and Stanford. Dr. Luong has served as an area chair at ACL and NeurIPS and is an author of many scientific articles and patents with over 18K citations. He is a co-founder of the Meena project, now Google's LaMDA chatbot, and of VietAI, a non-profit organization that builds a community of world-class AI experts in Vietnam.

Cheers,

Min

--
Min-Yen KAN (Dr) :: Associate Professor :: National University of Singapore :: NUS School of Computing, AS6 05-12, 13 Computing Drive
Singapore 117417 :: +65 6516 1885 (DID) :: +65 6779 4580 (Fax) :: ka...@comp.nus.edu.sg (E) :: www.comp.nus.edu.sg/~kanmy (W)

Min Yen KAN

Feb 24, 2022, 1:05:53 AM
to Singapore NLP Group, wing...@gmail.com
Sorry, my bad; I forgot the most important part: the date!

Next Tuesday, 1 March @ 11am–1pm.

Feel free to forward to your contacts.


Cheers,

Min

--
Min-Yen KAN (Dr) :: Associate Professor :: National University of Singapore :: NUS School of Computing, AS6 05-12, 13 Computing Drive
Singapore 117417 :: +65 6516 1885 (DID) :: +65 6779 4580 (Fax) :: ka...@comp.nus.edu.sg (E) :: www.comp.nus.edu.sg/~kanmy (W)


Min Yen KAN

Mar 2, 2022, 5:20:40 AM
to Singapore NLP Group
Hi all, 

Recordings of both seminars are now available via https://wing-nus.github.io/nlp-seminar/

Slides are on Speakerdeck and videos are on YouTube.
