Tara Sainath slides online; next talk: Neural Machine Translation (NMT) by Bart van Merriënboer, 9/23

Colin Raffel

Sep 17, 2015, 1:33:31 PM9/17/15
to cu-neu...@googlegroups.com
Hi all, for those interested, slides from Tara Sainath's talk yesterday are available here.

Our next meeting will be next Wednesday, September 23rd; we're hosting a talk by Bart van Merriënboer from the MILA/LISA lab at the University of Montreal. He'll be talking about their recent work on using recurrent neural networks and attention-based models for machine translation. An abstract follows. Please forward this to anyone you think would be interested. Note that we will be meeting in CEPSR 620 next week. See you there!


Neural Machine Translation (NMT)
Bart van Merriënboer
Wednesday, September 23rd
4pm, CEPSR 620

The field of machine translation has been dominated for years by n-gram-based statistical machine translation models. In 2014 Google and MILA (the Montreal deep learning group) simultaneously introduced end-to-end neural network models for translation. At WMT15, MILA set the state of the art for English-to-German translation. I'll talk briefly about traditional statistical machine translation and how NMT improves on these methods. I'll highlight the differences between Google's sequence-to-sequence models and Montreal's attention-based mechanisms, briefly drawing a parallel to other attention-based models such as Jason Weston's memory networks. I'll illustrate the practical implementation of these models with a brief overview and some code snippets from Fuel and Blocks, the data processing and deep learning libraries developed at MILA.
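(For those unfamiliar with the attention mechanism mentioned above, here is a minimal, framework-free sketch of the core idea: the decoder scores each encoder state against its current state, normalizes the scores with a softmax, and uses the resulting weights to form a "context" vector. This uses simple dot-product scoring for brevity; the Montreal model actually uses a small learned network for scoring, and all names here are illustrative, not from Blocks or Fuel.)

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(encoder_states, decoder_state):
    # Score each encoder state against the decoder state, normalize,
    # and return the attention-weighted sum of encoder states.
    scores = [dot(h, decoder_state) for h in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: three encoder states of dimension 2.
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
context, weights = attention(enc, dec)
```

At each decoding step the weights change as the decoder state changes, so the model can "look at" different source words while producing each target word; this is the key difference from the fixed-length sentence encoding in the sequence-to-sequence approach.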
