Two Thesis Proposals tomorrow morning: (1) Cross-Lingual Parsing, (2) Kernel Approximation for Speech

Avner May

Nov 9, 2016, 2:44:23 PM
to mlcol...@googlegroups.com, cu-neu...@googlegroups.com
Hi all,

Tomorrow morning, Mohammad Rasooli and I, who both work with Professor Michael Collins, will be presenting our thesis proposals.

Time/Place
- CS Conference Room (CSB 453).
- 10 am: Mohammad Rasooli's Proposal (Advances in Cross-Lingual Syntactic Transfer)
- 12 pm: Avner May's Proposal (Kernel Approximation Methods for Acoustic Modeling)

Full abstracts are included below.

We hope some of you can make it!
Avner + Mohammad

========================
10 am: Mohammad Rasooli Thesis Proposal
Title: Advances in Cross-Lingual Syntactic Transfer
Abstract: Transfer methods have been shown to be effective alternatives for developing accurate dependency parsers in the absence of annotated treebanks in the target language of interest. They generally fall into two approaches: annotation projection, which projects annotations through translation data using supervised parsers in resource-rich languages, and direct transfer from resource-rich treebanks. In this proposal, we review our past work on improving both approaches with simple and scalable machine learning methods. For our ongoing and future work, we propose a more sophisticated neural-network parsing method that can selectively learn from trees in other treebanks and from projected trees:

    * We propose to learn a relevance score for each training sentence, so that the parser learns which sentences are trustworthy enough to train on (a rough sketch follows this list).
    * We propose to transfer noun-phrase information to guide the parser.
    * We propose to apply the relevance learning method to other natural language processing tasks such as part-of-speech tagging and semantic role labeling.
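For the curious, here is a minimal, hypothetical sketch (in PyTorch) of what per-sentence relevance learning could look like: each training sentence gets a learnable score that gates its contribution to the parsing loss, so noisy projected trees can be down-weighted. The names, the sigmoid gating, and the regularizer are illustrative assumptions, not the proposal's actual model.

    import torch

    # Hypothetical sketch, not the proposal's exact model: each training
    # sentence gets a learnable relevance score that scales its parsing
    # loss, letting the parser down-weight noisy projected trees.
    num_sentences = 10000
    relevance_logits = torch.zeros(num_sentences, requires_grad=True)

    def weighted_parser_loss(per_sentence_losses, sentence_ids):
        # per_sentence_losses: batch of parsing losses (assumed given);
        # sentence_ids: LongTensor of indices into the training set.
        weights = torch.sigmoid(relevance_logits[sentence_ids])  # in (0, 1)
        # The log term discourages the trivial solution of zeroing every
        # weight; its 0.01 coefficient is an arbitrary illustrative choice.
        return (weights * per_sentence_losses).mean() - 0.01 * torch.log(weights).mean()

    # The logits would be optimized jointly with the parser's parameters,
    # e.g. torch.optim.Adam([relevance_logits] + list(parser.parameters())).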

12 pm: Avner May Thesis Proposal
Title:  Kernel Approximation Methods for Acoustic Modeling
Abstract: In this proposal, we discuss our work on scaling kernel methods to acoustic model training for automatic speech recognition (ASR).  In order to circumvent the quadratic time complexity of traditional kernel methods, we employ the kernel approximation approach of Rahimi and Recht.  We run extensive experiments on several challenging ASR datasets, and compare performance with deep neural networks (DNNs).  In order to reduce the number of random features required for the kernel methods to attain strong performance, we propose an efficient feature selection algorithm, which leads to significant improvements, while also making the models more compact.  For future work, we propose exploring and developing metric learning techniques for this task, with the hope of developing kernel functions better tuned for acoustic modeling.
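For background, below is a minimal sketch of the Rahimi-Recht random Fourier feature map for the Gaussian (RBF) kernel, the approximation underlying this line of work. The dimensions and bandwidth are illustrative, and the proposal's feature selection algorithm is not shown.

    import numpy as np

    def random_fourier_features(X, num_features, sigma, seed=0):
        # Map inputs X (n x d) to random cosine features whose inner
        # products approximate the Gaussian kernel
        # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
        return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

    # Usage: Z @ Z.T approximates the n x n kernel matrix, so a linear
    # model trained on Z stands in for the exact kernel method.
    X = np.random.randn(500, 40)   # e.g. 40-dimensional acoustic frames
    Z = random_fourier_features(X, num_features=2000, sigma=5.0)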