CFP: ICLR 2019 Workshop on Learning with Limited Labeled Data

Matthew Blaschko

Feb 25, 2019, 3:21:44 PM
to Machine Learning News

-----------------------------------------------------------------------------------------
CFP: ICLR 2019 Workshop on Learning with Limited Labeled Data
Monday May 6th, 2019, New Orleans
https://lld-workshop.github.io
-----------------------------------------------------------------------------------------

Modern representation learning techniques like deep neural networks have had a major impact on a wide range of tasks, achieving new state-of-the-art performances on benchmarks using little or no feature engineering. However, these gains are often difficult to translate into real-world settings because they usually require massive hand-labeled training sets. Collecting such training sets by hand is often infeasible due to the time and expense of labeling data; moreover, hand-labeled training sets are static and must be completely relabeled when real-world modeling goals change.

Increasingly popular approaches for addressing this labeled data scarcity include using weak supervision---higher-level approaches to labeling training data that are cheaper and/or more efficient, such as distant or heuristic supervision, constraints, or noisy labels; multi-task learning, to effectively pool limited supervision signal; data augmentation strategies to express class invariances; and the introduction of other forms of structured prior knowledge. An overarching goal of such approaches is to use domain knowledge and data resources provided by subject matter experts, but to solicit them in higher-level, lower-fidelity, or more opportunistic ways.

In this workshop, we examine these increasingly popular and critical techniques in the context of representation learning. While approaches for representation learning in the large labeled sample setting have become increasingly standardized and powerful, the same is not the case in the limited labeled data and/or weakly supervised case. Developing new representation learning techniques that address these challenges is an exciting emerging direction for research. Learned representations have been shown to lead to models robust to noisy inputs, and are an effective way of exploiting unlabeled data and transferring knowledge to new tasks where labeled data is sparse.

In this workshop, we aim to bring together researchers approaching these challenges from a variety of angles. Specifically, these include:

Learning representations to reweight and de-bias weak supervision
Representations that enforce structured prior knowledge (e.g., invariances, logic constraints)
Learning representations for higher-level supervision from subject matter experts
Representations for zero- and few-shot learning
Representation learning for multi-task learning in the limited labeled data setting
Representation learning for data augmentation
Theoretical or empirically observed properties of representations in the above contexts

The second LLD workshop continues the conversation from the 2017 NIPS Workshop on Learning with Limited Labeled Data. Our goal is to once again bring together researchers interested in this growing field. With funding support, we are excited to again offer best paper awards for the most outstanding submissions. We will also have seven distinguished and diverse speakers representing a range of machine learning perspectives, a panel on the most promising directions for future research, and a discussion session on developing new benchmarks and other evaluations for these techniques.

The LLD workshop organizers are committed to fostering a strong sense of inclusion for all groups at this workshop. To support this concretely, in addition to the paper awards, funding will be available for several travel awards specifically for traditionally underrepresented groups.

Submission Instructions

Please format your papers using the standard ICLR 2019 style files. The page limit is 4 pages (excluding references). Please do not include author information; submissions must be anonymized. All accepted papers will be presented as posters (poster dimensions: TBC), with exceptional submissions also presented as oral talks.

We are also pleased to announce that our sponsors, Google, LumenAI, and SFDS, will provide both best paper awards (2 awards of $500 each) and travel support for exceptional submissions.

Important Dates

Submission Deadline: March 15, 2019, 11:59pm GMT+1
Notification of Acceptance: April 5, 2019
Camera-ready Deadline for Accepted Papers: TBA
Workshop: May 6, 2019

Please check the workshop website (https://lld-workshop.github.io) for more details.

Organizers

Isabelle Augenstein (University of Copenhagen)
Stephen Bach (Brown)
Matthew Blaschko (KU Leuven)
Eugene Belilovsky (University of Montreal)
Edouard Oyallon (INRIA)
Anthony Platanios (CMU)
Alex Ratner (Stanford)
Christopher Re (Stanford)
Xiang Ren (University of Southern California)
Paroma Varma (Stanford)

--
Prof. dr. Matthew B. Blaschko
Center for Processing Speech and Images
Department of Electrical Engineering
KU Leuven
Kasteelpark Arenberg 10, bus 2441
3001 Leuven
Belgium

http://homes.esat.kuleuven.be/~mblaschk/
