------------------------------------------------
### CALL FOR PAPERS ###
ICML 2019 Workshop on "Understanding and Improving Generalization in Deep Learning". June 14th or 15th, 2019, Long Beach, California, USA. Accepted submissions will be presented as posters; some will also be selected for spotlight presentations.
### WORKSHOP WEBSITE ###
https://sites.google.com/view/icml2019-generalization
### KEY DATES ###
Submission Deadline: May 23rd, 11:59 PM (PST)
Author Notification: June 6th (earlier notification is possible if requested when submitting; email workshop chairs)
Final Submission: June 10th
Workshop Date: June 14th or 15th
### INVITED SPEAKERS ###
● Mikhail Belkin (OSU)
● Chelsea Finn (Berkeley, Google)
● Sham Kakade (UW)
● Jason Lee (USC)
● Aleksander Mądry (MIT)
● Daniel Roy (UToronto)
### TOPICS ###
Topics of interest in this workshop include but are not limited to:
● Implicit and explicit regularization, and the role of optimization algorithms in generalization
● Architecture choices that improve generalization
● Empirical approaches to understanding generalization
● Generalization bounds and empirical criteria to evaluate generalization bounds
● Robustness: generalizing under distributional shift (a.k.a. dataset shift)
● Generalization in the context of representation/unsupervised learning, transfer learning and reinforcement learning: definitions and empirical approaches
### SUBMISSION INSTRUCTIONS ###
Submissions must be made through the workshop's EasyChair page. All submissions must be in ICML's official PDF format, using the workshop's style file (available on the workshop's website). Submissions are limited to at most 4 pages, excluding references, and may include an optional supplementary appendix. Submissions must be anonymized in accordance with the double-blind policy and must not have been previously published at a peer-reviewed venue. Additional details on formatting and the submission website are available at https://sites.google.com/view/icml2019-generalization .
### WORKSHOP DESCRIPTION ###
Generalization is a cornerstone of machine learning, and one of the keys to its practical success. Deep networks generalize well on supervised learning tasks even when heavily over-parameterized, and this is one reason for their enormous impact. Despite recent research efforts in this direction, the problem of understanding generalization remains far from solved.
In the most basic setting of deep supervised learning, the generalization gap is the difference between a model's error on the training set and its error on a test set drawn from the same distribution. Current research challenges include understanding the data-dependence of this gap, the role of increasing network depth, and the role of implicit and explicit regularization.
The problem becomes harder when the test and training distributions differ. A mathematically well-defined setup is that of adversarial examples, which has seen a flurry of recent research. When the test distribution shifts even further (e.g., perturbations of large norm in input space), as in domain adaptation problems, even a mathematical definition of generalization still eludes the community. Interesting research questions are which inductive biases in our current models make them sensitive to such perturbations, and how to design better ones.
Going beyond supervised learning, the formulation and measurement of generalization in the context of deep unsupervised and self-supervised learning, transfer learning, and reinforcement learning is gaining momentum. However, well-accepted definitions and empirical practices remain wide-open research questions.
In this workshop, we bring together prominent researchers in generalization theory and practice to discuss the current state of the art and promising future research directions in all areas of deep learning.
### ORGANIZING COMMITTEE ###
● Behnam Neyshabur (NYU)
● Hossein Mobahi (Google)
● Dilip Krishnan (Google)
● Peter Bartlett (Berkeley)
● Nati Srebro (TTIC, Google)
● Dawn Song (Berkeley)