Last CFP: ICML 2018 Workshop on Modern Trends in Nonconvex Optimization for Machine Learning


Chi Jin

May 20, 2018, 3:54:15 AM
to Machine Learning News
Modern Trends in Nonconvex Optimization for Machine Learning
ICML Workshop, Stockholm, Sweden, July 14-15, 2018

Website: https://sites.google.com/view/icml2018nonconvex/

We welcome paper submissions to the workshop on "Modern Trends in Nonconvex Optimization for Machine Learning" at ICML 2018.
This year, ICML workshops will be held as a part of the Federated Artificial Intelligence Meeting (FAIM).
Topics of interest include but are not limited to:
* Landscape analysis/design of nonconvex models
* New nonconvex algorithms
* Novel analysis/understanding of nonconvex methods
* Robust optimization
* Generalization performance of nonconvex methods
* Applications of nonconvex optimization to diverse domains such as vision, natural language processing, reinforcement learning, social networks, and health informatics

Accepted submissions will be presented as posters and/or spotlight talks.

IMPORTANT DATES:
* Submission deadline: May 22, 2018 (23:59 PDT)
* Acceptance notification: May 29, 2018

SUBMISSION INSTRUCTIONS:
All submissions must be in PDF format using the ICML style.
* Submissions are limited to 4 pages, excluding references.
* Submissions may include an optional supplementary appendix.
* Submissions should follow the ICML double-blind and dual-submission policies.

Formal submissions should be sent via email to icml2018...@gmail.com before the deadline.

An award (sponsored by Google) will be given to the best paper.

OVERVIEW:
Nonconvex optimization has become a core topic in modern machine learning (ML). A wide variety of ML models and subfields leverage nonconvex optimization, including deep learning, reinforcement learning, matrix/tensor factorization models, and probabilistic (Bayesian) models. Classically, nonconvex optimization was widely believed to be intractable due to worst-case complexity results. However, recently the community has seen rapid progress in both the empirical training of nonconvex models and the development of their theoretical understanding.

Advances on the theoretical side range from understanding the landscape of various nonconvex models to efficient algorithms in the offline, stochastic, parallel, and distributed settings that use zeroth-, first-, or second-order information. Recent guarantees not only ensure convergence to stationary points (points where the gradient vanishes), but also address the difficulties posed by spurious local minima and saddle points, both locally and globally. In parallel, the field has witnessed significant progress driven by practitioners: novel nonconvex models such as residual networks and LSTMs, together with techniques such as batch normalization and ADAM for accelerating their training, have become the empirical state of the art.
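As a toy illustration of the saddle-point escape results mentioned above (a minimal sketch, not taken from the workshop material or any particular paper; the objective and the names grad and perturbed_gd are purely illustrative), here is a small Python/NumPy example of perturbation-based gradient descent on f(x, y) = x^2 - y^2, whose only stationary point, the origin, is a strict saddle: plain gradient descent started there stalls, while a tiny random perturbation near stationary points lets the iterate escape along the negative-curvature direction.

import numpy as np

# Toy nonconvex objective f(x, y) = x^2 - y^2; its only stationary point,
# the origin, is a strict saddle (positive curvature in x, negative in y).
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def perturbed_gd(p0, lr=0.1, steps=50, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        if np.linalg.norm(grad(p)) < eps:
            # Near a stationary point: add a tiny random perturbation so the
            # iterate can leave the saddle along the negative-curvature direction.
            p = p + eps * rng.standard_normal(2)
        p = p - lr * grad(p)  # standard gradient step
    return p

print(perturbed_gd([0.0, 0.0]))  # moves away from the saddle: |y| grows while x stays near 0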

This workshop will bring together experts in machine learning, artificial intelligence, and optimization to tackle some of the conceptual and practical bottlenecks that are hindering progress and to explore new directions. Examples include (but are not limited to) implicit regularization, landscape design, homotopy methods, adaptive algorithms and robust optimization. The workshop hopes to facilitate cross-domain discussion and debate on topics such as these and to reshape this rapidly progressing field.

INVITED SPEAKERS:
* Yoshua Bengio (U Montreal)
* Coralia Cartis (Oxford)
* Dmitriy Drusvyatskiy (U Washington)
* Elad Hazan (Princeton)
* Sham Kakade (U Washington)
* Sergey Levine (UC Berkeley)
* Suvrit Sra (MIT)

ORGANIZING COMMITTEE:
* Anima Anandkumar (Caltech)
* Leon Bottou (Facebook)
* Chi Jin (UC Berkeley)
* Michael I. Jordan (UC Berkeley)
* Hossein Mobahi (Google)
* Katya Scheinberg (Lehigh)
