LightOn Research Workshop #3: The Future of Random Matrices (Friday May 24th, 2019) @IPGG, 6 Rue Jean Calvin, Paris

Igor Carron

May 23, 2019, 11:44:19 AM
to SMILE in Paris
Hello everyone,

Tomorrow, the day after NeurIPS' deadline, come enjoy LightOn's third Research Workshop: The Future of Random Matrices. It will take place from 1:30 pm to 4 pm on May 24th, 2019, at IPGG (6 Rue Jean Calvin), Paris, France (to register: https://www.meetup.com/LightOn-meetup/events/260564958/ ). The workshop should be streamed online on Nuit Blanche. The program is currently as follows.

 13:30 Start

Title: Differentially Private Compressive Learning - Large-scale learning with the memory of a goldfish
Abstract: Inspired by compressive sensing, Compressive Statistical Learning allows drastic volume and dimension reduction when learning from large/distributed/streamed data collections. The principle is to exploit random projections to compute a low-dimensional (nonlinear) sketch (a vector of random empirical generalized moments), in essentially one pass on the training collection. Sketches of controlled size have been shown to capture the information relevant to a given learning task, such as unsupervised clustering, Gaussian mixture modeling, or PCA. As a proof of concept, more than a thousand hours of speech recordings can be distilled into a sketch of only a few kilobytes, capturing enough information to estimate a Gaussian Mixture Model for speaker verification. The talk will highlight the main features of this framework, including statistical learning guarantees and differential privacy.
Joint work with Antoine Chatalic (IRISA, Rennes), Vincent Schellekens & Laurent Jacques (Univ Louvain, Belgium), Florimond Houssiau & Yves-Alexandre de Montjoye (Imperial College, London, UK), Nicolas Keriven (ENS Paris), Yann Traonmilin (Univ Bordeaux), and Gilles Blanchard (IHES)
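
For the curious, here is a minimal, purely illustrative Python/NumPy sketch of the core idea described in the abstract: compressing a dataset into a fixed-size vector of random empirical generalized moments (random Fourier moments here) in a single pass. The sketch size m, the frequency scale sigma, and the function names are assumptions made for illustration, not the speakers' actual code.

    # A hypothetical illustration of compressive learning's sketching step:
    # compress an (n, d) dataset into m complex random Fourier moments,
    # in one pass over the data, using a random projection matrix Omega.
    import numpy as np

    def compute_sketch(X, m, sigma, seed=None):
        """Return (sketch, Omega): the m random moments and the random frequencies."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Draw m random frequency vectors; this random matrix defines the sketch.
        Omega = rng.normal(scale=1.0 / sigma, size=(d, m))
        # Empirical average of exp(i * x^T Omega): one pass, O(m) memory.
        return np.exp(1j * X @ Omega).mean(axis=0), Omega

    # Usage: chunks of a distributed or streamed collection can be sketched
    # separately (with the same Omega) and merged by a weighted average.
    X = np.random.default_rng(0).normal(size=(10_000, 5))
    z, Omega = compute_sketch(X, m=200, sigma=1.0, seed=1)
    print(z.shape)  # (200,)

Learning (e.g., fitting a Gaussian mixture) then operates on z alone, which is what enables the drastic volume reduction mentioned above.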

Title: Scaling-up Large Scale Kernel Learning

15:05 Coffee break and cookies!

Title: Beyond backpropagation: alternative training methods for neural networks
Abstract: Backpropagation has long been the de facto choice for training neural networks. Modern paradigms are implicitly optimized for it, and numerous guidelines exist to ensure its proper use. Yet it is not without flaws: from preventing effective parallelisation of the backward pass to a lack of biological realism, issues abound. This has motivated the development of numerous alternative methods, most of which have failed to scale beyond toy problems like MNIST or CIFAR-10.
In this talk, we explore some recently developed training algorithms and try to explain why they have failed to match the gold standard that is backpropagation. In particular, we focus on feedback alignment methods and demonstrate a path to a better understanding of their underlying mechanics.
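
As a rough intuition for feedback alignment (a toy example written for this announcement, not code from the talk): the backward pass replaces the transpose of the forward weights with a fixed random matrix B, and the forward weights nevertheless learn. A tiny NumPy version on a made-up regression task, with all sizes and hyperparameters chosen arbitrarily:

    # A minimal feedback-alignment sketch: the error is propagated through a
    # fixed random matrix B instead of W2.T, as standard backprop would do.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d_in, d_h, d_out = 256, 10, 64, 1
    X = rng.normal(size=(n, d_in))
    y = np.sin(X[:, :1])                              # toy target

    W1 = rng.normal(scale=0.1, size=(d_in, d_h))
    W2 = rng.normal(scale=0.1, size=(d_h, d_out))
    B  = rng.normal(scale=0.1, size=(d_out, d_h))     # fixed random feedback weights

    lr = 0.05
    for step in range(500):
        h = np.tanh(X @ W1)                           # forward pass
        y_hat = h @ W2
        e = y_hat - y                                 # squared-loss error at the output

        # Backprop would compute e @ W2.T; feedback alignment uses the fixed B.
        delta_h = (e @ B) * (1 - h ** 2)

        W2 -= lr * h.T @ e / n
        W1 -= lr * X.T @ delta_h / n

    print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))

Because B never changes, the backward pass can in principle be decoupled from the forward weights, which is part of what makes such methods interesting beyond biological plausibility.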

16:10 End


Cheers,

Igor.
------------------------
Igor Carron, Ph.D. 
CEO and Co-Founder, LightOn, http://LightOn.io || Linkedin profile || Nuit Blanche, a technical blog || Co-organizer Machine Learning Paris meetup