Julyan Arbel - 13/03 -1:30p.m. - Understanding Priors in Bayesian Neural Networks at the Unit Level


sylvain.lecorff
Mar 4, 2020
to All About That Bayes
Dear all,

On Friday, March 13th at 1:30 p.m., Julyan Arbel, researcher at Inria Grenoble - Rhône-Alpes in the Mistis team (https://www.julyanarbel.com/), will give a talk at CMLA, ENS Paris-Saclay.

The talk will take place in the D'Alembert building, room Condorcet (if you cannot access the building, please meet in front of it at 1:30 p.m.).


Title

Understanding Priors in Bayesian Neural Networks at the Unit Level


Abstract

We investigate deep Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. Bayesian neural networks with Gaussian priors are well known to induce an L2 ("weight decay") regularization. Our results characterize a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units before and after activation becomes increasingly heavy-tailed with the depth of the layer. We show that first layer units are Gaussian, second layer units are sub-exponential, and units in deeper layers are characterized by sub-Weibull distributions. Our results provide new theoretical insight on deep Bayesian neural networks, which we corroborate with simulation experiments.
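For those curious to see the heavy-tail effect numerically before the talk, here is a minimal simulation sketch (not code from the paper; the sample count, width, and depth are arbitrary illustrative choices). It draws i.i.d. Gaussian weights, propagates a fixed input through ReLU layers, and prints the excess kurtosis of one pre-activation unit per layer: roughly 0 at the first layer (Gaussian), and increasingly positive with depth (heavier tails).

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 20_000  # Monte Carlo draws of the weights (illustrative choice)
width = 20          # layer width (illustrative choice)
depth = 4           # number of hidden layers (illustrative choice)

# Fixed input: the randomness comes entirely from the Gaussian weight prior.
x = rng.standard_normal(width)
h = np.tile(x, (n_samples, 1))

for layer in range(1, depth + 1):
    # Independent N(0, 1/width) weight matrix per Monte Carlo sample,
    # scaled so units stay on a comparable scale across layers.
    W = rng.standard_normal((n_samples, width, width)) / np.sqrt(width)
    pre = np.einsum("sij,sj->si", W, h)  # pre-activation units
    h = np.maximum(pre, 0.0)             # ReLU nonlinearity
    u = pre[:, 0]                        # prior samples of a single unit
    kurt = np.mean((u - u.mean()) ** 4) / u.var() ** 2 - 3  # excess kurtosis
    print(f"layer {layer}: excess kurtosis = {kurt:.2f}")
```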


All relevant information on the seminar may be found here: https://sites.google.com/view/all-about-that-bayes/

Best,
Alain Durmus, Pierre Gloaguen, Julien Stoehr and Sylvain Le Corff