Dear all,
The whole team wishes you a happy new year! We have a lot of news to share. In February, we will have two seminars:
February 13, 16:00 - Elisabeth Gassiat (Université Paris Saclay). The seminar will take place at SCAI (Sorbonne Université, Campus Pierre et Marie Curie). It will also be available online via the Zoom link.
February 23, 16:00 - Ritabrata Dutta (University of Warwick). The seminar will take place at PariSanté Campus (2-10 Rue d'Oradour-sur-Glane, 75015 Paris).
Title: Bayesian Model Averaging with exact inference of likelihood-free Scoring Rule Posteriors
Abstract: A novel application of Bayesian Model Averaging to generative models parameterized with neural networks (GNN) and characterized by intractable likelihoods is presented. We leverage a likelihood-free generalized Bayesian inference approach with Scoring Rules. To tackle the challenge of model selection in neural networks, we adopt a continuous shrinkage prior, specifically the horseshoe prior. We introduce an innovative blocked sampling scheme, offering compatibility both with the Boomerang Sampler (a type of piecewise deterministic Markov process sampler) for exact but slower inference and with Stochastic Gradient Langevin Dynamics (SGLD) for faster yet biased posterior inference. This approach serves as a versatile tool bridging the gap between intractable likelihoods and robust Bayesian model selection within the generative modelling framework.
March 18-22. Master Class on Bayesian Asymptotics by Judith Rousseau (Université Paris Dauphine & University of Oxford)
The masterclass takes place at PariSanté Campus and consists of morning lectures and afternoon labs. Attendance is free, with compulsory registration before 11 March (since the building is not accessible without prior registration):
https://forms.office.com/pages/responsepage.aspx?id=3sTngckmMUWwdrcOLXWWbvXLoypfqzlAmVPXTOcay39UOVU4MzdVTFo4UDRESUNXT09JRFdaRlNXTC4u
The plan of the course is as follows:
- Part I: Parametric models. In this part, well- and mis-specified models will be considered.
- Asymptotic posterior distribution: asymptotic normality of the posterior, penalization induced by the prior, and the Bernstein von Mises theorem. Regular and nonregular models will be treated.
- Marginal likelihood and consistency of Bayes factors/model selection approaches.
- Empirical Bayes methods: asymptotic posterior distribution for parametric empirical Bayes methods.
- Part II: Nonparametric and semiparametric models
- Posterior consistency and posterior convergence rates: statistical loss functions using the theory initiated by L. Schwartz and developed by Ghosal and Van der Vaart, and results on less standard or less well-behaved losses.
- Semiparametric Bernstein von Mises theorems.
- Nonparametric Bernstein von Mises theorems and uncertainty quantification.
- Stepping away from pure Bayes approaches: generalized Bayes, one-step posteriors, and cut posteriors.
Best regards,
The All About that Bayes organising team