Durk Kingma, a PhD student with Max Welling at the University of Amsterdam, is visiting Columbia. He has graciously agreed to give a last-minute talk tomorrow. Durk has made some impressive contributions to the field of stochastic gradient-based variational inference. It should be a very interesting talk!
Details are below.
Time: Wednesday, 10am
Location: 750 CEPSR at Columbia University
Title: Efficient Inference and Learning with Intractable Posteriors? Yes, Please.
Abstract:
We discuss a number of recent advances in Stochastic Gradient Variational Inference (SGVI).
- Blending ideas from variational inference, deep learning, and stochastic optimization, we derive an algorithm for efficient gradient-based inference and learning with intractable posteriors (see the first sketch below).
- Applied to deep latent-variable models with neural networks as components, this yields the Variational Auto-Encoder (VAE), a principled Bayesian auto-encoder. We show that VAEs can be useful for semi-supervised learning and analogical reasoning.
- Further improvements are realized through a new variational bound with auxiliary variables. Markov Chain Monte Carlo (MCMC) can be cast as variational inference with auxiliary variables; this interpretation allows principled optimization of MCMC parameters to greatly improve MCMC efficiency.
- When applying SGVI to global parameters, we show how an order-of-magnitude reduction in variance can be achieved through local reparameterization while retaining parallelizability (see the second sketch below). Gaussian Dropout can be cast as a special case of such SGVI with a scale-free prior; this variational interpretation of dropout allows for simple optimization of dropout rates.
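For readers who want a concrete picture before the talk, here is a minimal, hypothetical sketch (in PyTorch, my choice of framework, not Durk's code) of the reparameterization trick and the VAE objective from the first two bullets: an inference network outputs a Gaussian posterior over latent variables, sampling is rewritten as a deterministic function of noise so gradients can flow through it, and the negative evidence lower bound (ELBO) is minimized by stochastic gradient descent. All layer sizes and names are illustrative assumptions.

```python
# Minimal VAE sketch: reparameterization trick + ELBO (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        # Encoder (inference network): q(z|x) = N(mu(x), diag(sigma(x)^2))
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder (generative network): p(x|z)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients pass through mu and logvar despite the sampling step.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        logits = self.dec(z)
        # Negative ELBO = reconstruction term + analytic KL(q(z|x) || N(0, I)).
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (recon + kl) / x.size(0)

# Usage: one stochastic gradient step on a random batch of binary "images".
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784).bernoulli()
loss = model(x)
loss.backward()
opt.step()
```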
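And a second hypothetical sketch of the local reparameterization idea from the last bullet: with a factorized Gaussian posterior over the weights of a linear layer, rather than sampling one weight matrix and sharing its noise across a minibatch, one can sample the resulting pre-activations directly, which lowers gradient variance while keeping the computation parallelizable. Again, shapes and variable names are my own assumptions, not the speaker's implementation.

```python
# Weight-level vs. local reparameterization for a Bayesian linear layer
# with an independent Gaussian posterior over each weight (illustrative only).
import torch

torch.manual_seed(0)
B, D_in, D_out = 128, 300, 100
x = torch.randn(B, D_in)
w_mu = torch.randn(D_in, D_out) * 0.1          # posterior means
w_logvar = torch.full((D_in, D_out), -6.0)     # posterior log-variances

# (1) Weight-level reparameterization: draw one weight matrix per minibatch.
#     Every example shares the same noise, which correlates the per-example
#     gradient contributions and inflates the variance of the estimator.
w = w_mu + torch.exp(0.5 * w_logvar) * torch.randn_like(w_mu)
y_global = x @ w

# (2) Local reparameterization: the pre-activations are themselves Gaussian,
#     (x W)_j ~ N(sum_i x_i mu_ij, sum_i x_i^2 sigma_ij^2),
#     so sample them directly with independent noise per example.
act_mu = x @ w_mu
act_var = (x ** 2) @ torch.exp(w_logvar)
y_local = act_mu + torch.sqrt(act_var) * torch.randn_like(act_mu)

# Both estimators have the same mean, but (2) has much lower variance and
# never materializes a per-example weight sample.
print(y_global.mean().item(), y_local.mean().item())
```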