Dear colleagues,
Please find below the full program of the next session of the “All About that Bayes” seminar series, taking place on 13 February 2026!
The seminar is open to everyone, but please confirm your attendance by registering via this link. Looking forward to seeing many of you there!
Best regards,
Isabelle Albert & Kaniav Kamary
On behalf of the Specialized Group “Bayesian Statistics” of the French Statistical Society (SFdS)
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Theme: Bayesian Mixture Models
Date: Friday, 13 February 2026
Time: 2:00 – 5:00 PM
Location: Y. Cauchois Room, Institut Henri Poincaré (IHP) - Sorbonne Université / CNRS, Paris
Speakers:
Darren Wraith (School of Public Health and Social Work, Centre for Data Science, Queensland University of Technology)
Title: Efficient adaptive importance sampling in challenging geometric and high dimensional settings
Abstract: A number of adaptive importance sampling approaches are based on proposals using Gaussian or Student-t mixture distributions which adapt to a target density. In a Bayesian setting, and for problems in cosmology, computational biology, or climate modeling, the computation of the likelihood can be expensive. Standard mixture-based approaches may generate redundant samples when mixture components overlap, repeatedly evaluating the likelihood in the same posterior regions without improving the estimate. We examine different
approaches which can be used to reduce likelihood evaluations while maintaining unbiasedness
and low variance. One approach is to use a point-level thinning mechanism that assigns
retention probabilities to drawn samples based on responsibility calculations before the likelihood
evaluation. Points falling in densely covered regions are stochastically discarded, with importance
weights corrected by the inverse retention probability to preserve unbiasedness. We also explore
approaches which force components to remain separated from each other by applying a repulsive penalty to the objective function. Both methods preserve the key advantages of mixture-based
adaptive approaches while dramatically reducing computational cost. We discuss theoretical
properties including unbiasedness under thinning, convergence of the mixture adaptation,
computational complexity, and practical implementation strategies including variance reduction
techniques. The methods are applicable to any setting where likelihood or posterior evaluations
dominate computational cost. We illustrate the performance of the approaches in challenging
geometric and high dimensional settings.
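
For readers curious about the thinning idea ahead of the talk, a minimal self-contained Python sketch is given below. It is not the speaker's implementation: the toy target density, the retention rule based on component responsibilities, and the floor on the retention probability are all illustrative assumptions; only the general mechanism (thin before evaluating the likelihood, then correct the importance weights by the inverse retention probability) follows the description in the abstract.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy stand-in for an expensive unnormalised posterior (assumption for illustration).
def log_target(x):
    return -0.5 * (x[..., 0] ** 2 / 4.0 + (x[..., 1] + 0.5 * x[..., 0] ** 2 - 2.0) ** 2)

# Mixture proposal: two deliberately overlapping Gaussian components.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [0.5, 0.5]])
covs = np.array([np.eye(2), np.eye(2)])

def sample_mixture(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in comp])

def mixture_logpdf_and_responsibilities(x):
    # Per-component log densities, mixture log density, and responsibilities.
    comp_logpdf = np.stack([
        multivariate_normal(means[k], covs[k]).logpdf(x) + np.log(weights[k])
        for k in range(len(weights))
    ], axis=1)
    log_q = np.logaddexp.reduce(comp_logpdf, axis=1)
    resp = np.exp(comp_logpdf - log_q[:, None])
    return log_q, resp

n = 5000
x = sample_mixture(n)
log_q, resp = mixture_logpdf_and_responsibilities(x)

# Point-level thinning BEFORE any target evaluation. Heuristic retention rule
# (assumption): points whose responsibilities are spread over overlapping components
# lie in densely covered regions and are kept with lower probability.
overlap = 1.0 - resp.max(axis=1)
p_keep = np.clip(1.0 - overlap, 0.2, 1.0)   # floor keeps the estimator well defined
keep = rng.uniform(size=n) < p_keep

# Evaluate the (expensive) target only on retained points; the 1/p_keep factor
# corrects the importance weights so the unnormalised estimator stays unbiased.
log_w = log_target(x[keep]) - log_q[keep] - np.log(p_keep[keep])
w = np.exp(log_w - log_w.max())

# Self-normalised importance sampling estimate of E[x] using only retained points.
est = (w[:, None] * x[keep]).sum(axis=0) / w.sum()
print(f"kept {keep.sum()} of {n} points; E[x] estimate: {est}")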