Zero and One tricks for INLA?


Snoop Dogg

May 16, 2022, 4:29:53 PM
to R-inla discussion group

The zeros and ones tricks are used in BUGS to sample from a model whose likelihood is not among its built-in functions.

I have a general and simple question about INLA: is it possible to use similar tricks for sampling from the posterior of models not included in the basic functions of INLA?

In other words, if I can implement the log posterior of a model (up to a constant) that is not part of the basic functions of INLA, can I use a zeros or ones trick to apply the INLA machinery to this new model?
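For readers unfamiliar with the trick being asked about: the zeros trick works because a "fake" observation of 0 from a Poisson with mean phi = -log L + C contributes exp(-phi) = L * exp(-C) to the joint density, i.e. the desired likelihood up to a constant. A minimal numeric sketch (in Python rather than BUGS, with a Gaussian chosen as a hypothetical target likelihood) illustrates the identity:

```python
import numpy as np
from scipy.stats import norm, poisson

# Zeros trick: observe 0 from Poisson(phi_i) with
#   phi_i = -log L_i + C,
# where C is large enough that every phi_i > 0.
C = 100.0

# Hypothetical target likelihood: a standard Gaussian density at some data.
y = np.array([-0.5, 0.1, 1.2])
log_lik = norm.logpdf(y, loc=0.0, scale=1.0)

phi = -log_lik + C                     # Poisson means for the fake zeros
fake_contrib = poisson.logpmf(0, phi)  # log P(0 | phi) = -phi

# The fake-zero contribution equals the target log-likelihood minus C,
# so the posterior is unchanged up to a multiplicative constant.
assert np.all(phi > 0)
assert np.allclose(fake_contrib, log_lik - C)
```

The ones trick is the analogous construction with Bernoulli "successes" and probabilities proportional to the likelihood.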



May 27, 2022, 12:09:01 AM
to Snoop Dogg, R-inla discussion group
Hi, sorry for the late reply.

Very often there are many such binary variables in the model, like one for each observation, so you have to rely on sampling for those; hence you can combine INLA with MCMC (there are papers about this):

@article{gomezrubio2018,
  author  = {V. G{\'o}mez-Rubio and H. Rue},
  title   = {Markov Chain {M}onte {C}arlo with the {I}ntegrated {N}ested {L}aplace {A}pproximation},
  journal = {Statistics and Computing},
  year    = {2018}
}

@article{berild2022,
  author  = {M. O. Berild and S. Martino and V. G{\'o}mez-Rubio and H. Rue},
  title   = {Importance Sampling with the {I}ntegrated {N}ested {L}aplace {A}pproximation},
  journal = {Journal of Computational and Graphical Statistics},
  year    = {2022}
}

but the question is whether it is very efficient computationally... although sometimes this would be easy to implement.
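The INLA-within-MCMC idea in the references above can be sketched as a Metropolis sampler over the extra (e.g. binary) variables z, where each step would, in the real scheme, call R-INLA to obtain the approximate conditional log marginal likelihood log pi(y | z). The sketch below is a toy stand-in, with `log_marginal` a hypothetical score replacing the INLA call:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for log pi(y | z): in the real scheme this would
# refit the latent Gaussian model with R-INLA conditional on z and return
# its approximate log marginal likelihood. Here the score simply favours
# z = (1, 0, 1), so the sampler has a known target to recover.
target = np.array([1, 0, 1])
def log_marginal(z):
    return -3.0 * np.sum(z != target)

# Metropolis over the binary vector z with a single-bit-flip proposal.
# The proposal is symmetric, so the acceptance ratio reduces to the
# marginal-likelihood ratio.
z = np.zeros(3, dtype=int)
samples = []
for _ in range(2000):
    prop = z.copy()
    i = rng.integers(3)
    prop[i] = 1 - prop[i]
    if np.log(rng.random()) < log_marginal(prop) - log_marginal(z):
        z = prop
    samples.append(z.copy())

freq = np.mean(samples, axis=0)  # posterior inclusion frequencies for z
```

The computational question raised above is visible here: every proposal costs one full INLA fit, which is why efficiency depends heavily on how many such variables the model has.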


Håvard Rue