Discrepancy between posteriors from INLA and brms


Klaus Oberauer

Mar 9, 2026, 6:49:41 AM
to R-inla discussion group
Hi,
I'm new to INLA and wanted to test whether it produces the same results as brms. I simulated data from a linear model with 2 predictors (one between, one within subjects) and applied brms and INLA with the same priors. The two methods produce posteriors with matching means, but the posterior SDs for the within-subjects predictor and the interaction were much narrower from INLA than from brms. Any ideas why that is?
I attach the R script in case this is useful.  
All the best
Klaus

LinearModel.INLA.brms.R

Elias T. Krainski

Mar 9, 2026, 7:15:36 AM
to R-inla discussion group
Hi, 

You are not fitting the same model. In brm(dv ~ iv1 * iv2 + (1 + iv1 || subj)), you specify a random intercept and a random slope, whereas with INLA you specify only a random intercept. 

You need an f() term for that, something like f(subj_cp, iv1, ...), where subj_cp is a copy of subj (INLA needs unique index names).
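A sketch of what the matching specification might look like (variable names dv, iv1, iv2, and subj are taken from the thread; dat is hypothetical, and the hyperpriors are omitted):

```r
# Hypothetical sketch: random intercept plus random slope for iv1 in INLA,
# matching brm(dv ~ iv1 * iv2 + (1 + iv1 || subj), ...) with uncorrelated
# intercept and slope. INLA needs a unique index name for each f() term:
dat$subj_cp <- dat$subj
formula.inla <- dv ~ iv1 * iv2 +
  f(subj, model = "iid") +         # random intercept per subject
  f(subj_cp, iv1, model = "iid")   # random slope for iv1 per subject
```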

BTW: You have 'SD' in both lines 38 and 40 (one may be "Sd" instead?), and it seems that you didn't use interPrior in the brm() call.

Elias

--
You received this message because you are subscribed to the Google Groups "R-inla discussion group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to r-inla-discussion...@googlegroups.com.
To view this discussion, visit https://groups.google.com/d/msgid/r-inla-discussion-group/62f11414-279e-49b4-a2b5-98b72b27af03n%40googlegroups.com.

Klaus Oberauer

Mar 9, 2026, 9:57:59 PM
to R-inla discussion group
Dear Elias,
thank you for your response; this has been super helpful. I had misunderstood how to configure random slopes in INLA, but I think I have now figured it out. The posteriors from brms and INLA now converge reasonably well (those from INLA are still a bit narrower). I attach my updated script because it might be useful for others.
All the best
Klaus
LinearModel.INLA.brms.R

Helpdesk (Haavard Rue)

Mar 10, 2026, 2:58:49 AM
to Klaus Oberauer, R-inla discussion group
With a Gaussian likelihood there are no approximations in the latent field; only
the hyperparameters and the integration are approximated.

I would suggest fixing all hyperparameters and checking; the results for the latent
field should be identical.

Then you can un-fix them one by one (or in blocks) and see what happens.

Running the identical same model is actually more challenging than one might
think.
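A minimal sketch of what fixing hyperparameters might look like (the precision values are hypothetical; initial is specified on the log-precision scale, as is standard in INLA):

```r
# Hypothetical sketch: fix the random-effect precision so the latent
# field can be compared directly between implementations.
formula.fixed <- dv ~ iv1 * iv2 +
  f(subj, model = "iid",
    hyper = list(prec = list(initial = log(10), fixed = TRUE)))
# The observation-noise precision can be fixed the same way:
# result <- inla(formula.fixed, data = dat,
#   control.family = list(hyper = list(prec = list(initial = log(4),
#                                                  fixed = TRUE))))
```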
Håvard Rue
he...@r-inla.org

Elias T. Krainski

Mar 10, 2026, 3:06:01 AM
to R-inla discussion group
But now you have two random slopes in INLA, but only one in brms.


Klaus Oberauer

Mar 10, 2026, 3:40:18 AM
to R-inla discussion group
Yes, you're right. IV2 should not have a random slope because it is a between-subjects variable. I removed the random slope for IV2 from the INLA model. The outcome is the same (I would have been surprised if not, as this random slope cannot do anything). INLA still produces somewhat narrower posteriors than brms. I'll try to figure out why by experimenting a bit. If you have any idea, please let me know. Thanks for all your help!
Klaus

Helpdesk (Haavard Rue)

Mar 10, 2026, 3:56:43 AM
to Klaus Oberauer, R-inla discussion group
The reason is given in my previous response ("On Tue, 10 Mar 2026 at 09:58,
Helpdesk (Haavard Rue)").

You may also use the model 'intslope' (see inla.doc('intslope')), which is
sometimes helpful.

Fix all hyperparameters, and then add them back sequentially...
Håvard Rue
he...@r-inla.org

Klaus Oberauer

Mar 10, 2026, 4:57:28 AM
to R-inla discussion group
Dear Haavard,
sorry, I must have failed to understand your previous response. Are you saying that INLA produces narrower posteriors than brms because it does not involve approximations (whereas brms does)? Why should that lead to narrower posteriors in INLA? 
You suggested fixing all hyperparameters - do you mean in INLA or in brms? And how do I fix the hyperparameters beyond what I'm doing already?
All the best, 
Klaus

Klaus Oberauer

Apr 23, 2026, 1:18:33 PM
to Helpdesk, R-inla discussion group
Dear INLA team,
I ran a simulation-based calibration (based on Talts et al., 2018) to find out more about the discrepancy in posteriors from brms and INLA. The result (see figure) is that brms is well calibrated whereas INLA posteriors are overly narrow (resulting in an excess of simulation runs where the true value has an extremely low or extremely high rank among the posterior draws). I attach the R script so you can check whether I made a mistake.
All the best
Klaus
Calibration.INLA.brms.PriorsINLA.jpeg
LinearModel.INLA.Calibration.R

Finn Lindgren

Apr 23, 2026, 1:32:26 PM
to Klaus Oberauer, Helpdesk, R-inla discussion group
Hi,
As you note in the code comments, the prior you set for the random-effects precision (gamma on the precision) isn't the same as the one you simulate from, and not the same as the one you use in brms (exponential on the std.dev.?).
From what I recall, the pc prior for the precision is an exponential on the sd, so it should be possible to use that and figure out the parameters needed to make the prior identical. I suspect the tail behaviours of the priors you use are quite different (and the data matches brms; have you tried the opposite version, where the data matches the prior you use in inla?). As a consequence, I think you're just seeing the effect of data/model misspecification, where one estimation method has the true model and the other is given the wrong model.
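A sketch of how the two priors could be matched (hypothetical; pc.prec places an exponential prior on the sd with rate lambda = -log(alpha)/u, where P(sd > u) = alpha):

```r
# Hypothetical sketch: to match brms's exponential(4) prior on the sd,
# choose u and alpha so that -log(alpha)/u = 4, e.g. u = 1 and
# alpha = exp(-4), giving rate lambda = -log(exp(-4))/1 = 4:
pc.prior <- list(prec = list(prior = "pc.prec", param = c(1, exp(-4))))
# then e.g. f(subj, model = "iid", hyper = pc.prior)
```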
Finn


Klaus Oberauer

Apr 23, 2026, 2:49:36 PM
to Finn Lindgren, Helpdesk, R-inla discussion group
Dear Finn,
thank you for your comment. The result I sent you (and the code) is from a simulation in which I use the INLA priors to sample the random-effects precision. I also ran the opposite version, sampling from the brms priors, and I get the same result.

Finn Lindgren

Apr 24, 2026, 5:16:49 AM
to Klaus Oberauer, Helpdesk, R-inla discussion group
Ah, sorry, I missed that part of the code. (But then the question is why the histogram for brms doesn't detect that discrepancy...)

Can you send a version of the code we can run? I.e., one that includes rlgamma and the other functions from your local files:
if (computer==1) source("C:/mlsim/R/toolbox/BayesFunctions.R")
if (computer==2) source("C:/daten/R/bayes/toolbox/BayesFunctions.R")


Finn
--
Finn Lindgren
email: finn.l...@gmail.com

Klaus Oberauer

Apr 24, 2026, 5:34:17 AM
to Finn Lindgren, Helpdesk, R-inla discussion group

Sorry, I had forgotten to include the BayesFunctions – attached now.

The rlgamma function is from the VGAM package.

I think the reason why brms is not bothered by the discrepancy between its prior and the sampling function is that the prior on random effects does not affect the posterior on fixed effects.

All the best

Klaus

BayesFunctions.R

Finn Lindgren

Apr 24, 2026, 10:47:55 AM
to Klaus Oberauer, Helpdesk, R-inla discussion group
Hi,

It's not clear to me what distribution
rlgamma(N, a, b)
actually generates (which parameters are which?), but I'm also convinced you should generate gamma variables;
maybe rgamma(N, 15, 0.3), but that's unclear as I'm not sure how the gamma parameters relate.
The inla precision priors are specified for log(precision), so that the distribution for the precision is gamma; you should therefore simulate precisions from some gamma distribution,
and use that same distribution as the prior in the estimation.

Second, you have a fixed observation noise parameter "sd" but you don't explicitly set the prior for that in the inla() estimates, so it will use a default prior instead.
You need to add
  control.family=list(hyper=list(prec=rand.priors))
if the observation noise precision should have the same prior as the random effects component precisions (which I _think_ is what the brms setup does).

Third, the gamma samples look nothing like samples from Exp(4); the variability is _much_ smaller, so that's another difference between your brms and inla setups.

Check for example this:
plot.ecdf(rlgamma(1e4,15,0.3)^-0.5,col=1,xlim=c(0,1.5))
plot.ecdf(rgamma(1e4,15,0.3)^-0.5,col=2,add=TRUE)
plot.ecdf(rgamma(1e4,1,4),col=3,add=TRUE)

I'm rerunning with precision samples matching the prior, and control.family set, but I would be reluctant to investigate further until I'm convinced the rest of the experimental setup is correct.
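Putting these fixes together might look like the following sketch (the gamma parameters 15 and 0.3 and the name rand.priors are taken from the thread; the rest is hypothetical):

```r
# Hypothetical sketch: simulate the precision from a gamma distribution
# and use the same gamma as the prior in estimation ("loggamma" in
# INLA's terminology, since it is specified on the log-precision scale).
prec.sim <- rgamma(1, shape = 15, rate = 0.3)   # simulated true precision
rand.priors <- list(prior = "loggamma", param = c(15, 0.3))
# result <- inla(formula.inla, data = dat,
#   control.family = list(hyper = list(prec = rand.priors)))
```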

Finn

Finn Lindgren

Apr 24, 2026, 12:40:37 PM
to Klaus Oberauer, Helpdesk, R-inla discussion group
Hi Klaus,

With the correction from rlgamma() to rgamma(), and with the observation precision prior set, it appears the discrepancy is gone. The reason brms got it "right" despite the simulation error is that you set a much, much wider precision prior there, whereas the priors set for inla were very narrow, so there was a clear mismatch between simulations and priors.

I've attached an ECDF plot for the ranks based on 36 iterations, showing no systematic difference between the distributions. (I didn't save it, but the corresponding plot for the original code showed the same discrepancy as your histograms indicated.)
(Both the brms and inla results show a slight deviation from uniformity for low ranks, but that may go away with more iterations, and in any case brms and inla are indistinguishable:
> ks.test(rank.brms[1:35],rank.inla[1:35])

	Exact two-sample Kolmogorov-Smirnov test

data:  rank.brms[1:35] and rank.inla[1:35]
D = 0.11429, p-value = 0.9794 
alternative hypothesis: two-sided

Finn
brms_inla_comparison.pdf

Klaus Oberauer

Apr 24, 2026, 12:54:07 PM
to Finn Lindgren, Helpdesk, R-inla discussion group

Hi Finn,

thanks for explaining how the prior on random effects works in INLA. And I take your point about the prior on the trial-by-trial noise, which I had left implicit.

Good to see that the discrepancy is now resolved. Would you be willing to share your code so that I can make sure to apply INLA correctly to (linear) mixed-effects models?
All the best

Finn Lindgren

Apr 24, 2026, 1:01:55 PM
to Klaus Oberauer, Helpdesk, R-inla discussion group
Sure, here's my modified file.

Since the "sd" is fixed and not simulated from its prior, I'm not _entirely_ sure the resulting ranks should be uniform, but at least now they should behave the same for brms and inla (except for potential issues due to the Exp(4) prior in the brms setup).

You probably will want to more carefully consider the actual prior for the precisions, as now it simulates from a pretty narrow range.

Finn
LinearModel.INLA.Calibration.R

Klaus Oberauer

Apr 29, 2026, 3:42:49 AM
to Finn Lindgren, Helpdesk, R-inla discussion group

Dear Finn,

thanks a lot. I reproduced your result with 2000 simulations, and INLA now produces a uniform rank distribution, showing good calibration.

I also tried a different approach, explicitly adding to INLA a prior on the trial-by-trial noise within cells that matches the sampling distribution for that noise in the data generation (an exponential distribution with rate = 1). Again, INLA (as well as brms) shows good calibration.

It looks like the default prior of INLA on trial-by-trial noise led to the miscalibration in my previous simulations, whereas the default prior for trial-by-trial noise in brms did not. I’m curious now: What is the default prior in INLA? In brms it is a half-Student-t(3, 0, 0.25) on SD.
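For anyone curious, the default can be inspected from within R; a sketch, assuming the INLA package is installed (the exact list structure may differ between versions):

```r
# Sketch: inspect INLA's default hyperprior for the Gaussian observation
# precision. The default is specified on the log-precision scale (a
# "loggamma" prior, i.e. a gamma prior on the precision itself); the
# prior name and parameters appear in the $prior and $param entries.
library(INLA)
hyper <- inla.models()$likelihood$gaussian$hyper
str(hyper)
```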
