I'm running a multilevel random-effects model on data from multiple studies, and I'm primarily interested in both the fixed and the random slopes.
Thanks to help from Håvard Rue (https://groups.google.com/g/r-inla-discussion-group/c/E8AIGDPz_-M/m/uagf6R01BQAJ), the model runs quite efficiently. However, when I compare the random slopes from this negative binomial model (see below) to the slopes I get when I analyse each dataset separately (mostly the same model structure, also in INLA), the random slopes are MUCH larger than the 'true' slopes, and for some datasets the sign of the slope is even flipped.
The two sets of slopes are correlated, though.

I've tried a tighter prior on the random-slope precision, but I get almost the same values.
Does anyone understand why this is happening? And what I can do about it?
Thanks
Here's my model specification:
# Prior on the random-effect precisions:
prior.prec <- list(prec = list(prior = "pc.prec", param = c(0.3, 0.01)))

fit <- inla(
  NumberRounded ~ cYear +
    f(Period_4INLA, model = "iid", hyper = prior.prec) +
    f(Datasource_4INLA, model = "iid", hyper = prior.prec) +
    f(DatasourceLoc_4INLA, model = "iid", hyper = prior.prec) +
    f(DatasourceLocPlot_4INLA, model = "iid", hyper = prior.prec) +
    f(Datasource_4INLAslope, iYear.scale, model = "iid", hyper = prior.prec) +
    f(DatasourceLoc_4INLAslope, iYear.scale, model = "iid", hyper = prior.prec) +
    f(DatasourceLocPlot_4INLAslope, iYear.scale, model = "iid", hyper = prior.prec) +
    f(iYear, model = "ar1", ## constr = TRUE,
      values = seq(val[1], val[2]),
      hyper = list(prec = list(prior = "pc.prec", param = c(sd.res, 0.01)),
                   rho  = list(prior = "pc.cor1", param = c(0.95, 0.5))),
      replicate = Plot_ID_rep),  # autoregression at plot level
  family = "nbinomial",
  offset = log(E),
  control.compute = list(config = TRUE, dic = TRUE, waic = TRUE, cpo = TRUE),
  control.predictor = list(link = 1),
  verbose = FALSE,
  quantiles = c(0.001, 0.01, 0.025, 0.05, 0.1, 0.3, 0.5,
                0.7, 0.9, 0.95, 0.975, 0.99, 0.999),
  num.threads = threads,
  control.fixed = list(prec = 1, prec.intercept = 1),
  data = completeDataPollinators
)
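For reference, this is roughly how I pull out the dataset-level slopes for the comparison (a sketch, assuming `fit` is the inla() result above, and that iYear.scale and cYear encode the same scaled year, so the iid slope terms are deviations around the fixed cYear slope):

## Sketch: reconstruct per-dataset slopes from the fit.
## Assumes the iid slope effects are deviations from the fixed slope,
## so the total slope per dataset is fixed + random.
fixed.slope <- fit$summary.fixed["cYear", "mean"]

ds.slopes <- fit$summary.random[["Datasource_4INLAslope"]]
ds.slopes$total.mean <- fixed.slope + ds.slopes$mean

## Posterior-mean deviation vs. reconstructed total slope per dataset:
ds.slopes[, c("ID", "mean", "total.mean")]
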