Thanks,
yes, you are correct. I read your post as a comment only, so I am
following up with my own comment on it. Yes, it is a cool trick.
Here are two models:
A. y ~ 1 + x, with family = "gaussian"
B. y ~ 1 + x + f(idx, model = "iid"), with idx = 1:n and
   family = "gaussian" with a fixed high precision.
The difference is that the linear predictor in A is 1 + x, while in B
it is 1 + x + f(idx), which is *the observed data* if y[idx[i]] is
observed, and the predictive distribution for a new observation if
y[idx[i]] = NA. So what you need to do is to replicate the model.
This simple(r) example demonstrates this.
n = 100
x = rnorm(n)
y = 1 + x + rnorm(n, sd = 0.1)

## model A: plain linear regression
r = inla(y ~ 1 + x,
         data = data.frame(y, x),
         control.predictor = list(compute = TRUE),
         control.family = list(
             hyper = list(prec = list(param = c(1, 0.01)))))

## model B: stack two copies of the data; the second copy has y = NA,
## so its linear predictor is the predictive distribution for a new
## observation
xx = c(x, x)
yy = c(y, rep(NA, n))
ii = c(1:n, 1:n)
rr = inla(yy ~ 1 + xx +
              f(ii, model = "iid",
                hyper = list(prec = list(param = c(1, 0.01)))),
          data = data.frame(yy, xx, ii),
          ## fix the observational noise at a high precision; the iid
          ## term takes over its role
          control.family = list(
              hyper = list(prec = list(initial = 20, fixed = TRUE))),
          control.predictor = list(compute = TRUE))
## the fixed-effect estimates should be almost identical
r$summary.fixed - rr$summary.fixed
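## (illustrative) reduce the comparison above to a single number
max(abs(r$summary.fixed - rr$summary.fixed))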
## plot the predictive distribution for the i'th observation, with the
## observed value marked
i = 1
plot(inla.smarginal(rr$marginals.linear.predictor[[n + i]]), type = "l")
abline(v = y[i])
title(paste("observation", i))
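If you want numbers rather than a plot, you can summarise the same
predictive marginal directly. A small sketch using the marginal
utilities in the package (inla.zmarginal and inla.qmarginal; the
objects are those from the example above):

m = rr$marginals.linear.predictor[[n + i]]
## mean, sd and quantiles of the predictive distribution
inla.zmarginal(m)
## a 95% predictive interval
inla.qmarginal(c(0.025, 0.975), m)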
Best,
Håvard
--
Håvard Rue
he...@r-inla.org