If the model you want is
eta = beta_0 + beta_x * x
but you pre-standardise to x_std = (x - mean(x)) / sd(x), then
eta = beta_0 + beta_x * (x_std * sd(x) + mean(x))
    = beta_0 + beta_x * mean(x) + beta_x * sd(x) * x_std
This shows that the standardisation changes the interpretation of the
parameters _as seen by inla_: the model it actually fits is
eta = beta_0_std + beta_x_std * x_std
with beta_0_std = beta_0 + beta_x * mean(x) and beta_x_std = beta_x * sd(x).
So to recover beta_x, you need
beta_x = beta_x_std / sd(x),
and to recover beta_0, you need
beta_0 = beta_0_std - beta_x * mean(x) = beta_0_std - (beta_x_std / sd(x)) * mean(x).
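As a sketch in R of this back-transformation (using plain lm() as a stand-in for inla(), with made-up data, purely to illustrate the algebra):

```r
set.seed(1)
x <- rnorm(50, mean = 10, sd = 3)
y <- 2 + 0.5 * x + rnorm(50, sd = 0.1)

# Standardise with the original data's mean and sd
x_std <- (x - mean(x)) / sd(x)
fit_std <- lm(y ~ x_std)
beta_0_std <- coef(fit_std)[[1]]
beta_x_std <- coef(fit_std)[[2]]

# Recover the parameters on the original scale
beta_x <- beta_x_std / sd(x)
beta_0 <- beta_0_std - beta_x_std / sd(x) * mean(x)
# beta_0 and beta_x now agree with coef(lm(y ~ x))
```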
If all you want is prediction based on new data x_new, you can alternatively compute
x_new_std = (x_new - mean(x)) / sd(x)
and use
eta_new = beta_0_std + beta_x_std * x_new_std
instead of converting beta_x_std back to beta_x.
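Continuing with lm() as a stand-in for inla() and made-up data, the prediction route looks like this; the key point is that x_new is standardised with the original data's mean and sd:

```r
set.seed(1)
x <- rnorm(50, mean = 10, sd = 3)
y <- 2 + 0.5 * x + rnorm(50, sd = 0.1)
x_std <- (x - mean(x)) / sd(x)
fit_std <- lm(y ~ x_std)

# Predict for new data, standardised with the ORIGINAL mean(x) and sd(x)
x_new <- c(8, 10, 12)
x_new_std <- (x_new - mean(x)) / sd(x)
eta_new <- coef(fit_std)[[1]] + coef(fit_std)[[2]] * x_new_std
# agrees with predictions from lm(y ~ x) at x_new
```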
The trap you need to avoid is the temptation to use x_std = scale(x)
and then x_new_std = scale(x_new). That is completely different from
what I detailed above, as it would standardise x_new with mean(x_new)
and sd(x_new), which completely breaks the definition of the model:
the model is defined in terms of standardising with the mean and sd of
the _original_ data.
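A small illustration of the trap with hypothetical numbers; the two standardisations generally give different values, so the wrong one feeds a different covariate into the model than the one it was defined with:

```r
x <- c(1, 2, 3, 4, 5)      # original data
x_new <- c(10, 11, 12)     # new data on a different range

# WRONG: scale(x_new) standardises with mean(x_new) and sd(x_new)
wrong <- as.numeric(scale(x_new))
# RIGHT: reuse the original data's mean(x) and sd(x)
right <- (x_new - mean(x)) / sd(x)

wrong  # -1, 0, 1
right  # roughly 4.43, 5.06, 5.69
```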
Finn
--
Finn Lindgren
email:
finn.l...@gmail.com