Extracting the posterior variance-covariance matrix of fixed effects


Zhao Xing

Feb 19, 2014, 2:51:35 PM2/19/14
to r-inla-disc...@googlegroups.com
Dear experts in INLA

I know about the "lincomb" feature, but for fixed effects, is there an easy way to extract the posterior variance-covariance matrix?


Best,
Xing

INLA help

Feb 19, 2014, 2:57:12 PM2/19/14
to Zhao Xing, r-inla-disc...@googlegroups.com
Yes.

inla(..., control.fixed = list(correlation.matrix = TRUE))

see ?control.fixed
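For example (a minimal sketch with made-up data; the model and variable names are only illustrative), the requested matrix is then available under result$misc:

```r
# Sketch: fit a simple Gaussian model and request the posterior
# correlation matrix of the fixed effects (illustrative data).
library(INLA)
df <- data.frame(y = rnorm(100), x1 = runif(100), x2 = runif(100))
result <- inla(y ~ 1 + x1 + x2,
               data = df,
               family = "gaussian",
               control.fixed = list(correlation.matrix = TRUE))
# The correlation matrix of the fixed effects:
result$misc$lincomb.derived.correlation.matrix
```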

H

--
Håvard Rue
he...@r-inla.org

Zhao Xing

Feb 19, 2014, 4:44:25 PM2/19/14
to r-inla-disc...@googlegroups.com, Zhao Xing, he...@r-inla.org
Thanks, Håvard.

I noticed someone asked a similar question for the random effects in the Besag model, and the answer was no.

Does this apply to all sorts of random effects, or can some random effects have their posterior variance-covariance matrix in the INLA output?


Xing

INLA help

Feb 19, 2014, 5:13:55 PM2/19/14
to Zhao Xing, r-inla-disc...@googlegroups.com

This applies to the 'fixed' effects only. For the 'random effects' you
have to define linear combinations, and then use

control.inla = list(lincomb.derived.correlation.matrix = TRUE)
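A sketch of how this might look for an iid random effect, one lincomb per level of the effect (the effect name 'idx', the data, and the diag() weight matrix are illustrative; see ?inla.make.lincombs for the details):

```r
# Sketch: posterior correlation matrix for a random effect via lincombs.
library(INLA)
m   <- 10
df  <- data.frame(y = rnorm(m), idx = 1:m)
# One linear combination per level of the random effect 'idx':
# row i of diag(m) picks out level i.
lcs <- inla.make.lincombs(idx = diag(m))
result <- inla(y ~ f(idx, model = "iid"),
               data = df,
               family = "gaussian",
               lincomb = lcs,
               control.inla = list(lincomb.derived.correlation.matrix = TRUE))
result$misc$lincomb.derived.correlation.matrix
```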

Xing Zhao

Feb 19, 2014, 8:36:31 PM2/19/14
to he...@r-inla.org, r-inla-disc...@googlegroups.com
Two more questions:

1. After checking the FAQ, I assume the posterior correlation matrix (or covariance matrix) should be accessed via result$misc$lincomb.derived.correlation.matrix (or result$misc$lincomb.derived.covariance.matrix). So if I specify both control.fixed = list(correlation.matrix = TRUE) and control.inla = list(lincomb.derived.correlation.matrix = TRUE), what is the output for: the fixed effects or the lincombs?

2. The following small simulation gives some results that look weird to me.

> set.seed(24)
> library(INLA)
> n = 1000
> x1 = sort(runif(n))
> x2 = sort(runif(n))
> library(MASS)
> Sigma <- matrix(c(10,3,3,2),2,2)
> Sigma #This will be the real covariance matrix
     [,1] [,2]
[1,]   10    3
[2,]    3    2
> beta <- mvrnorm(n=1000, c(-2,7), Sigma)
> y = 1 + beta[,1] * x1 + beta[,2]*x2 + rnorm(n, sd = 0.1)
> formula = y ~ 1 + x1 + x2
> result = inla(formula,
+               data = data.frame(x1, x2, y),
+               control.fixed = list(correlation.matrix=TRUE),
+               family = "gaussian")
> result$misc$lincomb.derived.covariance.matrix
            (Intercept)          x1          x2
(Intercept)  0.02360268   0.2343025  -0.2690078
x1           0.23430254  97.0017226 -97.0702108
x2          -0.26900779 -97.0702108  97.2078268

I know this may be conceptually wrong, and the betas should be treated as random effects rather than fixed effects.
How should I understand the posterior covariance between fixed effects? If you can get back to me with a small example, I would really appreciate it.


Thanks for your time
Xing

INLA help

Feb 20, 2014, 1:19:32 AM2/20/14
to Xing Zhao, r-inla-disc...@googlegroups.com

The posterior correlation matrix for the fixed effects is implemented
through linear combinations: just make one lincomb for each fixed
effect. The option in control.fixed is simply a shorthand for this. If
you have additional lincombs, these new ones will be added.

Yes, you simulate from a model with 'correlated random effects' but
estimate it with fixed effects, so... you can of course change the model
so it is correct according to the data, or the opposite.

The 'fixed effects' in the Bayesian context are just (Gaussian)
variables with a prior mean and precision, and conditioning on the data
gives you their posterior distribution. The posterior correlation
matrix is then the correlation matrix between these three 'fixed
effects', and likewise for the covariance matrix.

The terms 'fixed' and 'random' do not make much sense in the Bayesian
context, as only the prior is different, but we still use them. A fixed
effect has prior N(0, prec) where both the mean and prec are fixed. A
random effect has prior (for example) N(0, prec) where prec is random
as well, with its own prior.
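To make the distinction concrete, here is a sketch of how the two kinds of priors are specified in an inla() call (the numeric values and the effect name 'idx' are illustrative, not recommendations):

```r
# Fixed effect: Gaussian prior with fixed mean and fixed precision,
# set through control.fixed.
control.fixed = list(mean = 0, prec = 0.001)

# Random effect: Gaussian prior whose precision is itself random,
# here with a log-gamma hyperprior on the precision.
f(idx, model = "iid",
  hyper = list(prec = list(prior = "loggamma", param = c(1, 0.00005))))
```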

Best