> According to equation 1 in Bakk et al. (2014), the sampling (co)variance
> matrix is equal to the inverse of the negative Hessian matrix. However,
> I could not replicate the matrix obtained with the vcov() function by
> using the Hessian. Here is the code I used:
> HS.model <- ' visual =~ x1 + x2 + x3
> textual =~ x4 + x5 + x6
> speed =~ x7 + x8 + x9 '
>
> fit <- cfa(HS.model, data = HolzingerSwineford1939, orthogonal = TRUE)
>
> vcov <- lavInspect(object = fit, what = "vcov")
>
> hessian <- lavInspect(object = fit, what = "hessian")
>
> round(solve(-hessian), 3)
> round(vcov, 3)
Three things:
1) by default, lavaan uses 'expected' information (which does not use
the Hessian); to use the Hessian, switch to 'observed' information:
fit <- cfa(HS.model, data = HolzingerSwineford1939, orthogonal = TRUE,
information = "observed")
2) when maximizing the (log)likelihood, you indeed need the negative of
the Hessian; but lavaan *minimizes* the ML discrepancy function, so
there is no need for the minus sign
3) you need to divide the *inverse* of the Hessian by the sample size
(here, N = 301) to get vcov:
round(sqrt(diag(vcov)), 3)
round(sqrt(diag(1/301*solve(hessian))), 3)
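The general fact behind points 2) and 3) can be checked outside lavaan. Below is a minimal sketch in Python (not lavaan; all names are my own) using a normal model with known sigma, where the sampling variance of the mean is known to be sigma^2/N. Inverting the negative Hessian of the log-likelihood, and inverting N times the Hessian of the *minimized* per-observation discrepancy, give the same answer:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n = 301
x = rng.normal(5.0, sigma, size=n)

# Log-likelihood of a normal model with known sigma (only the mean is free).
def loglik(mu):
    return (-0.5 * np.sum((x - mu) ** 2) / sigma**2
            - n * np.log(sigma * np.sqrt(2 * np.pi)))

# Per-observation discrepancy, the kind of function an optimizer *minimizes*.
def discrepancy(mu):
    return -loglik(mu) / n

mu_hat = x.mean()  # the ML estimate of the mean

# Second derivative via central finite differences
# (exact here up to rounding, since loglik is quadratic in mu).
def hess(f, p, eps=1e-4):
    return (f(p + eps) - 2.0 * f(p) + f(p - eps)) / eps**2

# Maximizing the log-likelihood: invert the *negative* Hessian.
vcov_from_loglik = 1.0 / (-hess(loglik, mu_hat))

# Minimizing the discrepancy: the Hessian is already positive,
# so no minus sign, but it must be rescaled by the sample size n.
vcov_from_disc = 1.0 / (n * hess(discrepancy, mu_hat))

print(vcov_from_loglik, vcov_from_disc, sigma**2 / n)  # all three ~ sigma**2 / n
```

The scalar case keeps the arithmetic transparent; with a parameter vector, `1.0 / (...)` becomes a matrix inverse of the Hessian, exactly as `solve()` is used in the R code above.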
Yves.