Hi Kay,
You can read details in the help-page section on Higher-Order Factors. Basically, your first example isn't modeling a purely higher-order construct because it has both manifest and common-factor indicators. The formula sort of assumes that a higher-order factor only has common-factor indicators, because that is how the higher-order construct variance is partitioned from the lower-order constructs' error variance and the manifest-variables' error variance. Calculating the help-page equation manually reveals what is happening:
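In matrix shorthand (my notation, not necessarily the help page's exact symbols), the quantity computed below is:

```latex
\omega_{\text{higher}}
  = \frac{\mathbf{1}^\top \,\Lambda B \Psi B^\top \Lambda^\top\, \mathbf{1}}
         {\mathbf{1}^\top S \,\mathbf{1}}
```

where Lambda holds the first-order loadings, Beta the latent regressions, Psi the latent (residual) variances, and S the sample covariance matrix of the indicators. The numerator and denominator are exactly the sum() calls in the R code below.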
## common-factor variance using "out1"
(L1 <- lavInspect(out1, "est")$lambda)
(B1 <- lavInspect(out1, "est")$beta)
(P1 <- lavInspect(out1, "est")$psi)
sum( L1 %*% B1 %*% P1 %*% t(B1) %*% t(L1) ) # 5.068055
## common-factor variance using "out2"
(L2 <- lavInspect(out2, "est")$lambda)
(B2 <- lavInspect(out2, "est")$beta)
(P2 <- lavInspect(out2, "est")$psi)
sum( L2 %*% B2 %*% P2 %*% t(B2) %*% t(L2) ) # 19.90495
Both are expressed relative to the observed total variance of a composite to obtain reliability:
(S <- lavInspect(out1, "sampstat")$cov) # same as out2
sum(S) # 35.31357
So the "out1" version basically ends up "zeroing out" the common-variance contributions of x7-x9 from Lambda (their first-order loadings on "general"): because "general" does not load on itself, its diagonal element of the Beta matrix is 0, which multiplies those loadings by zero. Of course, there is only one "general" factor, but that is what I mean about the equation assuming that a higher-order factor is "purely" higher-level, not a blend. So I don't think the 0.144 reliability estimate from out1 is right, because this equation isn't set up to work with factors measured by both manifest and latent indicators.
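To see the mechanism in isolation, here is a toy sketch of the same matrix algebra in NumPy (made-up loadings and variances, not the lavaan estimates): one lower-order factor ("specific") regressed on a higher-order factor ("general") that also has one direct manifest indicator, x7.

```python
import numpy as np

# Lambda: columns are [specific, general]
L = np.array([[0.7, 0.0],   # x1 loads on specific
              [0.6, 0.0],   # x2 loads on specific
              [0.0, 0.8]])  # x7 loads directly on general

# Beta (rows = outcome, columns = predictor):
# specific ~ general, but B["general","general"] == 0
B = np.array([[0.0, 0.9],
              [0.0, 0.0]])

# Psi: residual variance of specific, variance of general
P = np.array([[0.2, 0.0],
              [0.0, 1.0]])

# Help-page formula: x7's direct 0.8 loading is wiped out,
# because (L @ B) multiplies it by B's zero diagonal entry
common = L @ B @ P @ B.T @ L.T
print(round(common.sum(), 4))      # only x1 and x2 contribute

# The proposed fix: set the diagonal entry to 1
Balt = B.copy()
Balt[1, 1] = 1.0
common_alt = L @ Balt @ P @ Balt.T @ L.T
print(round(common_alt.sum(), 4))  # now x7 contributes too
```

The first sum comes out to (0.63 + 0.54)^2 = 1.3689, with x7's loading contributing nothing; after the fix it is (0.63 + 0.54 + 0.8)^2 = 3.8809, exactly the mechanism behind the Beta patch below.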
But the 0.641 you get for "general" from out1 is not right either. Without including the Beta matrix, that calculation only accounts for the manifest-indicator contributions (i.e., from x7-x9) to the common-factor and total variance:
## or without the higher-order formula
sum( (L1 %*% P1 %*% t(L1))[7:9, 7:9] )
## relative to variance of x7-x9 only
sum( S[7:9, 7:9] )
## ratio == 0.641
You want reliability for a composite calculated from x1-x9, not x7-x9.
One solution would be for compRelSEM() to check whether the higher-order factor also has manifest indicators, then change its 0 in the Beta matrix to 1 so it doesn't cancel out the first-order loadings:
B1alt <- B1
B1alt["general", "general"] <- 1
B1alt
## calculate omega
sum( L1 %*% B1alt %*% P1 %*% t(B1alt) %*% t(L1) ) / sum(S) # 0.501
Alternatively, without waiting for any change to compRelSEM(), you can already fit a statistically equivalent CFA that defines x7-x9 as single-indicator constructs, which in turn load on the higher-order construct. That treats "general" as purely higher-order:
HS.model_3 <- '
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
X7 =~ x7
X8 =~ x8
X9 =~ x9
general =~ visual + textual + X7 + X8 + X9
'
out3 <- cfa(HS.model_3, df) # same degrees of freedom as out1
This yields the same omega estimate as changing Beta above:
compRelSEM(out3, higher = "general") # 0.501
So I think 50.1% reliability is the "right" answer for this example. That is lower than the 56.4% from out2, but out2 is not statistically equivalent to out1 and out3. It is less restrictive (1 fewer df) and fits significantly better:
anova(out1, out2)
So we can't expect the estimates to be the same.
Terrence D. Jorgensen (he, him, his)
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam
http://www.uva.nl/profile/t.d.jorgensen