Hi,
Christian is right that there are multiple ways CIs can be calculated. We do not discuss these alternatives in the ORM article because it is long enough already without that discussion. (Similarly, we could discuss the different ways nested model comparisons can be implemented and the different ways disattenuated correlations can be calculated.)
As a practical matter, I would personally just look at the CIs and not do the LR test at all; there is practically no difference in the performance of these tests. The only reason I would go for the LR test is if a reviewer asked me to. In both cases, scale the latent variables by fixing their variances instead of standardizing the estimates after estimation. The discriminantValidity function will automatically re-estimate incorrectly scaled models and will not rescale estimates to their standardized values. As such, I do not think that any results the function prints out use the delta method.
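As a sketch of what this looks like in practice (the two-factor model and the HolzingerSwineford1939 data are just illustrative; the semTools and lavaan packages are assumed):

```r
library(lavaan)
library(semTools)

# Illustrative two-factor CFA. Scale the latent variables by fixing their
# variances to 1 (std.lv = TRUE) instead of standardizing after estimation.
model <- '
  f1 =~ x1 + x2 + x3
  f2 =~ x4 + x5 + x6
'
fit <- sem(model, data = HolzingerSwineford1939, std.lv = TRUE)

# discriminantValidity() reports the factor correlations with CIs and
# nested-model tests against the cutoff.
discriminantValidity(fit, cutoff = 0.9, merge = FALSE)
```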
Mikko
To view this discussion on the web visit https://groups.google.com/d/msgid/lavaan/AM0PR04MB5955E53C4A6DE38EBC547920FC26A%40AM0PR04MB5955.eurprd04.prod.outlook.com.
Hi,
We must have a different understanding of what “delta method” means. I understand the delta method as a technique for approximating the distribution of a nonlinear function of an asymptotically normal parameter estimate. This involves pre- and post-multiplying the covariance matrix of the estimates by the first derivatives of the function with respect to the parameters (https://en.wikipedia.org/wiki/Delta_method). Lavaan does not calculate CIs this way but uses what I call the normal approximation method.
See lines 479-480 https://github.com/yrosseel/lavaan/blob/master/R/lav_object_methods.R
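For concreteness, the normal-approximation interval is just the estimate plus or minus a normal quantile times the standard error. A minimal base-R sketch (the numbers are illustrative, not from any particular model):

```r
# Normal-approximation (Wald-type) CI: est +/- z * se
est <- 0.583    # illustrative point estimate
se  <- 0.356    # illustrative standard error

level <- 0.95
z <- qnorm(1 - (1 - level) / 2)   # 1.96 for a 95% interval

ci <- est + c(-1, 1) * z * se
round(ci, 3)
```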
The discriminantValidity function already supports bootstrap CIs. If the lavaan object was estimated with bootstrap SEs, the function reports percentile intervals; this behavior is inherited from parameterEstimates. I just added an option to do other kinds of bootstrap CIs, following the options of parameterEstimates.
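Assuming a lavaan fit, the pattern looks something like this (model and data again illustrative; the boot.ci.type values are those accepted by parameterEstimates):

```r
library(lavaan)

model <- '
  f1 =~ x1 + x2 + x3
  f2 =~ x4 + x5 + x6
'
# Estimate with bootstrap standard errors; CIs based on the bootstrap
# replicates are then available downstream.
fit <- sem(model, data = HolzingerSwineford1939, std.lv = TRUE,
           se = "bootstrap", bootstrap = 1000)

# Percentile intervals by default; other types follow parameterEstimates(),
# e.g. "norm", "basic", "perc", "bca.simple".
parameterEstimates(fit, boot.ci.type = "perc")
```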
I do not understand how MonteCarloCIs would work in this context.
Best regards,
Mikko
Hi,
“Does this mean that the CI method and the LR test lead to different conclusions?” If you draw a binary conclusion based on a strict cutoff, then the answer is yes. But if we ask whether a CI upper limit of 0.9016559 and a p value of 0.04 for the LR test that est < .9 are substantively different results, I would say they are not.
There are a number of reasons why the tests might differ, and this is a general feature of SEM. For some discussion, see
Gonzalez, R., & Griffin, D. (2001). Testing parameters in structural equation modeling: Every “one” matters. Psychological Methods, 6(3), 258-269. https://doi.org/10.1037/1082-989X.6.3.258
(The article discusses scaling differences, which are not relevant to this case, but I think it explains quite well why variance estimates and tests can differ between techniques.)
Consider the following code:
library(lavaan)
## The industrialization and Political Democracy Example
## Bollen (1989), page 332
model <- '
# latent variable definitions
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + a*y2 + b*y3 + c*y4
dem65 =~ y5 + a*y6 + b*y7 + c*y8
# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60
# residual correlations
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8
'
fit <- sem(model, data = PoliticalDemocracy)
summary(fit, fit.measures = TRUE)
model <- '
# latent variable definitions
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + a*y2 + b*y3 + c*y4
dem65 =~ y5 + a*y6 + b*y7 + c*y8
# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60
# residual correlations
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8
'
fit2 <- sem(model, data = PoliticalDemocracy)
lavTestLRT(fit, fit2)
The two models differ in that the second does not include the y1 ~~ y5 error covariance. The estimate from the first model is:
Covariances:
Estimate Std.Err z-value P(>|z|)
.y1 ~~
.y5 0.583 0.356 1.637 0.102
The LR test gives
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fit 38 3153.6 3218.5 40.179
fit2 39 3154.7 3217.2 43.207 3.0274 0.16441 1 0.08187 .
These two p values should agree asymptotically, but in small samples they will differ. In some cases the difference puts them on opposite sides of a cutoff.
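To see the two p values side by side, the Wald p can be recomputed from the z statistic and the LR p from the chi-square difference (base R; the numbers are copied from the output above):

```r
# Wald test for the y1 ~~ y5 covariance: z = estimate / SE
z <- 0.583 / 0.356
p_wald <- 2 * pnorm(-abs(z))   # about 0.102, as in the model output

# LR test: chi-square difference of 3.0274 on 1 df
p_lr <- pchisq(3.0274, df = 1, lower.tail = FALSE)   # about 0.082

c(wald = p_wald, lr = p_lr)
```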
Mikko