My code has been running for several hours now on a MacBook Pro, and while it says it's running, the progress bar (showProgress = TRUE) for the analysis still says 0%. Is that normal?
I know that the more observations, variables, and permutations you have, the longer it will take to run. I just wanted to make sure that seeing '0%' in the console is normal even though the code has been running for over four hours. In the past, when I accidentally mis-specified something, it would still take an hour or so to run just to return an error, so ideally I would like to avoid running this for hours only for it to eventually terminate with a mis-specification error.
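One low-cost sanity check (just a sketch; fit.metric and fit.config below stand in for whatever constrained and unconstrained models you are actually comparing) is to rerun the same call with a tiny nPermute first and confirm that the progress bar moves at all:

library(semTools)

## dry run with only a few permutations: if the bar advances and summary()
## returns output, the long run is probably fine, just slow
trial <- permuteMeasEq(nPermute = 10, con = fit.metric, uncon = fit.config,
                       param = "loadings", showProgress = TRUE)
summary(trial)

## for the real run, parallelType = "multicore" with ncpus set to the number
## of available cores can spread the permutations across cores on a Mac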
It sounds like I should just use the permuteMeasEq function within the multi-group CFA framework.
My goal is to detect uniform and non-uniform DIF, but I know that tests of metric and scalar equivalence are fairly comparable to that anyway.
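For what it's worth, a minimal sketch of that mapping with permuteMeasEq(), assuming fit.config, fit.metric, and fit.scalar come from your own configural/metric/scalar sequence: non-uniform DIF corresponds to non-invariant loadings, and uniform DIF to non-invariant intercepts.

library(semTools)

## non-uniform DIF: test equality of loadings (configural vs. metric)
perm.loadings <- permuteMeasEq(nPermute = 1000, con = fit.metric,
                               uncon = fit.config, param = "loadings")
summary(perm.loadings)

## uniform DIF: test equality of intercepts (metric vs. scalar)
perm.intercepts <- permuteMeasEq(nPermute = 1000, con = fit.scalar,
                                 uncon = fit.metric, param = "intercepts")
summary(perm.intercepts)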
As a follow-up question: I have a CFA with three groups. The indicators were collected with 7-point Likert scales, which I treat as continuous, and the data are moderately non-normally distributed. Normally I would use a Satorra-Bentler chi-square difference test and additionally look at some delta AFIs, but I understand that the permutation test gives results that are clearer to interpret than arbitrary cut-off values. For the chi-square difference test I now have at least three options: the Satorra-Bentler scaled difference test, a chi-square (ML) difference evaluated by a permutation test, or a robust chi-square (MLR) difference evaluated by a permutation test. Is any variant clearly preferable and, if so, for what reason?
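I can't say which variant wins, but for concreteness, here is a sketch of how the options could be set up; mod (your model syntax), mydata, and grp are placeholders rather than anything from the original post:

library(lavaan)
library(semTools)

## fit adjacent invariance models with a robust estimator
fit.config <- cfa(mod, data = mydata, group = "grp", estimator = "MLR")
fit.metric <- cfa(mod, data = mydata, group = "grp", estimator = "MLR",
                  group.equal = "loadings")

## option 1: Satorra-Bentler scaled chi-square difference test
lavTestLRT(fit.config, fit.metric, method = "satorra.bentler.2001")

## options 2 and 3: permutation distribution of the difference in fit between
## the constrained and unconstrained models; see ?permuteMeasEq (the AFIs and
## moreAFIs arguments) for choosing which fit statistics are permuted
perm <- permuteMeasEq(nPermute = 1000, con = fit.metric, uncon = fit.config,
                      param = "loadings")
summary(perm)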
It ran within several hours versus the MIMIC approach, which was taking days to run and still not finishing.
I liked that the permutation method also adjusts for Type I error inflation.
My sample is around 2,000 people, so I was evaluating measurement invariance based on the change in CFI/RMSEA rather than the result of an LRT, as my understanding is that larger samples can lead to over-rejection of invariance constraints when the decision is based solely on the change in chi-square.
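For reference, a small sketch of pulling the change in CFI/RMSEA out of two adjacent fits; fit.config and fit.metric are placeholders for your own models:

library(lavaan)

## change in alternative fit indices between adjacent levels of invariance
afi.config <- fitMeasures(fit.config, c("cfi", "rmsea"))
afi.metric <- fitMeasures(fit.metric, c("cfi", "rmsea"))
round(afi.metric - afi.config, 3)
## semTools::compareFit() can also tabulate these differences for a whole sequence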
## lavaan provides cfa(), fitMeasures(), and the HolzingerSwineford1939 data;
## semTools provides measurementInvariance() and net()
library(lavaan)
library(semTools)

## baseline model
mod.base <- '
## loadings
visual =~ c(L1, L1)*x1 + c(L2, L2)*x2 + c(L3, L3)*x3
textual =~ c(L4, L4)*x4 + c(L5, L5)*x5 + c(L6, L6)*x6
speed =~ c(L7, L7)*x7 + c(L8, L8)*x8 + c(L9, L9)*x9
## factor correlations (a single label, Phi, constrains them equal to one another and across groups)
visual ~~ c(Phi, Phi)*textual + c(Phi, Phi)*speed
textual ~~ c(Phi, Phi)*speed
## intercepts
x1 ~ c(T1, T1)*1
x2 ~ c(T2, T2)*1
x3 ~ c(T3, T3)*1
x4 ~ c(T4, T4)*1
x5 ~ c(T5, T5)*1
x6 ~ c(T6, T6)*1
x7 ~ c(T7, T7)*1
x8 ~ c(T8, T8)*1
x9 ~ c(T9, T9)*1
## residuals
x1 ~~ c(D1, D1)*x1
x2 ~~ c(D2, D2)*x2
x3 ~~ c(D3, D3)*x3
x4 ~~ c(D4, D4)*x4
x5 ~~ c(D5, D5)*x5
x6 ~~ c(D6, D6)*x6
x7 ~~ c(D7, D7)*x7
x8 ~~ c(D8, D8)*x8
x9 ~~ c(D9, D9)*x9
## define nu (ratio of common to unique variance, L^2 / D, for each indicator; factor variances are fixed to 1 via std.lv = TRUE)
nu1 := (L1^2) / D1
nu2 := (L2^2) / D2
nu3 := (L3^2) / D3
nu4 := (L4^2) / D4
nu5 := (L5^2) / D5
nu6 := (L6^2) / D6
nu7 := (L7^2) / D7
nu8 := (L8^2) / D8
nu9 := (L9^2) / D9
## constrain nu to a single estimate
nu1 == nu2
nu2 == nu3
nu3 == nu4
nu4 == nu5
nu5 == nu6
nu6 == nu7
nu7 == nu8
nu8 == nu9
'
fit.base <- cfa(mod.base, data = HolzingerSwineford1939, group = "school", std.lv = TRUE)
summary(fit.base)
## target model
HW.model <- '
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
'
## fit model with varying degrees of measurement equivalence
fit.inv <- measurementInvariance(model = HW.model, data = HolzingerSwineford1939,
group = "school", strict = TRUE)
summary(fit.inv) # names of the models within the list
## test for nesting
net(null = fit.base, config = fit.inv$fit.configural,
metric = fit.inv$fit.loadings, scalar = fit.inv$fit.intercepts,
strict = fit.inv$fit.residuals, means = fit.inv$fit.means)
## estimate the noncentrality parameter (chisq - df) for each model, then compute CFI for each level of measurement equivalence
(ncpEQ <- diff(fitMeasures(fit.base, c("df","chisq")))[[1]])
(ncpconfig <- diff(fitMeasures(fit.inv$fit.configural, c("df","chisq")))[[1]])
(ncpmetric <- diff(fitMeasures(fit.inv$fit.loadings, c("df","chisq")))[[1]])
(ncpscalar <- diff(fitMeasures(fit.inv$fit.intercepts, c("df","chisq")))[[1]])
(CFI.metric <- 1 - max(c(ncpmetric - ncpconfig, 0)) / max(c(ncpEQ - ncpconfig, ncpmetric - ncpconfig, 0)))
(CFI.scalar <- 1 - max(c(ncpscalar - ncpmetric, 0)) / max(c(ncpEQ - ncpmetric, ncpscalar - ncpmetric, 0)))
## Lai & Yoon must have made a typo on p. 242 (column 2), suggesting instead:
#(CFI.scalar <- 1 - max(c(ncpscalar - ncpmetric, 0)) / max(c(ncpEQ - ncpscalar, ncpscalar - ncpmetric, 0)))