In my analysis this is the case: the multivariate score test is non-significant (p = .42), so the strong model need not be rejected in favor of the weak model (see above).
However, some of the univariate score tests are borderline significant (e.g., p = .048). Am I conducting a 'proper statistical analysis' if I turn to these univariate tests while the multivariate test indicates non-significance?
lavTestScore(fit, release = 18:32)
Now, when I use parTable() to find out which constraints these 3 tests are associated with, I don't understand what they represent: I didn't name anything .p##. myself, so I guess it's done internally.
Lastly, I had thought that if the test failed, at least some intercepts (which I constrained to be equal over time) would pop up in the univariate tests. After all, that's the only thing that changed between the weak and strong model. Where could I go from here to potentially show partial strong invariance?
lavTestScore(fit, release = 18:20)
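For what it's worth, here is a minimal sketch of where those .p##. labels come from, using lavaan's built-in HolzingerSwineford1939 data as a stand-in for your own model: they are "plabels" that lavaan assigns to parameters internally, and parTable() lets you map a constraint back to the parameter it restricts.

```r
library(lavaan)
# Minimal sketch with built-in data; substitute your own model and groups.
model <- ' visual =~ x1 + x2 + x3 '
fit <- cfa(model, data = HolzingerSwineford1939, group = "school",
           group.equal = "loadings")
pt <- parTable(fit)
# equality constraints appear as rows with op == "=="; their lhs/rhs hold
# the internally generated .p##. plabels
subset(pt, op == "==")[, c("lhs", "op", "rhs")]
# map a plabel (e.g., the rhs of the first constraint) back to its parameter
lab <- subset(pt, op == "==")$rhs[1]
subset(pt, plabel == lab)[, c("lhs", "op", "rhs", "group")]
```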
Chi Square Difference Test
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
fit.configural 6 2660.6 2818.8 9.5528
fit.loadings 12 2651.7 2783.6 12.7212 3.1684 6 0.7874
fit.intercepts 18 2641.5 2747.0 14.5084 1.7872 6 0.9382
fit.means 20 2641.0 2737.8 18.0140 3.5056 2 0.1733
Fit measures:
cfi rmsea cfi.delta rmsea.delta
fit.configural 0.978 0.054 NA NA
fit.loadings 0.996 0.017 0.017 0.037
fit.intercepts 1.000 0.000 0.004 0.017
fit.means 1.000 0.000 0.000 0.000
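Output in this format (a "Chi Square Difference Test" table followed by "Fit measures:") looks like what semTools' measurementInvariance() convenience wrapper prints; assuming that is what was used, a call along these lines (shown with lavaan's built-in example data, not the poster's own) would produce the configural -> loadings -> intercepts -> means sequence:

```r
library(lavaan)
library(semTools)
# Sketch with built-in data as a stand-in. Note that measurementInvariance()
# is deprecated in recent semTools versions in favor of measEq.syntax(); it
# fits the configural, loadings, intercepts, and means models and prints the
# chi-square difference tests plus the CFI/RMSEA deltas shown above.
model <- ' visual =~ x1 + x2 + x3 '
measurementInvariance(model = model, data = HolzingerSwineford1939,
                      group = "school")
```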
Hello, I need help understanding my output for measurement invariance.
--
You received this message because you are subscribed to the Google Groups "lavaan" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lavaan+unsubscribe@googlegroups.com.
To post to this group, send email to lav...@googlegroups.com.
Visit this group at https://groups.google.com/group/lavaan.
For more options, visit https://groups.google.com/d/optout.
Also, regarding the "proper" type of analysis you are hoping to perform: if you use alpha = 5% for your omnibus test and then conduct several follow-up tests, you should adjust for the number of tests to control the familywise Type I error rate (e.g., a Bonferroni adjustment, dividing alpha by the number of intercept constraints you test following your LRT).
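To make the arithmetic concrete, a base-R sketch (the three p-values below are made up for illustration):

```r
alpha <- 0.05
n_tests <- 3               # e.g., three intercept constraints for one item
alpha / n_tests            # per-test threshold under Bonferroni: ~0.0167
# equivalently, inflate the p-values and keep alpha = .05:
p_values <- c(0.048, 0.21, 0.63)   # hypothetical univariate score-test p-values
p.adjust(p_values, method = "bonferroni")   # 0.144 0.630 1.000
```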
> Lastly, I had thought that if the test failed, at least some intercepts (which I constrained to be equal over time) would pop up in the univariate tests. After all, that's the only thing that changed between the weak and strong model. Where could I go from here to potentially show partial strong invariance?

Well, the only 2 significant univariate tests are both associated with the intercept labeled .p57., although neither would be significant if you took steps to control your Type I error rate. To perform fewer tests (and have more power), you could run only 1 post-hoc test of that item's intercepts:
lavTestScore(fit, release = 18:20)

If that particular 3-df post-hoc test is significant, release those constraints and compare the partial-strong model to the weak model using an LRT.
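As a sketch of that last step, again with lavaan's built-in data standing in for the actual model ("x3 ~ 1" below is a hypothetical item intercept, not necessarily the one flagged by the score test):

```r
library(lavaan)
# Sketch using built-in data; substitute your own model, data, grouping
# variable, and the intercept flagged by lavTestScore().
model <- ' visual =~ x1 + x2 + x3 '
# weak (metric) invariance: equal loadings across groups
fit.loadings <- cfa(model, data = HolzingerSwineford1939, group = "school",
                    group.equal = "loadings")
# partial strong invariance: equal loadings + intercepts, except the flagged
# item's intercept, which group.partial leaves free in each group
fit.partial <- cfa(model, data = HolzingerSwineford1939, group = "school",
                   group.equal = c("loadings", "intercepts"),
                   group.partial = "x3 ~ 1")
# likelihood-ratio test of the partial-strong model against the weak model
lavTestLRT(fit.loadings, fit.partial)
```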
Thus, if I test whether to release all intercept constraints of one item (i.e., 3 in total), then p < (0.05/3) is needed?
By the way, shouldn't I also correct for the multiple chi-square tests that I perform to test for measurement invariance? (E.g., configural -> weak -> strong would be 2 tests.) If so, does the p-value then need to be greater than p = (0.05*2)?
Shouldn't I also account for the first tests I did? That is, lavTestScore(fit) to identify which constraints needed to be released? Or how else would I know that I should release the intercepts associated with parameters 18:20, i.e., lavTestScore(fit, release = 18:20)?
I am concerned about the fit indices: why are the CFI 1.00 and the RMSEA 0.00?
I have the same sample of n = 200, and they have taken the three parallel test forms. I treated 'form type' as the grouping variable (so the categorical variable is form A, B, and C).