Dear everyone,
I am testing measurement invariance across school groups. My syntax is as follows:
library(lavaan)
library(semTools)  # provides measurementInvariance()

configschool <- cfa(model, data = data1, group = "School", estimator = "mlm")
weekschool   <- cfa(model, data = data1, group = "School", estimator = "mlm",
                    group.equal = c("loadings"))
strongschool <- cfa(model, data = data1, group = "School", estimator = "mlm",
                    group.equal = c("loadings", "intercepts"))
strictschool <- cfa(model, data = data1, group = "School", estimator = "mlm",
                    group.equal = c("loadings", "intercepts", "residuals"))
measurementInvariance(model, data = data1, group = "School", estimator = "mlm")
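(As an aside, semTools also has a compareFit() function that summarizes nested-model comparisons in one call; this is only a sketch assuming the four fitted objects above already exist in the workspace.)

```r
# Hedged sketch: compareFit() from semTools collects the chi-square
# difference tests and fit-index changes for a set of nested models.
library(semTools)
compareFit(configschool, weekschool, strongschool, strictschool)
```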
The output is below:
Measurement invariance models:
Model 1 : fit.configural
Model 2 : fit.loadings
Model 3 : fit.intercepts
Model 4 : fit.means
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
fit.configural 126 14645 15061 321.95
fit.loadings 144 14658 15003 370.53 14.916 18 0.667741
fit.intercepts 162 14650 14924 398.80 41.853 18 0.001159 **
fit.means 166 14643 14901 400.07 1.306 4 0.860438
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Fit measures:
cfi.scaled rmsea.scaled cfi.scaled.delta rmsea.scaled.delta
fit.configural 0.908 0.039 NA NA
fit.loadings 0.930 0.032 0.022 0.007
fit.intercepts 0.895 0.037 0.034 0.005
fit.means 0.899 0.036 0.003 0.001
I used the change in CFI to decide whether measurement invariance holds. In the "Fit measures" section, in the row fit.loadings, cfi.scaled.delta = .022, which is greater than the cutoff of .01, suggesting that there is no measurement invariance between the configural model and the weak (metric) model. However, in the "Scaled Chi Square Difference Test" section I found no significant difference between the configural and weak models (p = .67).
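To double-check the reported deltas, one can pull the scaled CFI out of each fitted model and difference adjacent models by hand. This is only a sketch; it assumes the four fitted objects from the syntax above are still in the workspace.

```r
# Hedged sketch: extract cfi.scaled from each fitted model and compute
# the change between each model and the previous, less constrained one.
library(lavaan)

fits <- list(configural = configschool, weak   = weekschool,
             strong     = strongschool, strict = strictschool)
cfi <- sapply(fits, fitMeasures, fit.measures = "cfi.scaled")
diff(cfi)  # delta-CFI between adjacent models
```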
In addition, I tested the difference between each pair of models and found no significant differences:
> anova(configschool, weekschool)
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
configschool 126 14645 15061 321.95
weekschool 144 14658 15003 370.53 14.916 18 0.6677
> anova(configschool, strongschool)
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
configschool 126 14645 15061 321.95
strongschool 162 14650 14924 398.80 39.085 36 0.333
> anova(configschool, strictschool)
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
configschool 126 14645 15061 321.95
strictschool 184 14735 14921 527.40 56.597 58 0.5276
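Note that the comparisons above all use the configural model as the baseline, whereas the measurementInvariance() table compares each model with the one immediately before it. A sketch of the adjacent comparisons (assuming the fitted objects above):

```r
# Hedged sketch: compare *adjacent* nested models, which is what the
# measurementInvariance() table reports, rather than comparing every
# constrained model against the configural model.
anova(configschool, weekschool)     # configural vs. weak (loadings)
anova(weekschool, strongschool)     # weak vs. strong (+ intercepts)
anova(strongschool, strictschool)   # strong vs. strict (+ residuals)
```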
So, according to the anova results there is no significant difference across models, but according to the "Fit measures" section there appear to be meaningful differences across models.
These results seem inconsistent. Because I am new to measurement invariance, I am a bit confused by them. Could anyone please help me explain:
1. Are there any differences between those models?
2. Is there a difference between the results from anova() and those from measurementInvariance()?
Because I want to test construct validity, could anyone please tell me which level of measurement invariance (configural, weak, strong, etc.) is sufficient to support the construct validity of my scale?
Thank you very much in advance!
Kind regards,
Torres.