Making sense of measurement invariance output: What does fit.configural output mean?


zrc2...@gmail.com

May 5, 2016, 10:18:40 PM
to lavaan
Hello-

I am completing a CFA and would like to compare measurement invariance across two groups. I have a very elementary understanding of what the measurementInvariance() output represents and would appreciate some clarification on how to interpret the following output. Am I correct to conclude that

1) configural invariance holds, i.e. the two groups have the same number and type of factors and the same set of significant loadings? [This is the part I am most unclear on. If the factor structures differed for the two groups, how would the output signal this?]

2) despite the significant chi-square difference tests, the Delta-CFI may support strong invariance using the <.01 cutoff, while only configural invariance holds applying the <.002 cutoff? (https://groups.google.com/d/msg/lavaan/FQ5EWmclbjI/z3Dtv4Z5AAAJ)
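[Editor's note: a minimal, self-contained sketch of checking a Delta-CFI cutoff by hand, using lavaan's built-in HolzingerSwineford1939 example data rather than the poster's data; the model and cutoff values are illustrative.]

```r
library(lavaan)

# Three-factor example model from the lavaan tutorial
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

config <- cfa(HS.model, data = HolzingerSwineford1939, group = "school")
weak   <- cfa(HS.model, data = HolzingerSwineford1939, group = "school",
              group.equal = "loadings")

# Delta-CFI: drop in CFI when loadings are constrained equal
delta.cfi <- fitMeasures(config, "cfi") - fitMeasures(weak, "cfi")
delta.cfi  # compare against the .01 (Cheung & Rensvold) or .002 cutoff
```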

Many thanks



> config = cfa(MODEL, data=Data, group="gp")
> weak   = cfa(MODEL, data=Data, group="gp", group.equal="loadings")
> strong = cfa(MODEL, data=Data, group="gp", group.equal=c("loadings","intercepts"))
> strict = cfa(MODEL, data=Data, group="gp", group.equal=c("loadings","intercepts","residuals"))
> anova(config, weak, strong, strict)

Chi Square Difference Test

        Df   AIC   BIC  Chisq Chisq diff Df diff Pr(>Chisq)
config 804 36543 37357 1719.1
weak   831 36733 37429 1785.7      66.64      27  3.342e-05 ***
strong 858 36562 37139 1845.3      59.53      27  0.0003039 ***
strict 888 36862 37308 2205.8     360.48      30  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> measurementInvariance(MODEL, data=Data, group="gp")

Measurement invariance models:

Model 1 : fit.configural
Model 2 : fit.loadings
Model 3 : fit.intercepts
Model 4 : fit.means

Chi Square Difference Test

                Df   AIC   BIC  Chisq Chisq diff Df diff Pr(>Chisq)
fit.configural 804 36718 37533 1716.6
fit.loadings   831 36733 37429 1785.7     69.126      27  1.493e-05 ***
fit.intercepts 858 36741 37318 1847.0     61.261      27  0.0001799 ***
fit.means      861 36768 37332 1880.1     33.100       3  3.068e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Fit measures:

                 cfi rmsea cfi.delta rmsea.delta
fit.configural 0.918 0.062        NA          NA
fit.loadings   0.915 0.062     0.004       0.000
fit.intercepts 0.911 0.063     0.003       0.000
fit.means      0.909 0.063     0.003       0.001


Sunthud Pornprasertmanit

May 5, 2016, 11:43:10 PM
to lavaan
Use 

models <- measurementInvariance(MODEL, data=Data, group="gp")
models[[1]]
models[[2]]
models[[3]]
models[[4]]

You will see the models used for each type of constraint.
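[Editor's note: a short sketch of how those stored models can be inspected further, assuming `models` holds the list returned by measurementInvariance(); the index 2 (the loadings-constrained model) is just an example.]

```r
library(lavaan)

# Parameter table for the loadings-constrained model; cross-group
# equality constraints appear as shared labels in the `label` column
parTable(models[[2]])

# Full summary with fit measures for the same model
summary(models[[2]], fit.measures = TRUE)
```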



zrc2...@gmail.com

May 6, 2016, 8:01:40 PM
to lavaan
Thank you for your suggestion.
Given the absolute fit of the configural model, does the following result indicate that configural invariance does not hold (the RMSEA, .062, also exceeds the .06 cutoff)? If so, should the conclusion be that the two groups do not have the same number and type of factors and/or the same set of significant loadings? I remain a bit unclear on the interpretation of the output, specifically regarding the significance of the configural invariance test.

> models=measurementInvariance(MODEL, data=Data, group="grp")

Measurement invariance models:

Model 1 : fit.configural
Model 2 : fit.loadings
Model 3 : fit.intercepts
Model 4 : fit.means

Chi Square Difference Test

                Df   AIC   BIC  Chisq Chisq diff Df diff Pr(>Chisq)
fit.configural 804 36543 37357 1719.1
fit.loadings   831 36557 37252 1786.3     67.156      27  2.827e-05 ***
fit.intercepts 858 36562 37139 1845.3     59.009      27  0.0003552 ***
fit.means      861 36590 37154 1879.2     33.920       3  2.060e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Fit measures:

                 cfi rmsea cfi.delta rmsea.delta
fit.configural 0.918 0.062        NA          NA
fit.loadings   0.914 0.063     0.004       0.000
fit.intercepts 0.911 0.063     0.003       0.000
fit.means      0.908 0.064     0.003       0.001

> models[[1]]
lavaan (0.5-20) converged normally after  71 iterations

                                                  Used       Total
  Number of observations per group
  2                                                377         380
  1                                                209         209

  Estimator                                         ML
  Minimum Function Test Statistic             1719.103
  Degrees of freedom                               804
  P-value (Chi-square)                           0.000

Chi-square for each group:

  2                                           1058.058
  1                                            661.045

> models[[2]]
lavaan (0.5-20) converged normally after  65 iterations

                                                  Used       Total
  Number of observations per group
  2                                                377         380
  1                                                209         209

  Estimator                                         ML
  Minimum Function Test Statistic             1786.259
  Degrees of freedom                               831
  P-value (Chi-square)                           0.000

Chi-square for each group:

  2                                           1081.391
  1                                            704.869

> models[[3]]
lavaan (0.5-20) converged normally after  88 iterations

                                                  Used       Total
  Number of observations per group
  2                                                377         380
  1                                                209         209

  Estimator                                         ML
  Minimum Function Test Statistic             1845.268
  Degrees of freedom                               858
  P-value (Chi-square)                           0.000

Chi-square for each group:

  2                                           1102.874
  1                                            742.394

> models[[4]]
lavaan (0.5-20) converged normally after  89 iterations

                                                  Used       Total
  Number of observations per group
  2                                                377         380
  1                                                209         209

  Estimator                                         ML
  Minimum Function Test Statistic             1879.188
  Degrees of freedom                               861
  P-value (Chi-square)                           0.000

Chi-square for each group:

  2                                           1113.577
  1                                            765.611

Terrence Jorgensen

May 9, 2016, 6:10:45 AM
to lavaan
Given the absolute fit of the configural model, does the following result indicate that configural invariance does not hold (the RMSEA, .062, also exceeds the .06 cutoff)? If so, should the conclusion be that the two groups do not have the same number and type of factors and/or the same set of significant loadings? I remain a bit unclear on the interpretation of the output, specifically regarding the significance of the configural invariance test.

There is no simple diagnosis for why a configural model does not fit well.  It could be that cross-loadings or correlated residuals exist in one group but not the other, which would only necessitate freeing parameters while retaining the same basic structure.  But as you suggested, it might also indicate that the groups have fundamentally different factor structures.
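[Editor's note: a hedged sketch of freeing a single parameter across groups (partial invariance) via lavaan's group.partial argument, continuing from the `strong` model fitted earlier in the thread; the loading "f1 =~ x3" is a hypothetical placeholder, not taken from the poster's model.]

```r
library(lavaan)

# Keep loadings and intercepts equal across groups, but let the
# hypothetical loading f1 =~ x3 vary freely in each group
partial <- cfa(MODEL, data = Data, group = "gp",
               group.equal   = c("loadings", "intercepts"),
               group.partial = c("f1 =~ x3"))

# Does freeing that one loading significantly improve fit?
anova(strong, partial)
```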

Interestingly, the fit of the configural model does not just reflect group differences in model structure, but also overall model fit.  That is, you might have mediocre model fit, but the model might be equally (in)appropriate in both groups.  The semTools package also has a function called permuteMeasEq(), which uses permutation methods to test configural invariance without confounding that test with overall model fit.  But that method has not yet been investigated much, so it is not clear whether you should try to address the overall fit of the model before or after testing configural invariance.  If you have any theoretically driven reasons to free certain parameters or posit competing models for your data, you should probably try fitting those models before using any data-driven techniques to tweak your model.
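[Editor's note: a sketch of the permutation test mentioned above, assuming the semTools version from around 2016; argument names follow the permuteMeasEq() help page and should be checked against your installed version, and `fit.configural` is the configural model object from the thread.]

```r
library(semTools)

# Permute group membership to build a reference distribution for the
# configural model's fit, separating group differences from overall misfit
out <- permuteMeasEq(nPermute = 100, con = fit.configural)
summary(out)
```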

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

zrc2...@gmail.com

May 18, 2016, 10:49:25 PM
to lavaan
Thank you very much for your answer, and I apologize for the late reply. I am in the process of tackling the suggestions you offered. Again, thank you for your thorough response.