Problem with the Scaled Chi-Square Difference Test (method = "satorra.bentler.2001")


olivier pahud

Aug 14, 2017, 3:56:06 AM
to lavaan
Hey there

I'm using anova() in lavaan to get the scaled chi-square difference test with method = "satorra.bentler.2001".
However, when lavaan supposedly calculates the adjusted chi-square difference, it always yields the unscaled results. It reports the unscaled Chisq for each model and the unscaled Chisq difference... Shouldn't it report the scaled ones?
Additionally, no matter which method I specify, anova() always yields the same result. Is that expected?

Thanks in advance.
Olivier

Terrence Jorgensen

Aug 14, 2017, 7:08:12 AM
to lavaan
It reports the unscaled Chisq and the unscaled Chisq difference... Shouldn't it report the scaled ones?

Yes, it reports the unscaled chi-squared statistic for each model, because that is how the test is performed: the adjustment is applied to the difference between the models' naïve chi-squared values, so the difference statistic in the output is the scaled one. The scaled overall test statistic for each model is irrelevant here; that is not the test you are performing when you compare models.

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

olivier pahud

Aug 14, 2017, 8:46:50 AM
to lavaan
OK, thanks for the clarification... However, I still have a problem: lavaan shows me a different scaled difference statistic than the one I get from the Satorra-Bentler formula at http://www.statmodel.com/chidiff.shtml (see "Difference Testing Using Chi-Square").

Nested Model:
MLM Chi-square: 56.818 with 38 df (or ML = 59.035)
Scaling correction: 1.039

Comparison model:
MLM Chi-square: 56.388 with 37 df (or ML = 58.976)
Scaling correction: 1.046

Scaled difference with formula: .06673
Scaled difference with lavaan: .07472

Why the difference?
Rounding?
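For reference, the hand calculation from the statmodel.com page can be reproduced with a short script (shown here in Python purely for illustration; all inputs are the rounded values quoted above). Rounding the chi-squares and scaling corrections to a few decimals propagates into the scaled difference:

```python
# Satorra-Bentler (2001) scaled chi-square difference test,
# computed from the rounded values quoted above (illustration only).
d0, c0 = 38, 1.039   # nested model: df and scaling correction
d1, c1 = 37, 1.046   # comparison model: df and scaling correction
t0_ml, t1_ml = 59.035, 58.976      # unscaled (ML) chi-squares
t0_mlm, t1_mlm = 56.818, 56.388    # scaled (MLM) chi-squares

# Scaling correction for the difference test
cd = (d0 * c0 - d1 * c1) / (d0 - d1)        # ~0.780

# Scaled difference from the reported ML chi-squares
trd_ml = (t0_ml - t1_ml) / cd               # ~0.0756

# Scaled difference recovering the ML chi-squares as MLM * correction
trd_mlm = (t0_mlm * c0 - t1_mlm * c1) / cd  # ~0.0667

print(round(cd, 3), round(trd_ml, 4), round(trd_mlm, 4))
```

With full-precision input both routes agree; with the rounded values they already differ in the third decimal, which is the same order as the gap between the hand-computed .06673 and lavaan's .07472.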

Terrence Jorgensen

Aug 15, 2017, 7:52:39 AM
to lavaan
Why the difference? Rounding?

If you want to check that possibility, you can print the fit measures with greater precision:

print(fitMeasures(fit0), nd = 6)

olivier pahud

Aug 15, 2017, 8:42:51 AM
to lavaan
Ta-da! Thank you very much!!