Rejecting a Model; Interpreting Poor Model Fit

Jeff Clement

Nov 10, 2019, 9:57:11 PM
to lavaan
Hello All,

This question isn't directly related to lavaan, though I did execute the analysis in lavaan. My question relates to SEM more broadly and I would appreciate any thoughts the community has.  Specifically, I'm looking for advice on how to discuss/describe a poorly fitting model, particularly with regard to the "absence of evidence is not evidence of absence" problem.

My project is an early work exploring, in a new context, a trust concept that has been well documented (and explored with SEM) in other settings. I have theoretical reasons, and qualitative findings from interviews, to believe the model doesn't apply in this context (i.e., many people literally said, without being led on, "it doesn't work like that in this setting"). I set it up basically as a competing-hypothesis scenario; my alternative looks good quantitatively, and the qualitative feedback from participants explaining their rationale lines up with the new hypothesis.

I actually want to reject this model!  So I ran an experiment (well powered according to the Satorra and Saris 1985 method), and fit is pretty bad: RMSEA = 0.13, CFI = 0.68, chi-square p < .001.  Basically, I'd like to say, "The model, similar to that used in other contexts, was a poor fit in this context."  A few of the coefficients are statistically significant, but my thinking is that if the model fit is bad, it's somewhat dangerous to interpret significance, yes?  My concern is someone saying, "Well, look, these paths are significant, so even if the model fit is bad we shouldn't dismiss it."
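For reference, here is roughly how I pulled those numbers in lavaan (the model syntax and data name below are just placeholders, not my actual measures):

library(lavaan)

# Placeholder model syntax standing in for the trust model
model <- '
  trust  =~ t1 + t2 + t3
  intent =~ i1 + i2 + i3
  intent ~ trust
'

fit <- sem(model, data = mydata)   # mydata = placeholder data frame

# Global fit indices of the kind reported above
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea"))

# Path-level estimates with z-tests and p-values
parameterEstimates(fit, standardized = TRUE)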

Thanks in advance!  

Edward Rigdon

Nov 11, 2019, 9:41:59 AM
to lav...@googlegroups.com
So some parameter estimates are "significant"? What does that mean? According to the American Statistical Association, it doesn't mean anything. "Significant" parameter estimates do not validate a model.

The theory behind estimation is that IF the model is correct in the population AND distributional assumptions hold, THEN parameter estimates are unbiased. Parameter estimates may still be unbiased even if the model is wrong or assumptions fail, but there is no basis for believing that. Indeed, even if the larger model is wrong, some parameters may be a part of both the wrong model and some hypothetical correct model.
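As a quick illustration of this point (not the original analysis, just a sketch using lavaan's simulateData), one can generate data from one model and fit a misspecified one; the retained paths typically still test as "significant" even though global fit is poor:

library(lavaan)
set.seed(1)

# Population model: each outcome depends on two predictors,
# and the predictors are correlated
pop <- '
  y1 ~ 0.5*x1 + 0.4*x2
  y2 ~ 0.5*x2 + 0.4*x3
  y1 ~~ 0.3*y2
  x1 ~~ 0.4*x2
  x2 ~~ 0.4*x3
  x1 ~~ 0.2*x3
'
dat <- simulateData(pop, sample.nobs = 500)

# Misspecified analysis model: omits y1 ~ x2, y2 ~ x3,
# and the residual covariance between y1 and y2
wrong <- '
  y1 ~ x1
  y2 ~ x2
'
fit <- sem(wrong, data = dat)

fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea"))  # poor global fit
parameterEstimates(fit)  # yet y1 ~ x1 and y2 ~ x2 are typically "significant"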

Attachment: Wasserstein et al. (2019), "Moving to a World Beyond p < 0.05," The American Statistician.

Jeff Clement

Nov 11, 2019, 10:13:36 AM
to lavaan
Thank you for the response and the reference, Dr. Rigdon. This helps put my gut feeling into words.