Goodness of Fit and Model Adaptation in lavaan


steiner...@gmail.com

Feb 7, 2018, 12:05:42 PM
to lavaan
Hi all,

I am currently working on a theory-building mixed-methods paper in which I estimate a structural equation path model based on suggestive paths and links drawn from qualitative in-depth interviews. Based on the existing literature and the insights from my qualitative analyses, I have built a structural equation model composed of the following:
1) One binary predictor variable (non-latent)
2) One latent outcome variable with 3 items (ordinal scale)
3) One latent variable with 7 items (ordinal scale)
4) One latent variable with 2 items (continuous scale)
5) One latent variable with 14 items (ordinal scale)
6) One latent variable with 10 items (ordinal scale)

The sample size is 522 and, for each latent variable, the individual item loadings are highly significant. However, goodness of fit for the full SEM (as well as for some of the individual measurement models) is not optimal. According to the CFI, model fit is very good (CFI = 0.986), but according to the RMSEA (0.103) and SRMR (0.104) it is rather poor, and the chi-square test (χ² = 323404.72, df = 754, p < .001) also indicates poor fit.

I have already inspected modification indices and correlated some error terms, but model fit could not be improved substantially. The model is firmly grounded in theory and corresponds to my qualitative findings, so I am hesitant to discard it altogether. I was wondering whether you have any advice on how to address this issue and could share your thoughts on the following questions:
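For reference, the modification-index inspection described above can be sketched in lavaan as follows. This is only an illustration: the model and data here are lavaan's bundled Holzinger & Swineford example, standing in for the actual (unshown) model from this thread.

```r
library(lavaan)

# Illustrative three-factor CFA on lavaan's built-in example data;
# the thread's actual model and data are not shown.
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)

# Largest modification indices first: each row suggests one extra
# parameter (e.g. a residual covariance) and the expected drop in
# the chi-square statistic if it were freed.
mi <- modindices(fit, sort. = TRUE, maximum.number = 10)
mi
```

A suggested residual covariance would then be added to the model syntax as, e.g., `x7 ~~ x8` — but, as discussed in this thread, only when it is theoretically defensible.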

1) Do you think the above fit statistics are 'too bad' to include the model in a paper/send it to a journal?
2) Is there any reason why model fit looks good according to the CFI but poor according to the RMSEA and SRMR? Where does this difference come from?
3) Do you have any other advice on how to possibly refine the model without neglecting theoretical foundations?

Thank you!

Terrence Jorgensen

Feb 12, 2018, 10:43:08 AM
to lavaan
1) Do you think the above fit statistics are 'too bad' to include the model in a paper/send it to a journal?

Depends on your reviewers.

2) Is there any reason why model fit looks good according to the CFI but poor according to the RMSEA and SRMR? Where does this difference come from?

Different indices tell you different things.  CFI is telling you that the hypothesized model is WAY better than a baseline model that posits zero correlations among the variables.  RMSEA is telling you that the misfit per degree of freedom, spread across all of the model's df, is higher than is typically desired.  SRMR is telling you that the model-implied (Pearson, polychoric, or polyserial) correlations differ by more than 0.1, on average, from the "observed" ones (or the estimated ones, for categorical data).
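The indices discussed above can be pulled from a fitted lavaan object in one call. A minimal sketch, again using lavaan's bundled Holzinger & Swineford data as a stand-in for the thread's actual model:

```r
library(lavaan)

# Illustrative CFA; the real model in this thread is not shown.
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)

# Request exactly the fit measures discussed above in one call
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea", "srmr"))
```

With ordinal indicators estimated via WLSMV, the robust/scaled variants (e.g. `"cfi.scaled"`, `"rmsea.scaled"`) would typically be requested instead.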

3) Do you have any other advice on how to possibly refine the model without neglecting theoretical foundations?

Global fit measures are not very informative about why your model fails, so look at local fit measures.  I like looking at the correlation residuals (SRMR is essentially the average of their absolute values, which by itself is uninformative), because they tell you which specific bivariate relationships are under- or overestimated by the fitted model:

# correlation residuals: observed minus model-implied correlations
resid(fit, type = "cor")
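The one-liner above can be extended into a quick scan for the worst-fitting pairs. A sketch under the same stand-in model (lavaan's Holzinger & Swineford example; the 0.1 cutoff is only a common rule of thumb, not a lavaan default):

```r
library(lavaan)

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)

# Residual correlation matrix: observed minus model-implied.
# resid() returns a list; the $cov element holds the matrix.
res <- resid(fit, type = "cor")$cov

# Flag variable pairs whose residual correlation exceeds 0.1 in
# absolute value (lower triangle only, to avoid duplicates).
which(abs(res) > 0.1 & lower.tri(res), arr.ind = TRUE)
```

Large positive residuals mark relationships the model underestimates; large negative ones mark relationships it overestimates.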

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam
