modification indices in a second-order model

Emmanuel W

Oct 4, 2018, 12:27:34 PM
to lavaan
Hello, 

I have a new question.

In a model with a second-order factor (factor) defined by 4 first-order latent variables (lat1, lat2, lat3 and lat4), I examine the association between factor and an outcome.

I would like to know whether adding a direct effect of one of the 4 first-order latent variables on the outcome would improve the model. But I don't know how to get the modification indices for the 4 first-order latent variables when the only regression is "outcome~factor".

Thank you for your help!

Emmanuel W

Oct 5, 2018, 9:44:50 AM
to lavaan
I think I may have found a solution: shouldn't we add direct effects fixed to zero, so that the modification indices corresponding to these effects become available?

model <- '
  # first-order factors
  lat1 =~ CESD3 + CESD6 + CESD9 + CESD10 + CESD14 + CESD17 + CESD18
  lat2 =~ CESD4 + CESD8 + CESD12 + CESD16
  lat3 =~ CESD1 + CESD2 + CESD5 + CESD7 + CESD11 + CESD13 + CESD20
  lat4 =~ CESD15 + CESD19

  # second-order factor
  factor =~ lat1 + lat2 + lat3 + lat4

  # structural part: outcome regressed on the second-order factor
  outcome ~ factor

  # direct effects, labelled and fixed to zero via equality constraints
  outcome ~ a*lat1
  outcome ~ b*lat2
  outcome ~ c*lat3
  outcome ~ d*lat4
  a == 0
  b == 0
  c == 0
  d == 0
'
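
For reference, a minimal sketch of how this model would then be fitted (the data-frame name "dat" is just a placeholder for my data):

library(lavaan)

# fit the constrained model; "dat" is a placeholder for the data frame
# containing the CESD items and the outcome
fit2 <- sem(model, data = dat)
summary(fit2, fit.measures = TRUE)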

I get many results but since the indices are huge, I don't know if I'm on the wrong track.

Terrence Jorgensen

Oct 6, 2018, 4:55:46 PM
to lavaan
I don't know how to get the modification indices for the 4 first-order latent variables when the only regression is "outcome~factor".

Use the lavTestScore() function, specifying the fixed parameter(s) you want to test using the add= argument.
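
For example, something along these lines should work, starting from the model that contains only the outcome ~ factor regression (the object name "fit" and the data-frame name "dat" are placeholders):

library(lavaan)

# second-order model WITHOUT the direct effects
fit <- sem('lat1   =~ CESD3 + CESD6 + CESD9 + CESD10 + CESD14 + CESD17 + CESD18
            lat2   =~ CESD4 + CESD8 + CESD12 + CESD16
            lat3   =~ CESD1 + CESD2 + CESD5 + CESD7 + CESD11 + CESD13 + CESD20
            lat4   =~ CESD15 + CESD19
            factor =~ lat1 + lat2 + lat3 + lat4
            outcome ~ factor', data = dat)

# score test for adding the four direct effects: $test is the joint (4-df)
# test, $uni gives a 1-df test for each added parameter separately
lavTestScore(fit, add = 'outcome ~ lat1
                         outcome ~ lat2
                         outcome ~ lat3
                         outcome ~ lat4')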

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Emmanuel W

Oct 8, 2018, 7:54:20 AM
to lavaan
Thank you very much Terrence!

I would, however, like to ask you a few more questions, if I may:

1) I get this message with my results:

Warning message:
In lavTestScore(fit, add = test.mi) :
  lavaan WARNING: se is not `standard'; not implemented yet; falling back to ordinary score test


What does it mean and, above all, is it problematic?

2) I am interested in the direct effects between my latent variables (and the items that constitute them) and my outcome (aq_comport_tcstatutglobal_i). Here is an example of my results with two latent variables:

                              lhs op rhs     X2 df p.value
1 aq_comport_tcstatutglobal_i~dep ==   0  0.000  1   1.000
2 aq_comport_tcstatutglobal_i~som ==   0 29.743  1   0.000

Why is op "==" and rhs "0", and not "~" and "dep" or "som", respectively?


3) Finally, I don't understand the difference from the modification indices function (modificationindices()).
Why does it not give the result for the "outcome ~ variable" regression, but only for the "variable ~ outcome" regression?
For other relationships, should the two functions give the same results (I have the impression that this is not the case...)?

Thanks again, really, for your help!

Terrence Jorgensen

Oct 11, 2018, 6:09:29 AM
to lavaan
  lavaan WARNING: se is not `standard'; not implemented yet; falling back to ordinary score test

What does it mean and, above all, is it problematic?

It means the statistic is not robust, so you cannot trust the p value.  You are better off actually estimating the augmented model and calculating the LRT or Wald test, both of which are capable of offering a robust statistic.
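
For example, assuming a robust estimator (e.g., estimator = "MLR") and using placeholder object names (fit0 = the model without the direct effects, fit1 = the same model with the four direct effects freely estimated and labelled a-d):

# (scaled) likelihood-ratio test comparing the two nested models
lavTestLRT(fit0, fit1)

# or a Wald test of the four labelled direct effects in fit1
lavTestWald(fit1, constraints = 'a == 0
                                 b == 0
                                 c == 0
                                 d == 0')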

2) I am interested in the direct effects between my latent variables (and the items that constitute them) and my outcome (aq_comport_tcstatutglobal_i). Here is an example of my results with two latent variables:

                              lhs op rhs     X2 df p.value
1 aq_comport_tcstatutglobal_i~dep ==   0  0.000  1   1.000
2 aq_comport_tcstatutglobal_i~som ==   0 29.743  1   0.000

Why is op "==" and rhs "0", and not "~" and "dep" or "som", respectively?

Because those are the hypothesized constraints in your fitted model that you are testing (i.e., are those slopes == 0?).  You are not testing a parameter, you are testing a constraint, and the constraint could just as well have been that the parameter == 1 (or that it equals the value of another estimated parameter).
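
For instance, in the constrained model posted earlier in this thread (direct effects labelled a-d and fixed with a == 0, ..., d == 0), the same kind of tests should be obtainable by releasing those constraints (fit2 is a placeholder for that fitted model):

# by default, lavTestScore() tests releasing all equality constraints
# present in the fitted model
lavTestScore(fit2)

# or only a subset, referring to constraints by their position in the
# constraint list (here: the first two, a == 0 and b == 0)
lavTestScore(fit2, release = 1:2)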

3) Finally, I don't understand the difference from the modification indices function (modificationindices()).
Why does it not give the result for the "outcome ~ variable" regression, but only for the "variable ~ outcome" regression?
For other relationships, should the two functions give the same results (I have the impression that this is not the case...)?

Modification indices are just 1-parameter score tests for all the fixed(-to-zero) parameters in your fitted model that would be reasonable to release.  It is not easy to program "reasonability" as a default behavior, so sometimes modindices() can seem to suggest freeing parameters that make no sense.
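
For example (fit is again a placeholder for the fitted model):

# one row per fixed parameter that could be freed, with its modification
# index in the "mi" column
mi <- modindices(fit)

# keep only the suggested regression paths and sort by the index
mi_reg <- mi[mi$op == "~", ]
mi_reg[order(mi_reg$mi, decreasing = TRUE), ]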

Emmanuel W

Oct 24, 2018, 11:01:19 AM
to lavaan
Thanks again Terrence!

But with regard to the warning, I didn't understand whether it concerned the total score test or all the individual p-values (each univariate score test). Furthermore, in order to reduce the risk of including direct effects that are significant only because of multiple testing, we would like to keep only the significant direct effects with a modification index greater than 10. Thus, I would only need the chi-square values of my univariate tests. What would be the appropriate parameter to use under these conditions?

Emmanuel

Terrence Jorgensen

Oct 26, 2018, 5:13:40 AM
to lavaan
I didn't understand whether it concerned the total score test or all the individual p-values (each univariate score test).

Both.  They are all from the same model fit to the same data, so they would all need to be robust (but there is no straightforward way to do that without actually fitting a less restricted model to check whether an effect should be added).

Furthermore, in order to reduce the risk of including direct effects that are significant only because of multiple testing, we would like to keep only the significant direct effects with a modification index greater than 10. Thus, I would only need the chi-square values of my univariate tests. What would be the appropriate parameter to use under these conditions?

No, modification indices are chi-squared statistics, not standardized effect sizes.  An arbitrary cutoff of "10 is large" cannot even be applied across different sample sizes, much less when the values don't even follow a chi-squared distribution.  A robust statistic has a different value, so applying 10 to a robust and a nonrobust value wouldn't mean the same thing.  If you are interested in an effect-size approach, that is what the EPCs (expected parameter changes) are for.
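
For example, modindices() returns the EPC next to each index, and lavTestScore() can also return EPCs via its epc= argument (whether that argument is available may depend on your lavaan version):

# unstandardized (epc) and completely standardized (sepc.all) expected
# parameter changes, next to the modification index (mi)
mi <- modindices(fit)
mi[ , c("lhs", "op", "rhs", "mi", "epc", "sepc.all")]

# lavTestScore() with EPCs for the added direct effects
lavTestScore(fit, add = 'outcome ~ lat1
                         outcome ~ lat2', epc = TRUE)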



Terrence D. Jorgensen
Assistant Professor, Methods and Statistics