Significance for Modification Indices


favstats

unread,
Aug 8, 2017, 10:16:35 PM8/8/17
to lavaan
Hello everyone! 

I hope this is an easy one, but I can't find it anywhere...

I am looking for a way to see the significance of my estimated modification indices (i.e., from the modificationindices(fit) function).
Is there a way to get the estimated p-values?

Thanks a lot in advance!

Edward Rigdon

unread,
Aug 8, 2017, 11:11:18 PM8/8/17
to lav...@googlegroups.com
They are distributed as chi-square with 1 df.


favstats

unread,
Aug 8, 2017, 11:23:59 PM8/8/17
to lavaan
So how would I be able to get a p-value for a given modification index? I'm sorry if that sounds trivial.

Terrence Jorgensen

unread,
Aug 9, 2017, 10:17:20 AM8/9/17
to lavaan
So how would I be able to get a p-value for a given modification index?

The "mi" column is the chi-squared value, so you can find its probability using pchisq(). As Ed said, these are 1-df tests, so the critical value at alpha = 5% is 3.84 (without adjusting for multiple testing). You should control for the number of tests you are doing:

library(lavaan)
example(modindices)  # runs the help-page example, which creates the fitted object 'fit'
MIs <- modindices(fit)
MIs$pvalue <- pchisq(MIs$mi, df = 1, lower.tail = FALSE) # naïve (unadjusted) p value
## possibly save a subset of rows you are interested in, then...
MIs$bonf.p <- p.adjust(MIs$pvalue, method = "bonferroni") # keeps familywise Type I error rate below alpha
MIs$FDR.p <- p.adjust(MIs$pvalue, method = "fdr") # keeps false discovery rate below alpha
MIs
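As a side note, the 3.84 cutoff can be verified directly in base R; the MI value of 5.2 below is just an illustration, not output from a real model:

```r
# Critical value of a 1-df chi-square test at alpha = .05
qchisq(0.95, df = 1)                      # about 3.84

# p value for a hypothetical modification index of 5.2
pchisq(5.2, df = 1, lower.tail = FALSE)   # about 0.023, i.e. significant before adjustment
```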

A smarter, less conservative approach is to decide ahead of time which modification indices you would be willing to free for theoretically justifiable reasons. Then you can save that subset of rows of the modindices() output, and the adjustments to the p values will not be so conservative as to cost you all your power. Thinking about your tests in advance is also a way to prevent inadvertently letting the data make decisions for you, a practice that has been shown to consistently fail to lead researchers to a better-fitting model that also generalizes to new samples from the same population.
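A minimal base-R sketch of that pre-selection strategy, using made-up MI values in place of real modindices() output (the parameter labels and numbers here are hypothetical):

```r
# Toy stand-in for a modindices(fit) data frame (values are invented for illustration)
MIs <- data.frame(
  lhs = c("visual", "visual", "textual"),
  op  = "=~",
  rhs = c("x7", "x9", "x3"),
  mi  = c(18.6, 4.9, 0.8)
)

# Keep only the modifications chosen in advance on theoretical grounds
candidates <- subset(MIs, rhs %in% c("x7", "x9"))
candidates$pvalue <- pchisq(candidates$mi, df = 1, lower.tail = FALSE)

# Bonferroni now divides alpha across 2 planned tests, not the full MI list
candidates$bonf.p <- p.adjust(candidates$pvalue, method = "bonferroni")
candidates
```

With only two planned tests, the Bonferroni penalty is mild; adjusting across every row of the full MI table would be far more punishing.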

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam
