Negative TLI and CFI of 0 Using C2


Jake Kraska

unread,
Aug 27, 2018, 8:56:29 AM8/27/18
to mirt-package


Hi Dr Chalmers

I am having some trouble interpreting the following outcomes from the M2 analysis (have attached example data):

library(lavaan)  # cfa(), fitMeasures()
library(mokken)  # coefH(), aisp()
library(mirt)    # mirt(), M2()

# Prep Data
data <- read.csv("example.csv")
set.seed(145)
ind <- sample(c(rep(TRUE, ceiling(nrow(data)*0.5)), rep(FALSE, floor(nrow(data)*0.5))))
dep.items <- data
dep.evaluation <- dep.items[ind, ]
dep.validation <- dep.items[!ind, ]

# RMSEA = .096, CFI = .972, TLI = .959, SRMR = .027, AIC = 5118.250, BIC = 5202.793
dep.model <- 'dep =~ DEP1+DEP2+DEP3+DEP4+DEP5+DEP6+DEP7'
dep.fit <- cfa(dep.model, data=dep.evaluation, std.lv=TRUE, missing="fiml")
fitMeasures(dep.fit, c("aic","bic","chisq","df","pvalue","cfi","tli","rmsea","srmr"))

# Mokken
coefH(dep.evaluation) # no items with low H, Scale H = .699
aisp(dep.evaluation)  # all items = 1

# IRT
dep.pcm <- mirt(dep.evaluation, model=1, type="graded")
M2(dep.pcm, type = "C2") # RMSEA 0.078, SRMSR 0.039, TLI -1.151, CFI 0.000


I have installed the latest version of mirt() from the GitHub repository.
example.csv

Phil Chalmers

unread,
Aug 27, 2018, 1:20:40 PM8/27/18
to jake....@gmail.com, mirt-package
Did you ignore this warning message?

Warning message:
The following items have a large number of categories which may cause estimation issues: 1 

Use 

apply(dep.evaluation, 2, table) 

to see the table of counts. 

Also, cfa() is not fitting the same model as mirt. You need to specify that the indicator variables are ordinal in order for lavaan to estimate a categorical factor model.
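
Something along these lines should do it (an untested sketch; dep.fit.cat is just a placeholder name, and note that missing="fiml" is not available with the categorical/DWLS estimators, so you may need pairwise or listwise handling instead):

dep.fit.cat <- cfa(dep.model, data=dep.evaluation, std.lv=TRUE,
                   ordered=c("DEP1","DEP2","DEP3","DEP4","DEP5","DEP6","DEP7"))
fitMeasures(dep.fit.cat, c("chisq","df","pvalue","cfi","tli","rmsea","srmr"))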

Phil



Jake Kraska

unread,
Aug 27, 2018, 10:30:07 PM8/27/18
to mirt-p...@googlegroups.com
Hi Dr Chalmers

My apologies. When I first received your response I was very confused because I had never seen that warning message. But I have now realised that when I saved the example data I forgot to add row.names=FALSE.
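
For reference, the export should have looked something like this (illustrative only; the exact object I saved may differ):

write.csv(dep.items, "example.csv", row.names=FALSE)

Without row.names=FALSE the row names are written as an extra first column, which read.csv() then picks up as an item with a large number of "categories" and triggers the warning you mentioned.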

Thanks for the tip regarding the CFA.

I have reattached the data without that error and I hope this better shows where I am at.

> data <- read.csv("example.csv")
> set.seed(145)
> ind <- sample(c(rep(TRUE,ceiling(nrow(data)*0.5)),rep(FALSE,floor(nrow(data)*0.5))))
> dep.items <- data
> dep.evaluation <- dep.items[ind, ]
> dep.validation <- dep.items[!ind, ]
>
> # RMSEA = .054, CFI = .999, TLI = .999, SRMR = .031
> apply(dep.evaluation,2,table)
  DEP1 DEP2 DEP3 DEP4 DEP5 DEP6 DEP7
0  281  168  278  203  248  278  298
1   96  165   84  141  117   90   74
2   26   55   34   49   36   29   27
3   11   26   18   21   13   17   15
> dep.model <- 'dep =~ DEP1+DEP2+DEP3+DEP4+DEP5+DEP6+DEP7'
> dep.fit <- cfa(dep.model, data=dep.evaluation, std.lv=TRUE, ordered=c("DEP1","DEP2","DEP3","DEP4","DEP5","DEP6","DEP6","DEP7"), missing="fiml")
Warning message:
In lav_options_set(opt) :
  lavaan WARNING: information will be set to expected for estimator = DWLS
> fitMeasures(dep.fit, c("chisq", "df", "pvalue", "cfi","tli","rmsea", "srmr"))
 chisq     df pvalue    cfi    tli  rmsea   srmr
30.955 14.000  0.006  0.999  0.999  0.054  0.031
> 
> coefH(dep.evaluation) # no items with low H, Scale H = .699
$`Hij`
      DEP1           DEP2           DEP3           DEP4           DEP5           DEP6           DEP7
DEP1                 0.730 (0.043)  0.735 (0.042)  0.750 (0.044)  0.782 (0.034)  0.639 (0.052)  0.695 (0.046)
DEP2  0.730 (0.043)                 0.698 (0.043)  0.582 (0.047)  0.727 (0.042)  0.637 (0.050)  0.623 (0.048)
DEP3  0.735 (0.042)  0.698 (0.043)                 0.816 (0.028)  0.705 (0.044)  0.675 (0.045)  0.793 (0.040)
DEP4  0.750 (0.044)  0.582 (0.047)  0.816 (0.028)                 0.662 (0.049)  0.723 (0.040)  0.788 (0.041)
DEP5  0.782 (0.034)  0.727 (0.042)  0.705 (0.044)  0.662 (0.049)                 0.584 (0.052)  0.668 (0.048)
DEP6  0.639 (0.052)  0.637 (0.050)  0.675 (0.045)  0.723 (0.040)  0.584 (0.052)                 0.702 (0.046)
DEP7  0.695 (0.046)  0.623 (0.048)  0.793 (0.040)  0.788 (0.041)  0.668 (0.048)  0.702 (0.046)

$Hi
      Item H  se
DEP1  0.721  (0.031)
DEP2  0.663  (0.033)
DEP3  0.737  (0.026)
DEP4  0.716  (0.029)
DEP5  0.687  (0.034)
DEP6  0.661  (0.035)
DEP7  0.713  (0.032)

$H
Scale H  se
0.699    (0.026)

> aisp(dep.evaluation) # all items = 1
     0.3
DEP1   1
DEP2   1
DEP3   1
DEP4   1
DEP5   1
DEP6   1
DEP7   1
> 
> dep.pcm <- mirt(dep.evaluation, model=1, type="graded")
Iteration: 45, Log-Lik: -2099.638, Max-Change: 0.00009
> M2(dep.pcm, type = "C2") # RMSEA 0.078, SRMSR 0.039, TLI -1.151, CFI 0.000
            M2 df            p      RMSEA    RMSEA_5  RMSEA_95     SRMSR       TLI CFI
stats 48.84321 14 9.524931e-06 0.07762829 0.05454044 0.1017721 0.0391606 -1.150941   0

Regards

Jake
example.csv

Seongho Bae

unread,
Aug 29, 2018, 9:56:44 AM8/29/18
to mirt-package
Hi. That looks misspecified. From a structural equation modelling perspective, confirmatory item factor analysis requires the WLSMV estimator.
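
For what it's worth, once the indicators are declared ordered, lavaan should already default to WLSMV (DWLS estimation with robust corrections), which is what the "estimator = DWLS" warning above refers to. A quick way to confirm which estimator was actually used (a suggestion only):

lavInspect(dep.fit, "options")$estimator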

Seongho

Phil Chalmers

unread,
Aug 29, 2018, 10:40:10 PM8/29/18
to Jake Kraska, mirt-package
Thanks, I was able to reproduce the problem. This situation is interesting, in that I'm not entirely sure why CFI and TLI are behaving this way. When inspecting the lower-level components the residual differences are quite large (the observed and expected moments differ considerably, particularly the second-order moments), but when these are formed into the C2 statistic via the quadratic form the differences are somehow "washed out". As a result, the null model appears to fit noticeably better than I would have expected. E.g.,

# Fit a null/independence model by fixing all slope (a1) parameters to zero
sv <- mirt(dep.evaluation, model=1, pars='values')
sv$value[sv$name == 'a1'] <- 0
sv$est[sv$name == 'a1'] <- FALSE
dep.pcm.null <- mirt(dep.evaluation, model=1, pars=sv)
M2(dep.pcm.null, calcNull = FALSE)              # M2* for the null model
M2(dep.pcm.null, calcNull = FALSE, type = 'C2') # C2 for the null model
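
For context, the incremental indices are built from the target and null-model statistics roughly as follows; this is a sketch using the textbook TLI/CFI formulas rather than mirt's exact internals:

fit  <- M2(dep.pcm, calcNull = FALSE, type = 'C2')
null <- M2(dep.pcm.null, calcNull = FALSE, type = 'C2')
TLI  <- (null$M2/null$df - fit$M2/fit$df) / (null$M2/null$df - 1)
CFI  <- 1 - max(fit$M2 - fit$df, 0) / max(fit$M2 - fit$df, null$M2 - null$df, 0)

So if the null model's M2/df ratio falls below the fitted model's, TLI goes negative and CFI bottoms out at 0, which is consistent with what you're seeing.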

It's possible this is somehow a bug with C2, so I'll keep looking into the issue. For now, I would probably ignore these two measures for your data as they don't look reliable. 

Phil



Jake Kraska

unread,
Aug 30, 2018, 6:48:18 AM8/30/18
to rphilip....@gmail.com, mirt-p...@googlegroups.com
Thanks Phil. That's helpful; I thought something weird was going on and it is good to see you confirm that.

I'll keep an eye out for updates to the package.

Jake

Phil Chalmers

unread,
Aug 30, 2018, 5:25:54 PM8/30/18
to Jake Kraska, mirt-package
I located the issue with the C2 values: it looks like a few off-diagonal components were missing from the collapsed moments of the data matrix prior to computing the quadratic-form fit statistic. The fix is on the dev version on GitHub. I now get the following with your code, which looks more reasonable (the IRT model appears to fit somewhat worse than your SEM version):

           M2 df           p     RMSEA    RMSEA_5   RMSEA_95     SRMSR       TLI       CFI
stats 34.0284 14 0.002042812 0.0588551 0.03382035 0.08421807 0.0391606 0.6856893 0.7904596

So the comparison to the null model is a little more pessimistic than the SEM approximation (this is a trend I've seen with M2* as well). Thanks again for the report. 

Phil

Phil Chalmers

unread,
Aug 30, 2018, 5:41:55 PM8/30/18
to Jake Kraska, mirt-package
Sorry, that was output from a previous dev build (my mistake; I'm using Windows right now and am less familiar with the package dev workflow). Here's the correct output from the most recent dev version, which matches what you are seeing from lavaan.

> M2(dep.pcm, type = "C2") # RMSEA 0.078, SRMSR 0.039, TLI -1.151, CFI 0.000
            M2 df            p      RMSEA    RMSEA_5   RMSEA_95     SRMSR    TLI       CFI
stats 37.91612 14 0.0005352327 0.06431411 0.04009687 0.08924405 0.0391606 0.9374 0.9582667

Thanks again.

Phil

Jake Kraska

unread,
Aug 31, 2018, 12:19:51 AM8/31/18
to Phil Chalmers, mirt-p...@googlegroups.com
Thanks Phil. I really appreciate you taking the time to look into this.

Jake

Phil Chalmers

unread,
Sep 18, 2018, 9:45:07 AM9/18/18
to Jake Kraska, mirt-package
Hi Jake, 

It looks like another small tweak was required for C2, which has now been patched. After running the new version through a simulation, the Type I error rates are behaving as expected, and the outputs seem to match the flexMIRT program (so I'm told; I don't actually use that software). Given these recent updates, here's what I get in your example:

M2(dep.pcm, type = "C2")
            M2 df           p      RMSEA    RMSEA_5  RMSEA_95     SRMSR       TLI       CFI
stats 50.25006 14 5.54462e-06 0.07917997 0.05617787 0.1032509 0.0391606 0.9842697 0.9895131

This should be the last change required in this fit statistic, and thanks for being patient. 

Phil

Jake Kraska

unread,
Sep 18, 2018, 9:56:55 AM9/18/18
to Phil Chalmers, mirt-package
Thanks for the follow-up, Phil.

