Model and Item Fit Using pars


Leigh

Feb 27, 2019, 3:17:08 AM2/27/19
to mirt-p...@googlegroups.com
Hi Phil and MIRT Community, 

I am trying to fix some of the parameters in one of my models because I'd like to test parameters from a previous model on new data. I saw that you can set the est column to FALSE so the parameters won't change even when you run a new model. When I ran the second model with the parameters from the first model, the M2 values changed drastically, and I do not know why. Any suggestions on why this is happening? I noticed that in mod, df = 1, and in mod2, df = 36.
 
data <- read.csv( "ItemFit_TestData.csv")
data <- data[,c(2:11)]
data <- na.omit(data)

ItemType <- c("2PL", "2PL", "graded","graded","graded","graded","graded", "graded")

mod<- mirt(data[,c(3:ncol(data))],
                        model = 1,
                        itemtype=ItemType,
                        method = "EM", 
                        survey.weights = data$`Individual_MIRTData$V258.x`)
summary(mod)
mod@Fit
M2(mod, type="M2*") # the model seems to have decent global fit values RMSEA = 0.09, CFI=0.99, TLI =0.94
itemfit(mod) #none of the items seem to fit

itemfit(mod, empirical.plot = 1) #visually the items seem to fit
itemfit(mod, empirical.plot = 3) #visually the items seem to fit
itemfit(mod, empirical.plot = 5) #visually the items seem to fit

values<- mod2values(mod) #made this to determine how many parameters I needed
values$est <- FALSE #so that model uses exact same parameters

mod2<- mirt(data[,c(3:ncol(data))],
            model = 1,
            itemtype=ItemType, 
            pars=values,
            survey.weights = data$`Individual_MIRTData$V258.x`)

mod2@Fit #same values as mod
M2(mod2, type="M2*") #now the df is 36 instead of 1
itemfit(mod2)
itemfit(mod2, empirical.plot = 1)

The other slightly confusing part of this example is the difference between the item fit and the model fit.  The item fit visually looks good, but the S_X2 values imply poor fit.  What would cause this difference? 

Any help would be appreciated!

Thank you!
Leigh
ItemFit_TestData.csv

Phil Chalmers

Feb 27, 2019, 1:38:15 PM2/27/19
to Leigh, mirt-package
Setting $est all to FALSE implies that no parameters are estimated in the model; hence, you have many more degrees of freedom, because you are treating these parameters as fixed and known a priori. Since statistics like M2 are based on an orthogonal complement of the estimated parameters, setting everything to FALSE misrepresents the situation: you really meant for these elements to be estimated, not known. Ultimately, this implies less sampling variability than there really was. 

If you insist on doing things like this, don't set $est to FALSE; just leave those elements as they are. Instead, pass TOL = NaN or TOL = NA to force the model to converge instantly (the difference between the two is whether the log-likelihood is evaluated or not). Then M2 and other fit statistics should behave more as expected. HTH. 
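As a minimal sketch of this approach (assuming `mod` was calibrated on the original data; `newdata` is a placeholder name for a data frame of new responses on the same items, not something from the thread):

```r
# Sketch: carry the calibrated parameters over without re-estimating them.
library(mirt)

values <- mod2values(mod)   # parameter table from the calibrated model;
                            # leave the est column as-is (do NOT set it to FALSE)

mod_fixed <- mirt(newdata,            # 'newdata' is hypothetical
                  model = 1,
                  itemtype = ItemType,
                  pars = values,
                  TOL = NaN)          # terminate instantly; parameters stay at 'values'

M2(mod_fixed, type = "M2*")   # df should now reflect the estimated parameters
itemfit(mod_fixed)
```

Because est stays TRUE, the fit statistics account for the sampling variability of the estimated parameters, rather than treating them as known constants.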

Phil



Leigh

Feb 27, 2019, 3:46:38 PM2/27/19
to mirt-p...@googlegroups.com
Hi Phil!

Thank you for the clarification! I will use TOL=NA.  

Do you have any thoughts on the difference between the empirical item plots, which appear to fit the data, and the significant S_X2 item-fit statistics from itemfit()? 

I suspect sparseness, because there are so many potential item response patterns, but even when I reduce the model to a smaller subset of items to improve the cell frequencies, I am still unable to show good item fit. 

Example:
freq <- table(data[,c(3:5)])

mod7<- mirt(data[,c(3:5)],
            model = 1,
            itemtype=c("2PL", "2PL", "graded"),
            method = "EM",
            survey.weights = data$`Individual_MIRTData$V258.x`)

mod7@Fit

M2(mod7,type = "M2*")

itemfit(mod7, which.items = 1, empirical.plot = 1)
itemfit(mod7,which.items = 1,fit_stats = "S_X2")


Best, 
Leigh

Keri Simmons

Feb 27, 2019, 5:01:41 PM2/27/19
to Leigh, mirt-package
Hi Leigh, 
You want a non-significant S-X2, as it indicates good item fit. 
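For instance (a hypothetical check, assuming a fitted model `mod`), the p-values to inspect come back in the p.S_X2 column of itemfit():

```r
fit <- itemfit(mod)          # data frame with S_X2, df.S_X2, RMSEA.S_X2, p.S_X2
fit[fit$p.S_X2 < 0.05, ]     # items flagged as misfitting (significant S-X2)
```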

Cheers!

On Feb 27, 2019, at 3:46 PM, Leigh <laal...@gmail.com> wrote:

Hi Phil!

Thank you for the clarification! I will use TOL=NA.  

Do you have any thoughts on the difference between empirical item plot appearing to fit the data, but a non-significant item fit from M2() using S_X2? 


Leigh

Feb 27, 2019, 6:36:22 PM2/27/19
to mirt-package
Hi Keri!

Thanks! I made a typo. All my items have p-values of 0.

Best,
Leigh 

Keri Simmons

Feb 27, 2019, 6:55:31 PM2/27/19
to Leigh, mirt-package
Do you have a large sample? If so, it could be that small differences between expected and observed response frequencies are producing significant p-values even though the magnitude of misfit is not that big, per the empirical item plots (and the S-X2 RMSEA).
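A quick base-R illustration of that point (made-up cell proportions, not from this dataset): with a fixed, tiny discrepancy between observed and model-implied proportions, a chi-square statistic grows linearly with N, so its p-value heads toward 0 even though the misfit itself is negligible.

```r
p_obs <- c(0.26, 0.24, 0.25, 0.25)   # observed proportions (hypothetical)
p_exp <- c(0.25, 0.25, 0.25, 0.25)   # model-implied proportions

# The discrepancy is constant, but X2 = N * sum((p_obs - p_exp)^2 / p_exp)
# scales with N, so the p-value collapses as the sample grows.
for (N in c(500, 5000, 50000)) {
  X2 <- N * sum((p_obs - p_exp)^2 / p_exp)
  cat(sprintf("N = %6d  X2 = %6.2f  p = %.2g\n",
              N, X2, pchisq(X2, df = 3, lower.tail = FALSE)))
}
```

At N = 500 the p-value is around 0.94; by N = 50000 the same proportions give p on the order of 1e-8, which is why a large-N S-X2 can be "significant" while the empirical plots and RMSEA look fine.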