I am calibrating a large pool of items with a substantial amount of data missing completely at random (MCAR). I have no trouble estimating a one-factor 2PL, or a one-factor 3PL if I set priors on the guessing parameters, but I cannot get the three-factor 3PL to converge within 500 iterations even with priors set. I've tried fixing the guessing parameters to the values estimated in the one-factor model, but I don't think I'm doing it correctly: the model still isn't converging, and when I inspect the estimates, the guessing parameters don't match the values I fixed them to.
Could you please take a look at my syntax and let me know if I've made an error somewhere? Thanks!
cmodel <- mirt.model('
F1 = 1-197
F2 = 198-383
F3 = 384-554
COV = F1*F2*F3')

guessfix <- c(0.22, ..[554 values]..., 0.13)

cmod3F3PLfix <- mirt(mirtData, cmodel, method = 'QMCEM', SE = TRUE,
                     guess = guessfix)
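In case it helps diagnose where I've gone wrong: my reading of the documentation is that truly fixing item parameters goes through the `pars` table rather than the `guess` argument, so this is a sketch of what I believe the intended workflow looks like (the `pars = 'values'` steps are my own understanding of the manual, not something I've gotten working yet):

```r
library(mirt)

# Request the full parameter table with starting values instead of fitting
pars <- mirt(mirtData, cmodel, itemtype = '3PL', pars = 'values')

# Rows named 'g' are the pseudo-guessing parameters: overwrite their
# values and flag them as not estimated
pars$value[pars$name == 'g'] <- guessfix
pars$est[pars$name == 'g']   <- FALSE

# Refit with the modified table; g should stay fixed at guessfix
cmod3F3PLfix <- mirt(mirtData, cmodel, itemtype = '3PL', method = 'QMCEM',
                     SE = TRUE, pars = pars)
```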
--
Thank you for the advice, Phil. Even after fixing the guessing parameters and passing the 2PL estimates as starting values, I still can't get the 3PL to converge within 500 iterations. Watching the max-change values, it doesn't look like increasing the number of iterations would help much; the solution appears to be stuck in a local maximum.
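For completeness, this is how I've been raising the cycle limit and loosening the criterion while experimenting (my understanding is that the default EM cycle cap is 500 and that `TOL` is the max-change criterion, but please correct me if I'm misreading the `technical` list):

```r
library(mirt)

# Raise the EM cycle cap and set the max-change convergence tolerance
cmod3F3PL <- mirt(mirtData, cmodel, itemtype = '3PL', method = 'QMCEM',
                  SE = TRUE, TOL = 1e-4,
                  technical = list(NCYCLES = 2000))
```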
Using the 2PL, my fit statistics suggest that the three-factor model fits better than the one-factor model, but the factors are highly correlated, and I assume this is why my 3PL won't converge. Do you have any guidance on factor correlations when choosing between MIRT and unidimensional IRT? It seems that if the factors are too highly correlated, the added complexity of MIRT estimation might not be worth the effort.
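In case it's useful, this is how I've been comparing the fits and pulling the latent correlations (the object names `cmod1F2PL` and `cmod3F2PL` are just my fitted one- and three-factor 2PL models, and I'm assuming `coef(..., simplify = TRUE)` returns the latent covariance matrix in `$cov`):

```r
library(mirt)

# Likelihood-ratio / information-criterion comparison of the nested fits
anova(cmod1F2PL, cmod3F2PL)

# Latent covariance matrix from the 3-factor fit, rescaled to correlations
Phi <- coef(cmod3F2PL, simplify = TRUE)$cov
cov2cor(Phi)
```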
Hello again, Phil,
I really appreciate all the help you've provided so far. I have two more questions, and then hopefully I won't need to bug you anymore. I've asked two of my psychometrician colleagues about the first issue, and we are all stumped:
I ran my model of 545 items and 3 factors (each item loading on one factor, with correlated factors) using both the 2PL and the 3PL. As I've mentioned, the 2PL converged easily, but the 3PL can only get down to a max change of about .00064. The factor correlations in the 2PL were .75, .78, and .85; in the 3PL they were .95, .98, and .999. I cannot think of a reason why estimating the guessing parameters would affect the factor correlations so much, but I thought you might have some idea.
Second, do you happen to know whether mirt's max-change value is the same convergence criterion that BILOG uses? I would assume it is, but wanted to check.