I seem to be running into an issue getting model fit stats (including the log-likelihood) when modeling large datasets. I thought it was just a recent bug, but it persists after downgrading (back to v1.43). This happens across multiple large datasets, so it doesn't appear to be an issue with the data itself, just with the number of items: below, 820 items works but 830 does not (with 138 participants, so not a huge matrix). Is this a known memory issue? Should I be worried that the models are not actually fitting? It happens with Rasch, 2PL, and 3PL models.
> mod_1pl <- mirt(d_mat[,1:820], 1, itemtype='Rasch', verbose=TRUE,
+ technical=list(NCYCLES=4000),
+ prior=list(d = 'norm(0, 4)'))
Iteration: 44, Log-Lik: -47626.457, Max-Change: 0.00010
> mod_1pl
Call:
mirt(data = d_mat[, 1:820], model = 1, itemtype = "Rasch", verbose = TRUE,
technical = list(NCYCLES = 4000), prior = list(d = "norm(0, 4)"))
Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 44 EM iterations.
mirt version: 1.46.1
M-step optimizer: nlminb
EM acceleration: Ramsay
Number of rectangular quadrature: 61
Latent density type: Gaussian
Log-likelihood = -47626.46
Estimated parameters: 821
AIC = 96894.91
BIC = 99298.19; SABIC = 96700.82
G2 (9999999179) = 93892.99, p = 1
RMSEA = 0, CFI = NaN, TLI = NaN
The 830-item model below also seems to fit (the EM converges), but the fit statistics don't print correctly; they are also NaN in mod_1pl@Fit and in logLik(mod_1pl):
> mod_1pl <- mirt(d_mat[,1:830], 1, itemtype='Rasch', verbose=TRUE,
+ technical=list(NCYCLES=4000),
+ prior=list(d = 'norm(0, 4)'))
Iteration: 33, Log-Lik: -47430.472, Max-Change: 0.00010
> mod_1pl
Call:
mirt(data = d_mat[, 1:830], model = 1, itemtype = "Rasch", verbose = TRUE,
technical = list(NCYCLES = 4000), prior = list(d = "norm(0, 4)"))
Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 33 EM iterations.
mirt version: 1.46.1
M-step optimizer: nlminb
EM acceleration: Ramsay
Number of rectangular quadrature: 61
Latent density type: Gaussian
Log-likelihood = NaN
Estimated parameters: 831
AIC = NaN
BIC = NaN; SABIC = NaN
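In case it helps narrow things down, here is a minimal sketch of how the problem could be reproduced with simulated data instead of my real dataset. The matrix dimensions and itemtype match my call above, but the data, seed, and probability are made up, so the exact 820/830 threshold may differ:

```r
library(mirt)
set.seed(1)

# Hypothetical reproduction: 138 persons, 830 dichotomous items,
# random responses just to match the dimensions of my real matrix.
d_mat <- matrix(rbinom(138 * 830, 1, 0.5), nrow = 138)

# 820 items: log-likelihood and AIC/BIC print as finite values for me.
mod_ok <- mirt(d_mat[, 1:820], 1, itemtype = 'Rasch', verbose = FALSE)
logLik(mod_ok)

# 830 items: with my real data, logLik() and the @Fit slot come back NaN
# even though the EM iterations report a finite log-likelihood.
mod_bad <- mirt(d_mat[, 1:830], 1, itemtype = 'Rasch', verbose = FALSE)
logLik(mod_bad)
mod_bad@Fit
```

Note the G2 degrees of freedom printed above (9999999179) for the 820-item model already looks like a cap or sentinel rather than the true 2^820 - 1 possible response patterns, which makes me suspect some saturated-model quantity is overflowing somewhere between 820 and 830 items.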