Hi,
I'm experiencing unexpected behavior with mirt: changing only the row ordering of the data produces different item parameter estimates, even though the code and dataset are otherwise identical. Additionally, running the same code with the same ordering on different computers also produces different estimates.
Context:
> Using mirt for 3PL IRT calibration with educational assessment data
> Model specification includes priors for discrimination and guessing parameters
> The same ordering always produces the same results on the same computer/R environment
> However, different orderings produce different item parameter estimates (a, b, c), which consequently lead to different theta estimates
> Also, the same code and data ordering produce different estimates on different computers
> Both models converge successfully
Code:
library(dplyr)

dados_calibracao <- dados_brutos %>%
  arrange(CO_INSCRICAO) %>%   # <- the only difference between the two runs
  filter(TP_PRES == 555)
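For reference, the calibration call itself is roughly along these lines (simplified sketch; the model string, 45-item range, and prior hyperparameters below are illustrative, not my actual specification):

library(mirt)

# Hypothetical 3PL specification with priors on the discrimination (a1)
# and guessing (g) parameters; the item range and the lognormal/beta
# hyperparameters are placeholders for illustration.
modelo <- mirt.model('
  F = 1-45
  PRIOR = (1-45, a1, lnorm, 0.2, 0.5), (1-45, g, expbeta, 5, 17)
')

# assuming dados_calibracao has been reduced to the item response columns
mod1 <- mirt(dados_calibracao, modelo, itemtype = '3PL')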
Questions:
1. Is this behavior expected? Could it be due to how initial values are computed from the data?
2. Are the different solutions statistically equivalent (e.g., different local maxima with similar fit)?
3. Is there a recommended approach to ensure stable and reproducible item parameter estimates regardless of data ordering or computing environment (e.g., manually setting starting values, as in the sketch below)?
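Regarding question 3, is something like the following the intended pattern (reusing the assumed names from the sketch above)?

# Extract the table of starting values without estimating anything
sv <- mirt(dados_calibracao, modelo, itemtype = '3PL', pars = 'values')

# sv$value could be inspected/edited here (e.g., set to fixed published
# values) so that every run starts from the same point, regardless of
# row order or machine. Then estimate using those starting values:
mod_fixed <- mirt(dados_calibracao, modelo, itemtype = '3PL', pars = sv)

# If GenRandomPars = TRUE were used anywhere, a set.seed() call before
# estimation would presumably also be needed for reproducibility.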
Thank you for any insights!
Best Regards,
Cecilia Fiorini
Model fit comparison (mod1 vs. mod2):

          AIC    SABIC       HQ      BIC   logLik X2 df   p
mod1 3249.739 3262.512 3274.922 3313.279 -1608.87
mod2 3249.739 3262.512 3274.922 3313.279 -1608.87  0  0 NaN