Cross-validation


William

Feb 2, 2025, 7:00:10 PM
to mirt-package

Dear Dr Chalmers,

I would be grateful for your assistance with cross-validation with mirt and mirtCAT.

My aim is to create an MCAT (multidimensional CAT) simulation for an adaptive test that estimates 3 different traits measured by 3 different standardised scales. The goal is to obtain score estimates for the 3 traits with fewer questions. I have been able to do all of this with a single dataset, but I am now at the stage of wanting to build the model on part of the data and test it on new data.

These are the steps I am now following (a rough code sketch follows the list):

  1. Split the dataset into training data and testing data.
  2. Fit a mirt model with the training data, called mirt_model_Train.
  3. Create mirtmod2values_Train <- mod2values(mirt_model_Train).
  4. Set mirtmod2values_Train$est <- FALSE.
  5. Fit a new mirt model with data = test_data, pars = mirtmod2values_Train… call this model mirt_model_TEST.
  6. Use fscores() with mirt_model_TEST and data = test_data, then use generate_pattern() with mirt_model_TEST and these fscores… call this pattern ‘patterns_empirical’.
  7. Run the MCAT simulation, called results_TEST, with mirtCAT(mo = mirt_model_TEST, local_pattern = patterns_empirical….
  8. Using mirt_model_TEST, extract results with expected.test etc.… compare true vs expected scores for the test_data.
  9. Redo steps 6-8, except use the training data instead of the test data.
  10. Compare the two sets of simulation results (the first built on the training data and evaluated on the test data, the second built and evaluated on the training data) using MAE and RMSE.
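For concreteness, here is a rough sketch of what steps 1-8 (and the step 10 comparison) might look like in code. It is only a sketch under assumptions: response_data and model_spec are hypothetical placeholders (the full response matrix and a mirt.model() specification for the three traits), and the 70/30 split, itemtype, CAT criterion, and stopping rule are illustrative choices rather than my actual settings.

library(mirt)
library(mirtCAT)

set.seed(1)                                     # reproducible split
n <- nrow(response_data)                        # response_data: full item-response matrix (placeholder)
train_idx  <- sample(n, floor(0.7 * n))         # step 1: 70/30 train/test split (proportion illustrative)
train_data <- response_data[train_idx, ]
test_data  <- response_data[-train_idx, ]

# step 2: fit the 3-factor model to the training data
# model_spec: a mirt.model() specification mapping items onto the three traits (placeholder)
mirt_model_Train <- mirt(train_data, model = model_spec, itemtype = 'graded', method = 'MHRM')

# steps 3-4: extract the estimated parameters and freeze them at their training values
mirtmod2values_Train <- mod2values(mirt_model_Train)
mirtmod2values_Train$est <- FALSE

# step 5: attach the frozen training parameters to the test data; with no free
# parameters the call returns immediately (0 iterations, 0 parameters estimated)
mirt_model_TEST <- mirt(test_data, model = model_spec, itemtype = 'graded',
                        pars = mirtmod2values_Train, method = 'MHRM')

# step 6: EAP trait estimates for the test respondents under the training
# parameters, then full response patterns for the simulation
theta_TEST <- fscores(mirt_model_TEST, method = 'EAP')
patterns_empirical <- generate_pattern(mirt_model_TEST, Theta = theta_TEST)

# step 7: multidimensional CAT simulation on those patterns
# (the criterion and stopping rule below are illustrative)
results_TEST <- mirtCAT(mo = mirt_model_TEST, local_pattern = patterns_empirical,
                        method = 'MAP', criteria = 'Drule',
                        design = list(max_items = 15))

# steps 8 and 10: compare the CAT estimates with the full-pattern estimates per trait
theta_CAT <- t(sapply(results_TEST, function(x) x$thetas))
MAE  <- colMeans(abs(theta_CAT - theta_TEST))
RMSE <- sqrt(colMeans((theta_CAT - theta_TEST)^2))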

Could you please clarify whether my approach is methodologically sound? I’m unsure if this is the best way to solve my research problem.

One issue I am noticing is that in step 5, mirt_model_TEST instantly converges (within 0.001 tolerance after 0 MHRM iterations) and estimates 0 parameters.
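As a quick check on that step-5 behaviour (using the object names from the sketch above), one can count how many parameters remain free in the test-data fit; since every est flag was set to FALSE there is nothing left to estimate, so the immediate convergence appears to be expected rather than an error:

# number of freely estimated parameters in the test-data fit; 0 because every
# est flag in mirtmod2values_Train was set to FALSE before refitting
sum(mod2values(mirt_model_TEST)$est)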

I can happily provide you with code privately if that is easier for you to examine.

Many thanks in advance,
Will


William

Feb 12, 2025, 8:42:02 PM
to mirt-package
Hi Dr Chalmers,

Do you have any advice for my problem above? Any help you can provide would be greatly appreciated.

Kind regards,
Will

Phil Chalmers

Feb 23, 2025, 11:28:17 AM
to William, mirt-package
Hi Will,

I think you lost me at the factor scores component, as I don't see why this would be necessary. The benefit, of course, is that you can compute the usual cross-validation metrics based on observed predictions, but the scores themselves are subject to shrinkage/bias effects, so they wouldn't be my go-to statistics except for reasonably long tests. It's possible to do cross-validation without such approaches by focusing on the likelihood information itself, though I haven't implemented this anywhere in the package. Happy to add this to the long TODO list though...
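As a purely illustrative aside (nothing like this is implemented in mirt, and the object names follow Will's step 5), one way to lean on the likelihood information rather than factor scores is to read off the marginal log-likelihood of the held-out responses evaluated at the training-sample parameter values:

# mirt_model_TEST already carries the training parameters, fixed, attached to the
# test responses, so its reported log-likelihood is an out-of-sample value
# (a Monte Carlo approximation under MHRM; quadrature-based under EM)
ll_holdout <- extract.mirt(mirt_model_TEST, 'logLik')

# repeating this for competing models (different factor structures, itemtypes,
# item subsets) and preferring the higher held-out log-likelihood gives a
# score-free cross-validation criterion
ll_holdout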

Phil

