full.scores.SE=TRUE in fscores()

Michael Zhu

Jul 31, 2023, 11:42:11 AM
to mirt-package
Hi Phil,

I am running a multidimensional IRT model using mirt() and calculating examinees' theta scores and SEs on each dimension using fscores(sdt.mirt.201, QMC=TRUE, full.scores.SE=TRUE, method="ML"). However, the SEs are quite large compared to the range of the thetas. For example, in one form the theta scores range from -2 to 6 on one dimension, and the corresponding average SE is 0.3.

Could you please advise: 1) did I use full.scores.SE=TRUE correctly? 2) how should the SEs here be interpreted? Are these SEs on a different scale from theta? 3) is there a way to extract the variance/covariance matrix used to calculate the thetas? I tried extract.mirt(xx, "Prior"), but the results don't look like what we are looking for.

Thanks a lot in advance.

Best,
Michael

Phil Chalmers

Aug 2, 2023, 11:29:53 AM
to Michael Zhu, mirt-package
Hi Michael,

ML estimates of θ are generally much wider than their Bayesian cousins, since they depend only on the vector of observations and the model to inform the estimates. For multidimensional models this can be a big issue as well: if very few items load on a given factor, then there will be very little information about that factor's θ estimate (often far less than the total number of items would suggest). I suspect that is what is happening in your case; you may just have weakly informed factors, which in turn lead to weakly informed factor score estimates. HTH.
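As a quick illustration (simulated data only, not the original sdt.mirt.201 model), the ML-versus-Bayesian difference can be seen by scoring the same fitted unidimensional model both ways:

```r
library(mirt)
set.seed(123)

# Short, weakly discriminating test: theta is poorly informed
a   <- rlnorm(10, 0.1, 0.2)           # modest slopes
d   <- rnorm(10)                      # intercepts
dat <- simdata(a, d, N = 1000, itemtype = '2PL')
mod <- mirt(dat, 1)

fs_ml  <- fscores(mod, method = "ML",  full.scores.SE = TRUE)
fs_eap <- fscores(mod, method = "EAP", full.scores.SE = TRUE)

# ML has no prior pulling the estimates in, so its SEs are larger;
# perfect/zero response patterns give infinite ML estimates and are dropped
ok <- is.finite(fs_ml[, "F1"])
mean(fs_ml[ok, "SE_F1"])    # noticeably larger than...
mean(fs_eap[, "SE_F1"])     # ...the EAP standard errors
```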

Phil


--
You received this message because you are subscribed to the Google Groups "mirt-package" group.

Min Zhu

Aug 6, 2023, 11:28:49 PM
to Phil Chalmers, mirt-package
Hi Phil,

Thank you very much for the response. Two more questions:
1) We have over 150 items loading on each factor, but there are cross-loadings. Does the large SE generally mean the model itself is problematic?
2) I tried a jackknife simulation and calculated the SD of each theta for each examinee. The SDs are very small, on the order of 0.00x. Would you mind directing me to where I can find how full.scores.SE is calculated in mirt?

Greatly appreciate all your help.

Best,
Michael

Phil Chalmers

Aug 9, 2023, 4:08:49 PM
to Min Zhu, mirt-package
Hi Michael,

That doesn't sound unreasonable to me (precise measurement of latent variables is hard). The SEs themselves are just a function of the inverse of the observed information function evaluated at the MLE. Note that if you have lower discrimination/slope parameters this can happen to an even worse degree; consider the following example, which gives mean(SE) ≈ .47 even with 150 items. This is partly why computerized adaptive testing is so useful in reducing the length of a test as a function of the size and information available in an item bank. HTH.

Phil

###
library(mirt)

# 150 items with low discriminations (lognormal slopes averaging ~0.4)
a <- rlnorm(150, -1, .3)
d <- rnorm(150, sd = 2)

mean(a)   # average slope

dat <- simdata(a, d, N = 5000, itemtype = '2PL')
(mod <- mirt(dat))
fs <- fscores(mod, full.scores.SE = TRUE)
round(colMeans(fs), 3)   # mean SE around .47 despite 150 items
####
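As a sketch of where full.scores.SE comes from under method = "ML" (assuming the `mod` object fit in the example above): the SE is the inverse square root of the observed information at the theta estimate, which for the 2PL coincides with the test information that mirt's testinfo() reports.

```r
# Compare the reported ML standard errors against
# 1 / sqrt(test information) evaluated at each theta estimate
fs_ml <- fscores(mod, method = "ML", full.scores.SE = TRUE)
ok    <- is.finite(fs_ml[, "F1"])        # drop perfect/zero patterns
info  <- testinfo(mod, matrix(fs_ml[ok, "F1"]))
summary(fs_ml[ok, "SE_F1"] - 1 / sqrt(info))   # differences near zero
```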
