Weighting Items for Calculating Theta


smi...@rowan.edu

Sep 24, 2020, 6:36:46 PM
to mirt-package
Greetings,

I am wondering if anyone has found a way to weight items differently when calculating theta. For example, I have a data set in which several of the items should be grouped together and, as a group, count as much as any other single item. Is it possible to give each of those items a fraction of the influence when calculating the theta values?

Thanks for any help!
Trevor Smith

Matthias von Davier

Sep 24, 2020, 9:41:51 PM
to smi...@rowan.edu, mirt-package
For the Rasch model this is called the OPLM (One-Parameter Logistic Model); Verhelst described it. For models with estimated slopes, any imposed weighting washes away if you still estimate unconstrained individual slope parameters: the weights only survive if the slopes within each group are constrained to a certain (log) average matching the intended group weight.

So if mirt allows constraints on the slopes of subsets of items, this could work.

Or just fix slopes to the weights content experts come up with. 
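A minimal sketch of that last option in mirt, using the LSAT7 example data; the weight values here are hypothetical stand-ins for whatever the content experts provide (the `pars = 'values'` table lets you fix the slopes before estimation):

```r
library(mirt)
dat <- expand.table(LSAT7)  # example dichotomous data shipped with mirt

# hypothetical expert weights for the five items
w <- c(1, 1, 1, 2, 2)

# pull the starting-value table, set the slopes to the weights,
# and flag them as fixed (not estimated)
sv <- mirt(dat, 1, pars = 'values')
sv$value[sv$name == 'a1'] <- w
sv$est[sv$name == 'a1'] <- FALSE

# only intercepts (and latent parameters) are estimated now
mod_fixed <- mirt(dat, 1, pars = sv)
fscores(mod_fixed)  # thetas reflect the imposed slope weights
```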

Or just estimate IRT models separately for the item groups, and compute a weighted composite score like NAEP does.
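A rough sketch of that composite idea, again with LSAT7; the split into two item groups and the 0.6/0.4 composite weights are made up purely for illustration (the two-item group is fit as Rasch, since a 2PL on two items is poorly identified):

```r
library(mirt)
dat <- expand.table(LSAT7)

# hypothetical split: items 1-3 form one group, items 4-5 another
mod1 <- mirt(dat[, 1:3], 1, itemtype = 'Rasch')
mod2 <- mirt(dat[, 4:5], 1, itemtype = 'Rasch')

# EAP theta estimates from each separate calibration
th1 <- fscores(mod1)[, 1]
th2 <- fscores(mod2)[, 1]

# illustrative composite weights
composite <- 0.6 * th1 + 0.4 * th2
head(composite)
```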


Phil Chalmers

Sep 25, 2020, 4:18:04 PM
to smi...@rowan.edu, mirt-package
Echoing Matthias' points, it really does depend on how you want to tackle this problem and what can be deemed reasonable. The integer approach of re-weighting the slopes, as in OPLM (and, as was noted, in other groupings of items that do not have the Rasch-type flavour), is somewhat attractive since you can still make use of trait estimators such as EAP, MAP, ML, etc., because the likelihood functions remain readily available; of course, these should only be used if the models actually fit the data.

On the other hand, if you want to include manual weights, and don't necessarily require the likelihood/Bayesian sampling variability information, then you can always fall back to more ad-hoc estimators such as the (weighted) least squares family. In this case you can focus on low-level model-implied information, such as the expected probability/score functions, and compare these to the observed responses so as to find the theta that "best matches" the observations under the squared-discrepancy criterion. Below I show how to estimate a given response pattern using ULS and WLS, where the weights are chosen to give a more 'expert weight' flavour to the last two items in the test:

############
library(mirt)
dat <- expand.table(LSAT7)

# fit a unidimensional 2PL model
mod <- mirt(dat, 1)

# target response pattern to score
pat <- c(0,1,1,0,1)

# ML estimate, for comparison
fscores(mod, response.pattern = pat, method = 'ML')

# model-implied expected item scores at a given theta
items <- lapply(1:5, function(i) extract.item(mod, i))
score <- function(items, theta) sapply(items, function(x)
    expected.item(x, matrix(theta)))
score(items, theta = 0)
score(items, theta = 1)
score(items, theta = -.25)

#---------------------------
# ULS: minimize the unweighted squared discrepancy
uls <- function(theta, pat) sum( (pat - score(items, theta))^2 )
uls(theta = 1, pat = pat)

optimize(uls, interval = c(-15, 15), pat = pat)

#---------------------------
# WLS: same criterion, but with normalized weights giving
# the last two items twice the influence of the others
w <- c(1,1,1,2,2)
w <- w / sum(w)

wls <- function(theta, pat) sum( w * (pat - score(items, theta))^2 )
wls(theta = 1, pat = pat)

optimize(wls, interval = c(-15, 15), pat = pat)
##############

Phil


smi...@rowan.edu

Sep 25, 2020, 4:22:35 PM
to mirt-package
Thank you, Matthias, for these excellent ideas. And thank you, Phil, for the example using mirt. This is all really wonderful. I will try this out and see where it takes me.

Trevor
