testing for significant difference in item difficulty


Balazs Klein

Aug 17, 2016, 6:39:41 AM
to mirt-package
How could I test if the difficulty of two items (in a 2 parameter model) is significantly different?
The situation: I made some changes in some items and trialled it with a number of test takers. The item difficulty changed. How can I test if this change was significant?

Thanks for the help.
Balázs

Phil Chalmers

Aug 17, 2016, 7:19:11 AM
to Balazs Klein, mirt-package
Use anova(mod1, mod2), where one model is the nested version of the other (i.e., more constrained).

Phil

--
You received this message because you are subscribed to the Google Groups "mirt-package" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mirt-package+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Balazs Klein

Aug 17, 2016, 9:29:23 AM
to mirt-package, balazs...@gmail.com
Sorry, I don't think I understood your answer. Could you give an example?

library(mirt)
data <- expand.table(LSAT7)
mod1 <- mirt(data, 1)

coef(mod1)$Item.1
coef(mod1)$Item.3

Item1 -seems- to be more difficult than Item3 (d1=1.856, d3=1.804).
How do I know if this difference was significant?

Thanks for the help.
B.





Seongho Bae

Aug 18, 2016, 7:02:03 AM
to mirt-package, balazs...@gmail.com
Estimating the standard errors may help you judge the difference in difficulty.

mod1 <- mirt(data, 1, SE = TRUE)
coef(mod1)$Item.1
coef(mod1)$Item.3
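To make that concrete, here is a minimal sketch (my own addition, not from the mirt documentation) of an approximate z-test built from the two intercept estimates and their standard errors. It assumes `coef(..., printSE = TRUE)` returns a per-item matrix with rows `par` and `SE`, and it treats the two estimates as independent, so it ignores their sampling covariance (the `wald()` function accounts for that properly).

```r
library(mirt)

dat <- expand.table(LSAT7)
mod1 <- mirt(dat, 1, SE = TRUE, verbose = FALSE)

# coef(..., printSE = TRUE) returns, per item, a matrix with rows "par" and "SE"
co1 <- coef(mod1, printSE = TRUE)$Item.1
co3 <- coef(mod1, printSE = TRUE)$Item.3

# approximate z statistic for the difference in d intercepts,
# treating the two estimates as independent (ignores their covariance)
z <- (co1["par", "d"] - co3["par", "d"]) /
     sqrt(co1["SE", "d"]^2 + co3["SE", "d"]^2)
p <- 2 * pnorm(-abs(z))
```

A |z| well below 1.96 would suggest the two intercepts do not differ significantly at the 5% level.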

Seongho


Phil Chalmers

Aug 18, 2016, 1:41:12 PM
to Seongho Bae, mirt-package, Balazs Klein
Seongho has the right idea here, though if you want a formal test using the ACOV matrix you can use wald(). Otherwise, you can apply equality constraints to perform a likelihood-ratio test by fitting another model and comparing with anova(). See below.

#-------------
# Wald test
dat <- expand.table(LSAT7)
mod1 <- mirt(dat, 1, SE = TRUE)
wald(mod1) #see parameters and location

# set up the contrast matrix: test whether parameters 2 and 6
# (the d intercepts of Item.1 and Item.3, per the wald(mod1) output) are equal
L <- matrix(0, 1, 10)
L[1,2] <- 1
L[1,6] <- -1
wald(mod1, L)

# ------------
# LR test 
# Constrain intercepts 1 and 3 to be equal, and test whether this makes the model fit worse

mod2 <- mirt(dat, 'F = 1-5
                   CONSTRAIN = (1, 3, d)')
anova(mod1, mod2)


Phil


Balazs Klein

Aug 19, 2016, 6:48:37 AM
to mirt-package, seongh...@gmail.com, balazs...@gmail.com
Many thanks for this.
B.