mirt is a very flexible program, and it offers the option to control for latent group differences (or not) when examining DIF. What is the rationale for controlling for latent group differences, and what effect does this choice have on the DIF results and item parameters?
--
You received this message because you are subscribed to the Google Groups "mirt-package" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mirt-package+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Dear Phil,

Thank you so much for your response. I have two additional questions. In applied research we often don't know in advance which items to use as anchor items, so I am wondering what the proper way is to locate anchor items using mirt. I am thinking of the following approach:

mod <- multipleGroup(dat, 2, group, invariance = c("free_means", "free_var"))
dif.results <- DIF(mod, c("a1", "d"))
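For what it's worth, one way this anchor search is often sketched is an all-others-as-anchors approach: constrain everything equal, free ("drop") the constraints item by item, and treat the non-flagged items as candidate anchors. A rough sketch (assuming `dat` has named columns and a two-level `group`; the anchor names in step 3 are hypothetical placeholders you would read off the step 2 output):

```r
library(mirt)

# Step 1: fully constrained baseline -- all item parameters equal across
# groups, focal group mean/variance freed so the model stays identified
constrained <- multipleGroup(dat, 1, group,
                             invariance = c(colnames(dat),
                                            "free_means", "free_var"))

# Step 2: free ('drop') the equality constraints one item at a time and
# test each item for DIF
drop.results <- DIF(constrained, c("a1", "d"), scheme = "drop")

# Step 3: items NOT flagged above are candidate anchors; refit with only
# those anchored, then retest the remaining items
anchors <- c("Item_1", "Item_5")   # hypothetical, taken from step 2 output
anchored <- multipleGroup(dat, 1, group,
                          invariance = c(anchors, "free_means", "free_var"))
final.results <- DIF(anchored, c("a1", "d"),
                     items2test = which(!colnames(dat) %in% anchors))
```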
Am I right?

In addition, in the mirt manual for DIF (page 24) you don't estimate the mean and variance of the focal group:

model <- multipleGroup(dat, 1, group, SE = TRUE)
# test whether adding slope and intercept constraints results in DIF;
# plot items showing DIF
resulta1d <- DIF(model, c('a1', 'd'), plotdif = TRUE)
resulta1d

Is there a reason for doing so?
Thanks a lot,
Nikos
On Friday, May 26, 2017 at 5:42:52 PM UTC+3, Phil Chalmers wrote:

DIF is defined as

P(y | G1, θ) ≠ P(y | G2, θ)

Or, the probability of the response in the two groups at the same θ level is not the same. The "at the same θ level" part is the reason for estimating the mean and variance of the focal groups. You wouldn't conclude there is bias in an item just because one group happens to have higher ability than the other (e.g., a first-year high-school math class versus a second-year high-school math class on the same test: you would expect the second-year class to have higher ability, but that doesn't mean the items are biased).

Freeing the hyper-parameters effectively changes the metric of the item parameters by placing the global group differences in the hyper-parameters rather than in the IRT parameters. If a sufficient number of anchor items is used, the remaining IRT parameters will theoretically be on the same scale across groups (not perfectly, but they should be unbiased enough to statistically test for bias).

Phil

On Fri, May 26, 2017 at 5:13 AM, 'Nikos Tsigilis' via mirt-package <mirt-p...@googlegroups.com> wrote:
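In mirt syntax, the setup Phil describes (anchor items constrained equal across groups, with the focal group's latent mean and variance freed so global differences land in the hyper-parameters) might look like the following sketch, where the anchor item names are hypothetical placeholders:

```r
library(mirt)

# Hypothetical anchor set, assumed DIF-free from a prior screening step
anchors <- c("Item_1", "Item_2", "Item_3")

# Constrain the anchors equal across groups while freeing the focal
# group's latent mean and variance; group-level differences are then
# absorbed by the hyper-parameters, not the anchors' item parameters
mod <- multipleGroup(dat, 1, group,
                     invariance = c(anchors, "free_means", "free_var"))

# Inspect the freed latent mean/variance estimates for each group
coef(mod, simplify = TRUE)
```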