A Question on Identifiability of mirt Models


Tobias Konitzer

Feb 11, 2015, 11:05:15 AM
to mirt-p...@googlegroups.com
Hi Phil,

I have been working a lot on the identifiability of IRT models and on how mirt handles it. According to a seminal paper in my discipline (http://polmeth.wustl.edu/media/Paper/river03.pdf), IRT models are identified by imposing d*(d+1) independent restrictions on the parameters, where d is the number of dimensions. I have four sample cases of 2PL IRT models and would like to have your take on them:

1) A one-dimensional mirt model is identified by setting the latent mean to 0 and the latent variance to 1. That gives 2 restrictions, and 1*(1+1) = 2, so the model is identified (up to a sign flip).
2) A 2-dimensional mirt model with the two latent means set to 0, the two latent variances set to 1, and the covariance between the latent scores set to 0 has 5 restrictions, but 2*(2+1) = 6, so the model should not be sufficiently identified. It nevertheless runs in mirt, apparently because mirt additionally fixes one item parameter to 0 and thereby satisfies the identification restrictions. Could you comment on this?
3) A 3-dimensional mirt model where I fix the three latent means and the three latent variances but want to estimate the covariances freely is not identified in this form, because 6 < d*(d+1).
4) A 2-dimensional mirt model where I constrain 7 out of the 14 discrimination parameters per dimension to be zero should be identified with free variances, free covariance, and free latent means, because 14 > d*(d+1).

I am especially not sure about the last one; a sketch of how I would write it in mirt is below. Am I right here?
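
To make case 4) concrete, this is roughly how I would write it in mirt.model syntax. It is only a sketch: 'resp' stands for my 14-item 0/1 response matrix, and I am assuming the COV and MEAN keywords free those group parameters the way I intend.

library(mirt)

# Case 4): the first 7 items load only on F1, the last 7 only on F2;
# variances, covariance, and latent means are all left free.
spec4 <- mirt.model('
  F1 = 1-7
  F2 = 8-14
  COV = F1*F1, F1*F2, F2*F2
  MEAN = F1, F2')
# mod4 <- mirt(resp, spec4, itemtype = '2PL')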

All the best,
Tobi


Phil Chalmers

Feb 14, 2015, 3:07:48 AM
to Tobias Konitzer, mirt-package
On Wed, Feb 11, 2015 at 5:05 PM, Tobias Konitzer <tobias....@gmail.com> wrote:
Hi Phil,

I have been working a lot on the identifiability of IRT models and on how mirt handles it. According to a seminal paper in my discipline (http://polmeth.wustl.edu/media/Paper/river03.pdf), IRT models are identified by imposing d*(d+1) independent restrictions on the parameters, where d is the number of dimensions. I have four sample cases of 2PL IRT models and would like to have your take on them:

1) A one-dimensional mirt model is identified by setting the latent mean to 0 and the latent variance to 1. That gives 2 restrictions, and 1*(1+1) = 2, so the model is identified (up to a sign flip).
2) A 2-dimensional mirt model with the two latent means set to 0, the two latent variances set to 1, and the covariance between the latent scores set to 0 has 5 restrictions, but 2*(2+1) = 6, so the model should not be sufficiently identified. It nevertheless runs in mirt, apparently because mirt additionally fixes one item parameter to 0 and thereby satisfies the identification restrictions. Could you comment on this?

This is what mirt does too. Try using coef(mod, rotate = 'none') on an exploratory item factor analysis model.
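
For instance, with one of the small datasets shipped with mirt (just to illustrate; any dichotomous data would do):

library(mirt)
dat <- expand.table(LSAT7)             # bundled example data
mod <- mirt(dat, 2, itemtype = '2PL')  # exploratory 2-factor model
# In the unrotated solution you can see the single slope that mirt
# fixes at 0 for identification.
coef(mod, rotate = 'none', simplify = TRUE)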
 
3) A 3-dimensional mirt model where I fix the three latent means and the three latent variances but want to estimate the covariances freely is not identified in this form, because 6 < d*(d+1).
4) A 2-dimensional mirt model where I constrain 7 out of the 14 discrimination parameters per dimension to be zero should be identified with free variances, free covariance, and free latent means, because 14 > d*(d+1).

I am especially not sure about the last one. Am I right here?

Yes, the last one is fine.

Phil 

All the best,
Tobi


Tobias Konitzer

Feb 16, 2015, 10:14:54 AM
to mirt-p...@googlegroups.com, tobias....@gmail.com
Hi Phil,

Could you then briefly comment on this? I have a 2PL IRT with 14 items, where the first 7 items (traits) are only allowed to load onto dimension 1 and the second 7 items are only allowed to load onto dimension 2. A 1-response is negative, a 0-response is positive, and the two dimensions describe sentiment toward two groups; the question is which group is worse off. The way I see it, there are two ways to go about this:

1) (and we have separately discussed a similar approach): constrain the latent group means to 0 and the latent variances to 1, but free the covariance, and constrain the discrimination parameters of the same traits to be equal across dimensions (e.g., set the discrimination for trait 1, laziness of group A, equal to the discrimination for trait 8, laziness of group B, and repeat for all traits). The intercepts can then be compared directly, or sums of intercepts can be compared via Wald tests.

2) I can simply free the latent means and compare them to see which outgroup is worse off, because I have set a sufficient number of constraints (a sketch of how I would specify this in mirt follows below).

Am I correct here?
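
To make 2) concrete, here is roughly what I have in mind. Again, this is only a sketch: 'resp' stands for my 14-item response matrix, and I am not certain the group-parameter keywords are exactly right.

library(mirt)

# Approach 2): simple structure with free latent means
# (a COV = F1*F2 line could be added if the covariance should be free too).
spec2 <- mirt.model('
  F1 = 1-7
  F2 = 8-14
  MEAN = F1, F2')
# mod2 <- mirt(resp, spec2, itemtype = '2PL', SE = TRUE)
# coef(mod2)$GroupPars   # the estimated latent means appear here

# For approach 1), the cross-dimension equality constraints could be set up
# through the 'constrain' argument, with parameter numbers taken from the
# table returned by mirt(..., pars = 'values').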

Many thanks,
Tobi

Phil Chalmers

Feb 23, 2015, 6:21:57 AM
to Tobias Konitzer, mirt-package
I think option 2) sounds like what you want. The first one might work, but only if your items have equal slopes (as in a Rasch model); otherwise you'd be comparing mixed weightings of the constructs. Cheers.
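
If you go with 2), a Wald test of whether the two latent means differ is also possible once the model is fit with standard errors. A rough sketch only, reusing the placeholder names from your sketch; the positions in L depend on your parameter table, so inspect it first:

library(mirt)
# mod2 <- mirt(resp, spec2, itemtype = '2PL', SE = TRUE)
# wald(mod2)     # with no L supplied this shows the estimated parameter names
# L <- numeric(<number of estimated parameters>)
# L[c(<position of first latent mean>, <position of second latent mean>)] <- c(1, -1)
# wald(mod2, L)  # tests that the difference between the two means is 0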

Phil