Multiple group analysis: free covs, vars and means?


Dirk Pelt

Dec 7, 2017, 6:27:19 AM
to mirt-package
Dear Phil,

I want to compare the covariance matrices of two groups in a multiple-group framework, but I am a bit confused about which model to use. It is a comparison between a "faking" group and a regular research group, so we know that the means of the factors in the faking group will be higher and the variances of the factors will be lower (due to unidirectional response distortion). So these two features, free means and free variances, should be modeled, I think.

I thought the following line of code would give me the right model, but I saw in the output that all covariances in group 2 were 0.25; I'm guessing this is for identification purposes.

multipleGroup(dat, model, group = group, invariance=c('slopes', 'intercepts', 'free_var','free_means'))

The following line of code gives me freely estimated covariances and means, but the variances are still fixed to 1 in both groups.

multipleGroup(dat, model, group = group,  invariance=c('slopes', 'intercepts','free_means'))

Is there a way to compare covariance matrices between groups while allowing for differences in factor means and variances, or should I just use the last model I mentioned? Or could you give some advice on how you would go about this? I hope you can clarify some things!

Best, Dirk

Phil Chalmers

Dec 12, 2017, 2:50:00 PM
to Dirk Pelt, mirt-package
On Thu, Dec 7, 2017 at 6:27 AM, Dirk Pelt <dirkp...@gmail.com> wrote:
Dear Phil,

I want to compare the covariance matrices of two groups in a multiple-group framework, but I am a bit confused about which model to use. It is a comparison between a "faking" group and a regular research group, so we know that the means of the factors in the faking group will be higher and the variances of the factors will be lower (due to unidirectional response distortion). So these two features, free means and free variances, should be modeled, I think.

Agreed, so far so good.
 

I thought the following line of code would give me the right model, but I saw in the output that all covariances in group 2 were 0.25; I'm guessing this is for identification purposes.

multipleGroup(dat, model, group = group, invariance=c('slopes', 'intercepts', 'free_var','free_means'))

Right, this frees the means/variances in the focal group(s) only. Otherwise, the model would likely not be identified. 
 

The following line of code gives me freely estimated covariances and means, but the variances are still fixed to 1 in both groups.

multipleGroup(dat, model, group = group,  invariance=c('slopes', 'intercepts','free_means'))

Is there a way to compare covariance matrices between groups while allowing for differences in factor means and variances, or should I just use the last model I mentioned? Or could you give some advice on how you would go about this? I hope you can clarify some things!

If you want to compare the covariance matrices, then all the variance-covariance components must be in the same metric (which currently they aren't, because the reference group is assumed to have variances of 1 for identification). So you'd first have to fix some of the slope parameters to constant values so that the latent variances can be estimated, at which point you can start comparing the components via likelihood-ratio or Wald tests. HTH.
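For what it's worth, this anchoring idea can be sketched with mirt's model syntax. The two-factor structure, item numbers, and anchor value of 1.0 below are illustrative assumptions (not taken from the thread); see ?mirt.model for the COV/FIXED/START keywords.

```r
library(mirt)

# Hypothetical between-item structure: items 1-4 load on F1, items 5-8 on F2.
# Fix one anchor slope per factor to a constant so the latent variances
# become estimable, and free the variances and covariance.
model <- mirt.model('
  F1 = 1-4
  F2 = 5-8
  COV = F1*F1, F2*F2, F1*F2
  FIXED = (1, a1), (5, a2)
  START = (1, a1, 1.0), (5, a2, 1.0)
')

# Remaining slopes/intercepts held equal across groups; focal-group means
# freed as in the original call.
mod_free <- multipleGroup(dat, model, group = group,
                          invariance = c('slopes', 'intercepts', 'free_means'))
```

A second model with the covariance components constrained equal across groups could then be compared against the free model with a likelihood-ratio test via anova().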

Phil
 

Best, Dirk

--
You received this message because you are subscribed to the Google Groups "mirt-package" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mirt-package+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Dirk Pelt

Dec 22, 2017, 7:14:58 AM
to mirt-package
Hi Phil,

Thank you for your answer; I get the idea behind it. However, this goes a bit beyond my standard invariance-testing knowledge, so I hope you can help/advise me on some details:

1. Are there any guidelines on which slopes to constrain? Reading this forum, I found a post suggesting constraining the slope with the highest value for each factor. Is this the way to go?
2. Do you fix the slopes to a constant in both groups, or in one group only while estimating the slope freely in the other?
3. To what constant should the slopes be fixed: simply 1, or are these constants determined in some other way?
4. I am working with a high-dimensional (10 factors) but strictly between-item (simple-structure) MIRT model. If I want to estimate all the covariances (2 * ((10*9)/2) = 90) in both groups, I am guessing that I then need to constrain 20 slope parameters to constants in order to estimate the 20 variances (10 in each group) first. Is this correct?

Finally, I want to do some auxiliary analyses on the theta scores in both groups. Should the fscores command be run based on the model where slopes and intercepts are fixed to be equal? Thank you again for all your help.
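As a point of reference on the API (not an answer to the question above): once a multipleGroup model has been fitted, factor scores for all respondents can be obtained in one call. The object name mod is a stand-in for whichever fitted model is chosen; EAP is mirt's default method.

```r
library(mirt)

# 'mod' is a previously fitted multipleGroup object (assumption).
# Returns a persons-by-factors matrix of theta estimates, with each
# person scored under their own group's parameters.
Theta <- fscores(mod, method = 'EAP', full.scores = TRUE)
```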

Best and happy holidays,

Dirk
