three dimensional model - factor covariance


Jason Lamprianou

May 5, 2014, 3:56:49 PM5/5/14
to mirt-p...@googlegroups.com
First of all, congratulations on an excellent package!

I am running a model with three factors:

s <- '
F1 = 1-7
F2 = 8-18
F3 = 19-26
COV=F1*F2*F3
'

The output gives me the covariance between the factors:

SS loadings:  1.796 2.823 2.053 

Factor covariance: 
      F1    F2    F3
F1 0.388 0.161 0.197
F2 0.161 0.552 0.339
F3 0.197 0.339 0.613

However, I would like to get the correlation between the factors, not the covariance. How can I do this?
Thank you

Phil Chalmers

May 5, 2014, 4:08:01 PM5/5/14
to Jason Lamprianou, mirt-package
Hi Jason, 

This must be a multidimensional Rasch model, since the latent variances are usually constrained to 1 for identification reasons (in Rasch models this isn't required). To get the correlations, just use R's cov2cor() function. E.g.,

```
s <- '
F1 = 1-3
F2 = 3-5
COV=F1*F2
'
mod <- mirt(expand.table(LSAT7), model=mirt.model(s), itemtype='Rasch')
so <- summary(mod, verbose = FALSE)
str(so) #find the covariance matrix
cov2cor(so$fcor)
```
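For intuition: cov2cor() simply divides each covariance by the product of the two corresponding standard deviations. A minimal base-R sketch of the same conversion done by hand, using the covariance matrix reported above:

```r
# Factor covariance matrix reported by mirt above
fcov <- matrix(c(0.388, 0.161, 0.197,
                 0.161, 0.552, 0.339,
                 0.197, 0.339, 0.613),
               nrow = 3,
               dimnames = list(c("F1", "F2", "F3"), c("F1", "F2", "F3")))

# A correlation is a covariance divided by the product of the two SDs
sds  <- sqrt(diag(fcov))
fcor <- fcov / (sds %o% sds)

all.equal(fcor, cov2cor(fcov))  # TRUE
round(fcor, 3)
```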

Cheers.

Phil





Jason Lamprianou

May 5, 2014, 4:11:48 PM5/5/14
to Phil Chalmers, mirt-package

Thank you! Very fast response. So, if this were not a Rasch model, how would I get the correlation between the factors? The same way? Or maybe with other models the variance of the factors is always 1 by default, so the conversion from covariance to correlation is not needed?

Phil Chalmers

May 5, 2014, 4:17:42 PM5/5/14
to Jason Lamprianou, mirt-package

The same way would work, but yes, in other models this isn't needed since a correlation matrix is used by default.

Phil

Jason Lamprianou

May 6, 2014, 1:13:59 AM5/6/14
to mirt-p...@googlegroups.com
Thank you so much for all the support. And you respond so quickly!

But I have another question, I am afraid. 
I am running the same model in TAM and I get different results regarding the correlation between the factors. This is the TAM model:

I <- 26  # number of items
D <- 3   # number of dimensions
Q <- array(0, dim = c(I, D))  # between-item Q matrix
Q[1:7, 1] <- 1
Q[8:18, 2] <- 1
Q[19:26, 3] <- 1
tam.Islamic.3dim <- tam.mml(resp = d, Q = Q, control = list(snodes = 2000, maxiter = 253))

So, items 1-7 load on the first factor, items 8-18 load on the second factor and items 19-26 load on the third factor. The model is the same (a Rasch model both in TAM and mirt). 
These are the covariances and correlations for the factors from the TAM

Covariances and Variances
      [,1]  [,2]  [,3]
[1,] 0.152 0.173 0.210
[2,] 0.173 0.443 0.392
[3,] 0.210 0.392 0.430
------------------------------------------------------------
Correlations and Standard Deviations (in the diagonal)
      [,1]  [,2]  [,3]
[1,] 0.389 0.666 0.823
[2,] 0.666 0.666 0.899
[3,] 0.823 0.899 0.655
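As a quick sanity check, TAM's two printed matrices are internally consistent: rescaling its covariance matrix reproduces its printed correlations to within rounding of the displayed values, so any disagreement with mirt must lie in the estimated covariances themselves, not in the covariance-to-correlation conversion. A base-R sketch using the numbers above:

```r
# TAM's covariance matrix as printed above (rounded to 3 decimals)
tam_cov <- matrix(c(0.152, 0.173, 0.210,
                    0.173, 0.443, 0.392,
                    0.210, 0.392, 0.430), nrow = 3)

round(cov2cor(tam_cov), 3)  # matches TAM's printed correlations (to rounding)
sqrt(diag(tam_cov))         # matches the SDs TAM prints on the diagonal
```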

This is the mirt model: 
s <- '
F1 = 1-7
F2 = 8-18
F3 = 19-26
COV=F1*F2*F3
'
model <- mirt.model(s)
mirt.1 <- mirt(d, model, itemtype="Rasch")

And these are the results of mirt

SS loadings:  1.796 2.823 2.053 

Factor covariance: 
      F1    F2    F3
F1 0.388 0.161 0.197
F2 0.161 0.552 0.339
F3 0.197 0.339 0.613

And the correlations (via cov2cor()):

          F1        F2        F3
F1 1.0000000 0.3478888 0.4039430
F2 0.3478888 1.0000000 0.5827737
F3 0.4039430 0.5827737 1.0000000


So, why are the correlations so different? I am a little bit puzzled... do you happen to have an explanation? Maybe I am running different models?

Thank you again

Jason

Phil Chalmers

May 6, 2014, 8:48:32 AM5/6/14
to Jason Lamprianou, mirt-package
I'm afraid I'm not very familiar with TAM, so I can't speak to its numerical accuracy or implementation.

Using some of your code to create some population data, I get similar results with both packages; however, the log-likelihood is not strictly increasing in either implementation. This usually suggests that something may be up with the model (perhaps it is not uniquely identified?). It seems that the MHRM algorithm converges well enough, and fairly close to the population estimates, so it probably is identified, but it's still somewhat peculiar to see. See attached.

Phil
TAMmirt_sim.R
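The attached simulation script isn't reproduced in the thread, but a minimal sketch of that kind of population-data check on the mirt side, assuming mirt's simdata() and an illustrative factor correlation of 0.5 (all values here are made up for illustration, not taken from the attachment), might look like:

```r
library(mirt)
set.seed(42)

# Hypothetical population: 26 Rasch items in three between-item blocks,
# factors correlated 0.5 (mirrors the structure in the thread, not the
# attached script)
a <- matrix(0, 26, 3)
a[1:7, 1] <- a[8:18, 2] <- a[19:26, 3] <- 1
d <- matrix(rnorm(26))
sigma <- matrix(0.5, 3, 3); diag(sigma) <- 1

dat <- simdata(a, d, N = 2000, itemtype = 'dich', sigma = sigma)

s <- '
F1 = 1-7
F2 = 8-18
F3 = 19-26
COV = F1*F2*F3
'
mod <- mirt(dat, mirt.model(s), itemtype = 'Rasch', method = 'MHRM')
cov2cor(summary(mod, verbose = FALSE)$fcor)  # should be near 0.5 off-diagonal
```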