Different params ltm vs. mirt


Piotr Król

unread,
Jun 8, 2021, 4:47:33 AM
to mirt-package
Hi, 

I don't know if I have missed something, but I receive different item parameters from the mirt and ltm models. Why is that?

I uploaded both models here since they are too big to attach:  https://drive.google.com/drive/folders/184SlTMC8NkDMNi9UnmmSQXMtiNPqvxDE?usp=sharing 

To obtain mirt parameters:
mirt_params <- as.data.frame(coef(mirt_model, simplify = TRUE, IRTpars = TRUE)$items)[1:2]
head(mirt_params)

To obtain ltm parameters:
ltm_params <- as.data.frame(coef(ltm_model))
head(ltm_params)

Phil Chalmers

unread,
Jun 9, 2021, 2:59:37 PM
to Piotr Król, mirt-package
The ltm package also estimates IRT parameters in the slope-intercept form by default, though this can be changed via the IRT.param argument to ltm(). Does that fix your issue?

Phil



Piotr Król

unread,
Jun 10, 2021, 5:52:23 AM
to mirt-package
Hmm, I'm not sure you are right. I use this code:

ltm_model <- ltm(responses ~ z1)

And I receive discriminations and difficulties (at least those are the column names in the output). However, I know the documentation suggests something different.


Nevertheless, the slopes/discriminations differ between the two packages. Shouldn't they be more or less the same?

Just in case, I create mirt model like this:

mirt_model <- mirt(responses, 1, SE = TRUE, itemtype = "2PL")

Phil Chalmers

unread,
Jun 10, 2021, 12:06:15 PM
to Piotr Król, mirt-package
Compare the following:


############
> library(mirt)
> dat <- expand.table(LSAT7)
>
> # slope-intercept
> mod <- mirt(dat, 1, SE=TRUE)
Iteration: 28, Log-Lik: -2658.805, Max-Change: 0.00010

Calculating information matrix...
> coef(mod, simplify=TRUE)$items
              a1         d g u
Item.1 0.9879254 1.8560605 0 1
Item.2 1.0808847 0.8079786 0 1
Item.3 1.7058006 1.8042187 0 1
Item.4 0.7651853 0.4859966 0 1
Item.5 0.7357980 1.8545127 0 1
>
> library(ltm)
> modltm <- ltm(dat ~ z1, IRT.param = FALSE)
> coef(modltm)
       (Intercept)        z1
Item.1   1.8560390 0.9877086
Item.2   0.8079710 1.0808416
Item.3   1.8045036 1.7066397
Item.4   0.4860281 0.7650036
Item.5   1.8545113 0.7357208
>
>
> # discrimination-difficulty
> coef(mod, simplify=TRUE, IRTpars=TRUE)$items
               a          b g u
Item.1 0.9879254 -1.8787456 0 1
Item.2 1.0808847 -0.7475160 0 1
Item.3 1.7058006 -1.0576962 0 1
Item.4 0.7651853 -0.6351358 0 1
Item.5 0.7357980 -2.5204102 0 1
> modltm2 <- ltm(dat ~ z1)
> coef(modltm2)
           Dffclt    Dscrmn
Item.1 -1.8791363 0.9877086
Item.2 -0.7475388 1.0808416
Item.3 -1.0573431 1.7066397
Item.4 -0.6353278 0.7650036
Item.5 -2.5206726 0.7357208
#############

Again, this relates to the IRT.param input to ltm(). In mirt, the classical IRT parameterization is obtained by transforming the slope-intercept estimates after the model has been fitted.
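For a unidimensional 2PL the mapping between the two forms is simple: writing the slope-intercept model as P(theta) = plogis(a1 * theta + d) and the classical form as P(theta) = plogis(a * (theta - b)) gives a = a1 and b = -d/a1. A quick base-R check against Item.1 from the output above:

```r
# Slope-intercept -> discrimination/difficulty for a unidimensional 2PL:
# plogis(a1 * theta + d) == plogis(a * (theta - b))  with  a = a1, b = -d / a1
a1 <- 0.9879254   # Item.1 slope from coef(mod, simplify = TRUE)$items
d  <- 1.8560605   # Item.1 intercept
b  <- -d / a1     # reproduces the b = -1.8787 reported under IRTpars = TRUE
round(b, 4)
```

This is also why the slopes should match across parameterizations in the unidimensional case: a is just a1, untouched.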

Phil


Piotr Król

unread,
Jun 10, 2021, 12:23:32 PM
to mirt-package
I see, but this transformation should not change the discriminations/slopes in unidimensional models, right?

Piotr Król

unread,
Jun 10, 2021, 12:27:00 PM
to mirt-package
I mean that I get different parameter values regardless of the parameterization used (the slopes/discriminations differ between ltm and mirt).

Phil Chalmers

unread,
Jun 10, 2021, 12:35:39 PM
to Piotr Król, mirt-package
Can you provide output showing what you mean by 'different'? These software packages should be converging to nearly the same target, though they will terminate in slightly different locations due to the optimizers used. Still, they should agree to around the 4th decimal place.

Phil


Piotr Król

unread,
Jun 10, 2021, 12:53:56 PM
to mirt-package
## Call:
## ltm(formula = responses ~ z1)
##
## Coefficients:
##         Dffclt  Dscrmn
## item1   -0.989   0.506
## item2   -5.054   0.600
## item3   -4.272   0.520
## item4   -2.555   0.634
## item5   -4.759   0.613

coef(mirt_model, simplify=TRUE)
## $items
##          a1     d  g  u
## item1 0.925 0.319  0  1
## item2 1.042 2.805  0  1
## item3 0.936 2.043  0  1
## item4 1.152 1.390  0  1
## item5 1.068 2.684  0  1

Phil Chalmers

unread,
Jun 10, 2021, 1:01:26 PM
to Piotr Król, mirt-package
Yes, this looks like one of the optimizers failed to converge correctly. You could try different starting values in each function to see whether this is a local-minimum problem. A complete reprex would be best, though, so that other discrepancies (such as differences in the log-likelihoods) can be inspected.

Phil


Piotr Król

unread,
Jun 15, 2021, 5:59:11 AM
to mirt-package
Ok, I have a simple reprex which shows that the estimates are completely different:

responses <- rbind(
  c(1,0,1,0,1),
  c(0,1,0,1,0),
  c(1,0,1,0,1),
  c(0,1,0,1,0),
  c(1,1,0,0,1)
)
colnames(responses) <- paste0("item", 1:5)

(ltm_model <- ltm(responses ~ z1, IRT.param = FALSE))
## Call:
## ltm(formula = responses ~ z1, IRT.param = FALSE) 
## 
## Coefficients:
##        (Intercept)      z1
## item1      -19.948  64.831
## item2       59.651 -58.290
## item3      -66.737  66.507
## item4       22.871 -63.702
## item5      -19.948  64.831
## 
## Log.Lik: -6.244

(mirt_model <- mirt(responses, 1))
## Call:
## mirt(data = responses, model = 1)
## 
## Full-information item factor analysis with 1 factor(s). 
## Converged within 1e-04 tolerance after 11 EM iterations. 
## mirt version: 1.33.2 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61 
## Latent density type: Gaussian 
## 
## Log-likelihood = -5.29347 
## Estimated parameters: 10 
## AIC = 30.58694; AICc = -6.079727 
## BIC = 26.68132; SABIC = -1.734498 
## G2 (21) = 0.04, p = 1 
## RMSEA = 0, CFI = NaN, TLI = NaN

coef(mirt_model, simplify=TRUE)
## $items 
##             a1       d  g  u
## item1  143.300  42.922  0  1
## item2 -150.145  44.769  0  1
## item3  150.145 -44.769  0  1
## item4 -143.302 -42.921  0  1
## item5  143.300  42.922  0  1
## 
## $means 
## F1 
## 0 
## 
## $cov 
##       F1 
## F1 1

Phil Chalmers

unread,
Jun 15, 2021, 12:14:43 PM
to Piotr Król, mirt-package
Given the magnitude of the parameters, the models aren't numerically stable, so you shouldn't interpret them (the slopes are all effectively infinite, which makes every response function effectively a step function; use plot(mirt_model, type = 'trace') to see this). Plus, the amount of uncertainty in the model clearly indicates something has gone wrong:

> mirt_model <- mirt(responses, 1, SE=TRUE)
> coef(mirt_model)
$item1
               a1         d  g  u
par        143.30    42.922  0  1
CI_2.5  -29841.75 -9462.986 NA NA
CI_97.5  30128.35  9548.830 NA NA

$item2
                a1          d  g  u
par       -150.145     44.769  0  1
CI_2.5  -42784.180 -12622.643 NA NA
CI_97.5  42483.889  12712.182 NA NA

$item3
                a1          d  g  u
par        150.145    -44.769  0  1
CI_2.5  -42483.935 -12712.160 NA NA
CI_97.5  42784.226  12622.622 NA NA

$item4
                a1         d  g  u
par       -143.302   -42.921  0  1
CI_2.5  -30130.321 -9545.048 NA NA
CI_97.5  29843.718  9459.207 NA NA

$item5
               a1         d  g  u
par        143.30    42.922  0  1
CI_2.5  -29841.75 -9462.986 NA NA
CI_97.5  30128.35  9548.830 NA NA
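Concretely, using the item1 estimates above (a1 = 143.30, d = 42.922), the probability jumps from near 0 to near 1 over about a tenth of a unit of theta around -d/a1 (roughly -0.30):

```r
# A 2PL trace line with an extreme slope behaves as a step function:
a1 <- 143.30; d <- 42.922
theta <- c(-0.35, -0.30, -0.25)
round(plogis(a1 * theta + d), 4)   # ~0 below the step, ~1 above it
```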

Do you have an example where both models converged normally but provide different estimates?

Phil


Piotr Król

unread,
Jun 22, 2021, 6:48:23 AM
to mirt-package
Okay, I'm not sure I have found a suitable example, so I want to go back to the previous models (I attach the matrix of responses).

These are summaries of both models:

mirt
## Call:
## mirt(data = responses, model = 1, SE = TRUE) 
##
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 266 EM iterations. 
## mirt version: 1.33.2 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61 
## Latent density type: Gaussian 
## 
## Information matrix estimated with method: Oakes 
## Second-order test: model is a possible local maximum 
## Condition number of information matrix = 122.3066 
## 
## Log-likelihood = -463630.5 
## Estimated parameters: 1340 
## AIC = 929941.1; AICc = 933700.3 
## BIC = 937631.8; SABIC = 933374.4
## G2 (9999998660) = 892171.1, p = 1 
## RMSEA = 0, CFI = NaN, TLI = NaN

ltm
ltm_model <- ltm(responses ~ z1, IRT.param = FALSE)
ltm_summary <- summary(ltm_model)
ltm_summary$logLik
## [1] -470030.7
ltm_summary$AIC
## [1] 942741.3
ltm_summary$BIC
## [1] 950432.1

What I notice is that the ltm parameters are somehow squished with respect to the slope (points are colored by category to confirm that these are the same items).
[attachment: pobrane.png]
What can be the source of these differences?
[attachment: responses]

Phil Chalmers

unread,
Jun 22, 2021, 11:21:37 PM
to Piotr Król, mirt-package
Thanks. The reason this happens is that the number of items in the test is quite large, so rough numerical integration grids are the likely culprit. E.g., increasing mirt's 61 quadrature points to 151 produces a small increase in the log-likelihood, so there is precision to be gained.

> mod <- mirt(responses, 1)
Iteration: 266, Log-Lik: -463630.528, Max-Change: 0.00009
> mod151 <- mirt(responses, 1, quadpts = 151)
Iteration: 500, Log-Lik: -462897.057, Max-Change: 0.00097

For ltm, I believe the package uses something quite small, like only 15 Gauss-Hermite quadrature nodes. Hence, it probably terminates much too early. HTH.
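As a rough illustration of why grid density matters (with made-up item parameters, not your data): as items accumulate, the integrand of the marginal likelihood, a product of item probabilities times the latent density, becomes sharply peaked, and a coarse rectangular rule over [-6, 6] can miss mass that a denser one captures:

```r
# A sharply peaked integrand standing in for a 25-item response-pattern
# likelihood times the N(0,1) latent density (hypothetical item parameters).
integrand <- function(theta)
  plogis(2 * theta + 1)^20 * (1 - plogis(2 * theta + 1))^5 * dnorm(theta)

# Rectangular rule over [-6, 6] with npts equally spaced points.
rect <- function(npts) {
  theta <- seq(-6, 6, length.out = npts)
  sum(integrand(theta)) * 12 / (npts - 1)
}

# Adaptive-quadrature reference value.
truth <- integrate(integrand, -Inf, Inf, rel.tol = 1e-10, abs.tol = 0)$value
c(coarse = rect(15), fine = rect(151), truth = truth)
```

With 15 points the spacing (about 0.86) leaves only a node or two inside the peak, so the coarse estimate is visibly off, while the 151-point rule is essentially exact.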

Phil


Piotr Król

unread,
Jun 23, 2021, 6:03:15 AM
to mirt-package
Thank you,

I checked the ltm results with mirt's default number of quadrature points (61), and the mirt results with ltm's default (15). Here's what I got:

61 quadrature points (mirt's default)

ltm
ltm_model <- ltm(responses ~ z1, IRT.param = FALSE, control = list(GHk = 61))

summary(ltm_model)$logLik
## [1] -466025.8

ltm_params <- as.data.frame(coef(ltm_model))
head(ltm_params)
##       (Intercept)        z1
## item1   0.3814645 0.6275291
## item2   2.8976562 0.7364706
## item3   2.1075957 0.6449390

mirt
mirt_model <- mirt(responses, 1, SE=TRUE)
## Log-likelihood = -463630.5

mirt_params <- as.data.frame(coef(mirt_model, simplify = TRUE)$items)[1:2] 
head(mirt_params)
##              a1         d
## item1 0.9245075 0.3186796
## item2 1.0418570 2.8053719
## item3 0.9356348 2.0431438

15 quadrature points (ltm's default)

ltm
ltm_model <- ltm(responses ~ z1, IRT.param = FALSE)

summary(ltm_model)$logLik
## [1] -470030.7

ltm_params <- as.data.frame(coef(ltm_model)) 
head(ltm_params)
##       (Intercept)        z1
## item1   0.5007685 0.5062726
## item2   3.0325412 0.6000710
## item3   2.2231741 0.5203958

mirt
mirt_model <- mirt(responses, 1, SE=TRUE, quadpts = 15)
## Log-likelihood = -472591.2

mirt_params <- as.data.frame(coef(mirt_model, simplify = TRUE)$items)[1:2]
head(mirt_params)
##              a1        d
## item1 0.4430672 0.460863
## item2 0.5269768 2.979918
## item3 0.4586273 2.183469

Shouldn't they give similar results with analogous number of quadrature points?

Phil Chalmers

unread,
Jun 23, 2021, 10:02:19 AM
to Piotr Król, mirt-package
They'll only give similar results as the quadrature is increased, not decreased (mirt uses rectangular quadrature; ltm uses Gauss-Hermite).

Phil


Piotr Król

unread,
Jun 23, 2021, 10:44:57 AM
to mirt-package
Ok, thank you! I also noticed that when specifying the number of quadrature points you use numbers like 31, 61, 151. Do you just like them, or is there some reason?

Phil Chalmers

unread,
Jun 23, 2021, 10:48:05 AM
to Piotr Król, mirt-package
Odd numbers are a good idea for theoretical reasons, but yes, I just like these numbers. There's nothing special about them really, just convention; more points will always be more precise, though at the cost of increased computing time, which won't always give notably better results.
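Concretely, with 61 equally spaced points over mirt's default latent range of roughly -6 to 6 (the theta_lim default, if I recall correctly), an odd count puts the middle node exactly at theta = 0, the mode of the standard-normal latent density, whereas an even count straddles it:

```r
# Odd count: middle node lands exactly at the latent mean/mode (theta = 0).
nodes <- seq(-6, 6, length.out = 61)
nodes[31]                 # the 31st of 61 nodes is theta = 0

# Even count: no node at 0; the closest pair straddles the mode.
nodes_even <- seq(-6, 6, length.out = 60)
range(nodes_even[30:31])
```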


Piotr Król

unread,
Jul 21, 2021, 7:18:46 AM
to mirt-package
Hi Phil,

I experimented with these quadrature settings but still obtain different results from ltm and mirt. I also checked with a third package (TAM), and its results are completely different as well. I don't want to dig into the reasons for these differences right now, but I wonder: should I keep in mind changing some technical or other default parameters in mirt (besides the quadrature points and the maximum number of cycles) when working with such a large dataset (2297 x 670)?

Best regards