ability estimation for a multidimensional scale


hendl...@gmail.com

Feb 2, 2016, 8:44:44 AM
to mirt-package
Hi Phil,

Please, I have a question about ability estimation. I have a three-dimensional scale and I want to estimate each examinee's ability on each dimension as well as on the overall scale. I used the function fscores for this purpose, but I noticed that the average theta score for the first sub-dimension (9 out of 24 items) is greater than the average theta score for the whole scale. Is this logical, or am I missing something?

I used to work in a classical test theory paradigm, where scores are additive, so this result seems odd to me.

Any clarification would be much appreciated.

Thanks a lot

Hend.

Phil Chalmers

Feb 2, 2016, 8:16:44 PM
to hendl...@gmail.com, mirt-package
Hi Hend,

Depending on the method you used with fscores, the estimates should be centered around the mean of the latent variable (which by default is 0 for each latent trait). The values won't be exactly 0 under any particular method, simply because of the estimation algorithms, but in general you can't compare the trait means across dimensions because they are all approximately equal to 0 in the population. So any numerical differences are really meaningless and not at all an issue. HTH.
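A minimal sketch with hypothetical simulated data (not from the original reply), illustrating that the factor score means sit near 0 on every dimension, so a subscale mean and the full-scale mean cannot be compared this way:

library(mirt)
set.seed(1234)
# simulate a simple 2-dimensional 2PL dataset: items 1-10 load on F1, 11-20 on F2
a <- matrix(c(rlnorm(10, .2, .3), rep(0, 10),
              rep(0, 10), rlnorm(10, .2, .3)), ncol = 2)
d <- matrix(rnorm(20))
dat <- simdata(a, d, N = 1000, itemtype = 'dich')

# confirmatory two-dimensional model with correlated traits
spec <- mirt.model('
  F1 = 1-10
  F2 = 11-20
  COV = F1*F2')
mod <- mirt(dat, spec)

theta <- fscores(mod)   # default EAP estimates, one column per latent trait
colMeans(theta)         # both column means are approximately 0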

Phil


吳大

Aug 30, 2017, 9:16:15 PM
to mirt-p...@googlegroups.com, hendl...@gmail.com

Dear Chalmers,
I have the same problem as Hend. I need to compare abilities across the dimensions of a three-dimensional mirt model.
How can I obtain the non-standardized score for each person on each dimension?
Can fscores() do that?
The following is my R program:

library("mirt")
setwd('~/My_R')
# data
D27 <- read.csv("all27.csv")
# the confirmatory model:
mod1.mixed27 <- mirt.model('
  F1 = 1,6,9,13,14,17,19,21,23
  F2 = 2,3,4,7,8,10,15,16,18,20,24,25,27
  F3 = 5,11,12,22,26
  START = (14,a1,1.0),(2,a2,1.0),(5,a3,1.0)
  FIXED = (14,a1),(2,a2),(5,a3)
  COV = F1*F2*F3
')
# item types: GPCM for all items except 3, 11, and 21, which are 2PL
mixed.types <- rep("gpcm", 27)
mixed.types[3]  <- "2PL"
mixed.types[11] <- "2PL"
mixed.types[21] <- "2PL"

mod1.27 <- mirt(D27, mod1.mixed27, itemtype = mixed.types, method = 'MHRM') 

coef1.27.sim <- coef(mod1.27, simplify=TRUE)
write.csv(coef1.27.sim$items, file="coef1_27_sim__iterms.csv")
mod1.27.abil <- fscores(mod1.27, QMC = TRUE)
write.csv(mod1.27.abil, file="mod1_27_abil.csv")


Thanks
Max Wu
 
On Wednesday, February 3, 2016 at 9:16:44 AM UTC+8, Phil Chalmers wrote:

Seongho Bae

Sep 1, 2017, 9:16:51 AM
to mirt-package, hendl...@gmail.com
What do you mean by non-standardized scores? Converting to the original test score scale? IRT true-score equating? Or something else?
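If you mean the expected raw score on each subscale, a rough sketch of that interpretation (only one possible reading, reusing the object names from the script above) would be to sum the model-expected item scores within a dimension at each person's estimated theta:

# expected raw score on the F1 subscale at each person's estimated theta
Theta <- fscores(mod1.27, QMC = TRUE)            # standardized EAP estimates
F1.items <- c(1, 6, 9, 13, 14, 17, 19, 21, 23)   # items assigned to F1 in the model
expected.F1 <- rowSums(sapply(F1.items, function(i)
  expected.item(extract.item(mod1.27, i), Theta)))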

Seongho

On Thursday, August 31, 2017 at 10:16:15 AM UTC+9, 吳大 wrote: