I have two questions I'm grappling with. The first is analytical, and the second concerns the mirt package, specifically the behavior of the M2 function.
Let me describe my data first. I have repeated-measures data: three assessments on three scales with 3, 4, and 6 items, respectively (13 items per assessment, 39 responses in total). I've estimated a two-tier bifactor model with all items specified as graded response. To avoid identification problems and an unwieldy number of factors, the model has three correlated general factors, one per scale (each spanning all three assessments), and 13 specific factors, one per item, to account for local dependence induced by the repeated-measures design. See the code below for the precise specification.
upes.s1mbf <- ' G1 = 1-3, 14-16, 27-29
G2 = 4-7, 17-20, 30-33
G3 = 8-13, 21-26, 34-39
COV = G1*G2*G3
CONSTRAIN = (1, 14, 27, a1), (1, 14, 27, a4),
(2, 15, 28, a1), (2, 15, 28, a5),
(3, 16, 29, a1), (3, 16, 29, a6),
(4, 17, 30, a2), (4, 17, 30, a7),
(5, 18, 31, a2), (5, 18, 31, a8),
(6, 19, 32, a2), (6, 19, 32, a9),
(7, 20, 33, a2), (7, 20, 33, a10),
(8, 21, 34, a3), (8, 21, 34, a11),
(9, 22, 35, a3), (9, 22, 35, a12),
(10, 23, 36, a3), (10, 23, 36, a13),
(11, 24, 37, a3), (11, 24, 37, a14),
(12, 25, 38, a3), (12, 25, 38, a15),
(13, 26, 39, a3), (13, 26, 39, a16))'
# specific-factor assignment: item i loads on specific factor i at every time point
upes.s1mbf.spe <- rep(1:13, times = 3)
test2 <- bfactor(data = na.omit(upes_s1m2[, 1:39]),
                 model = upes.s1mbf.spe,
                 model2 = upes.s1mbf,
                 technical = list(removeEmptyRows = TRUE),
                 accelerate = FALSE)
1. Is this a reasonable approximation to an item factor analysis model that would have used a separate factor for each scale at each time point, i.e., 9 factors in the first tier?
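For concreteness, the fuller 9-factor model I have in mind (which I have not tried to estimate) would look something like the sketch below; the factor names are just placeholders, and the COV line follows the same pairwise-covariance syntax I used above:

```r
library(mirt)

# Hypothetical 9-factor first tier: one factor per scale per time point.
# Shown only to clarify what the two-tier model is meant to approximate.
upes.9f <- mirt.model('
  T1S1 = 1-3
  T1S2 = 4-7
  T1S3 = 8-13
  T2S1 = 14-16
  T2S2 = 17-20
  T2S3 = 21-26
  T3S1 = 27-29
  T3S2 = 30-33
  T3S3 = 34-39
  COV = T1S1*T1S2*T1S3*T2S1*T2S2*T2S3*T3S1*T3S2*T3S3')
```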
2. I've attempted to calculate the M2 statistic for this model. Unfortunately, the R session is killed (or perhaps crashes) after a few minutes. Is the model simply too complex for M2, or could this be a memory issue?
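For reference, the call I've been running is essentially the default one below. The QMC line is a workaround I was considering based on the quadrature options in the M2() documentation, not something I've verified helps here:

```r
library(mirt)

# the call that kills the R session:
m2.fit <- M2(test2)

# possible workaround (untried): quasi-Monte Carlo integration, which may
# reduce the quadrature burden across the 16 latent dimensions
m2.fit <- M2(test2, QMC = TRUE)
```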
This is a follow-up to our email conversation about explanatory item response models. I'm still trying to get my head around the literature on these models; I haven't read your article yet, but I will. In the meantime, suppose I designed a brief scale to measure a construct specifically to rectify two problems shared by the existing scales for that construct. Now I want to examine how the two features that solve those problems actually affect the properties of the scale. So I run a 2x2 experiment in which participants, in a repeated-measures design, respond to different versions of the scale. Would I be able to use explanatory item response models to examine how the presence of a feature affects an item's slope parameter or item information curve?
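To make the question concrete, one analysis I can imagine, though I don't know whether it's the right framework, would treat feature presence as a grouping variable and test slope invariance with mirt's multipleGroup(). The variable names below are hypothetical, and this sketch ignores the within-subject dependence across scale versions:

```r
library(mirt)

# 'resp' = item responses to one version pair; 'feature1' = factor coding
# presence/absence of the first design feature (both names hypothetical)
fit.free  <- multipleGroup(resp, 1, group = feature1)
fit.equal <- multipleGroup(resp, 1, group = feature1,
                           invariance = 'slopes')

# likelihood-ratio comparison: do the slopes differ by feature?
anova(fit.equal, fit.free)
```

If the slope-constrained model fits significantly worse, that would suggest the feature changes item discrimination, which is the kind of effect I'm hoping an explanatory model could estimate more directly.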