Hi
I am performing measurement invariance testing, and for some of my factors the fit-index differences have the opposite sign to the cut-offs reported in the literature when I compare the models.
For instance, I have a factor with 7 questions with ordinal data (4-point Likert scale), for which I am comparing three groups, each with ~800 people.
I have tried the following code:
library(lavaan)
library(semTools)
MI_round.model <- 'A =~ a1 + a2 + a3 + a4 + a5 + a6 + a7'
MI_round <- subset(fulldata,select=c(id,round,a1,a2,a3,a4,a5,a6,a7))
#Config
MI_round_a.config <- measEq.syntax(configural.model = MI_round.model, data = MI_round, ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta", ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016", group = "round")
mod_round_a.config <- as.character(MI_round_a.config)
fit_round_a.config <- cfa(mod_round_a.config, data = MI_round, group="round", ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta")
summary(fit_round_a.config, fit.measures=TRUE)
#Metric
MI_round_a.metric <- measEq.syntax(configural.model = MI_round.model, data = MI_round, ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta", ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016", group = "round", group.equal="loadings")
mod_round_a.metric <- as.character(MI_round_a.metric)
fit_round_a.metric <- cfa(mod_round_a.metric, data = MI_round, group="round", ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta")
summary(fit_round_a.metric, fit.measures=TRUE)
#Scalar
MI_round_a.scalar <- measEq.syntax(configural.model = MI_round.model, data = MI_round, ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta", ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016", group = "round", group.equal=c("thresholds","loadings"))
mod_round_a.scalar <- as.character(MI_round_a.scalar)
fit_round_a.scalar <- cfa(mod_round_a.scalar, data = MI_round, group="round", ordered=c("a1","a2","a3","a4","a5","a6","a7"), parameterization = "delta")
summary(fit_round_a.scalar, fit.measures=TRUE)
These models all have good fit on their own. However, when I compare them to test invariance (metric minus configural for the first comparison, scalar minus metric for the second, using compareFit()), some of the differences have the opposite sign to the cut-offs in the literature.
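For reference, the comparison step I ran looks roughly like this (compareFit() is from semTools; the manual subtraction at the end is just a sanity check I added, and I am assuming the scaled indices are the relevant ones, since lavaan defaults to WLSMV estimation with ordered indicators):

```r
library(semTools)  # also loads lavaan

# Compare the nested models: prints the chi-square difference tests
# and the changes in fit indices between adjacent models.
comparison <- compareFit(fit_round_a.config, fit_round_a.metric, fit_round_a.scalar)
summary(comparison)

# Sanity check: compute the metric-vs-configural differences
# by hand from the scaled fit indices.
delta_cfi   <- fitMeasures(fit_round_a.metric, "cfi.scaled") -
               fitMeasures(fit_round_a.config, "cfi.scaled")
delta_rmsea <- fitMeasures(fit_round_a.metric, "rmsea.scaled") -
               fitMeasures(fit_round_a.config, "rmsea.scaled")
```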
Metric - configural: ΔCFI = 0.006, ΔRMSEA = -0.020
Scalar - metric: ΔCFI = -0.006, ΔRMSEA = -0.003
As I understand it, the cut-offs most often recommended in the literature for these differences are ΔCFI ≤ -0.01 and ΔRMSEA ≥ 0.015. I am very confused as to why three of my differences have the opposite sign from the recommended cut-offs, and I have no idea how to interpret this or where the problem might lie.
I would really appreciate any help and insight on this.
Thank you so much in advance
Zoë