Getting the same Chi-square and df when comparing thresholds and configural models


ahmad

Jun 26, 2025, 9:22:07 AM
to lavaan
Hi,
I'm using measEq.syntax() to test for measurement invariance. However, I'm getting the same chi-square test statistic and degrees of freedom for both the configural and thresholds models. I double-checked my code, and I have correctly constrained the thresholds to be equal across groups.
Here is the code I'm using:
syntax.thresh <- measEq.syntax(configural.model = measurement_model_neuro,
                               # NOTE: data provides info about numbers of
                               # groups and thresholds
                               data = data10233,
                               ordered = c("emotional_abuse_YN_8y12y", "emotional_Neglect_YN_8y17y",
                                           "physical_Abuse_YN_8y12y", "sexual_abuse_YN_11y18y",
                                           "financial_diff_YN_8y12y", "parents_suicide_YN_7y11y",
                                           "parent_convicted_YN_9y12y", "parental_separation_YN_8y11y",
                                           "substance_household_YN_7y11y", "violence_between_parents_YN_8y11y",
                                           "overt_bullying8y12y_any_binary", "respiratory", "skin",
                                           "cardiovascular", "gasterointestinal", "disputed_aetiology",
                                           "other_medical_conditions", "inflamatory_immune_system",
                                           "kq321", "kq329", "kq334", "kq340", "kq344"),
                               parameterization = "theta",
                               ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                               group = "broad_group",
                               group.equal = "thresholds")

Here is the result:
lavaan->lavTestLRT(): lavaan NOTE:
    The "Chisq" column contains standard test statistics, not the robust test
    that should be reported per model. A robust difference test is a function
    of two standard (not robust) statistics.

            Df    AIC    BIC  Chisq      Chisq diff Df diff Pr(>Chisq)
fit.config 582               1508.3
fit.thresh 582               1508.3  -0.0000061871       0

Warning messages:
1: lavaan->lavTestLRT(): Some restricted models fit better than less restricted
   models; either these models are not nested, or the less restricted model
   failed to reach a global optimum.
   Smallest difference = -0.0000061871076013631

Additionally, when I fit the configural and thresholds models without using measEq.syntax(), just specifying group = "broad_group" and group.equal = "thresholds", I observed different results:
with group.equal = "thresholds":
Model Test User Model:
                           Standard   Scaled
  Test Statistic           1719.691 1415.315
  Degrees of freedom            587      587
  P-value (Chi-square)        0.000    0.000

with measEq.syntax():
Model Test User Model:
                           Standard   Scaled
  Test Statistic           1508.277 1264.518
  Degrees of freedom            582      582
  P-value (Chi-square)        0.000    0.000


I get different results for the thresholds model, while the configural model results are identical whether or not I use measEq.syntax(). Thus, I am wondering whether the results are correct, and whether the difference I see is due to ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016"?

Best,
Ahmad

Edward Rigdon

Jun 26, 2025, 10:31:11 AM
to lav...@googlegroups.com
Wu and Estabrook showed that identification in a multigroup model with ordinal variables can be achieved in several different ways. Imposing a new constraint may allow other constraints to be relaxed while still achieving identification. The ID.cat = "Wu.Estabrook.2016" setting allows this, so you must specifically request new constraints along with old constraints. Here is some sample syntax, borrowing from the lavaan documentation (using the Holzinger-Swineford dataset), that I used in my class. I think you can see how to adapt it to your situation. Note that this syntax uses the "delta" parameterization, not "theta":

## MK 9200 Spring 2025 Week 10 Syntax
## Ordinal data and multigroup analysis, following Wu & Estabrook

# load packages
library(lavaan)
library(semTools)

# use built-in Holzinger-Swineford (1939) data
data <- HolzingerSwineford1939
summary(data)

# select the 9 variables, leaving aside the grouping variable "school",
# in preparation for converting data to ordinal for this demonstration
# PS: DON'T convert continuous data to ordinal except under
# special circumstances
HStri <- data[, c(7:15)]

# Reduce continuous data to ordinal with 3 response categories
HStri <- as.data.frame(lapply(HStri[, 1:9], cut, 3, labels = FALSE))

# Bring back the school grouping variable--did not want to corrupt that
HStri$school <- data$school

str(HStri)

# See what this turned x1 and x2 into
with(HStri, table(x1, x2))

 

# CFA model
HSbasemodel <- '
  Spatial =~ x1 + x2 + x3
  Verbal  =~ x4 + x5 + x6
  Speed   =~ x7 + x8 + x9'

## Recall the original result, single group
originalone.out <- sem(model = HSbasemodel, data = data)
summary(originalone.out, fit = TRUE)

# original result, multigroup by school
originalMG.out <- sem(model = HSbasemodel, data = data, group = "school")
summary(originalMG.out, fit = TRUE)

# Pretend it is continuous and homogeneous (single group)
cont.out <- sem(model = HSbasemodel, data = HStri)
summary(cont.out)

# Tell lavaan the data are ordinal, still single-group
ord.out <- sem(model = HSbasemodel, data = HStri,
               ordered = paste0("x", 1:9))
summary(ord.out)

 

# Use measEq.syntax() for configural invariance--the long way
syntax.config <- measEq.syntax(configural.model = HSbasemodel, data = HStri,
                               ordered = paste0("x", 1:9),
                               parameterization = "delta",
                               ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                               group = "school")

# Coerce the syntax object to a character string
HStri.config <- as.character(syntax.config)

# Look at it--note the end-of-line markers
HStri.config

# Use that syntax as the model:
# multigroup analysis with school as grouping variable,
# delta parameterization
HStri.config.out <- cfa(model = HStri.config, data = HStri, group = "school",
                        ordered = paste0("x", 1:9),
                        parameterization = "delta")

summary(HStri.config.out)

 

# For the rest of these steps, do it the short way--create and run

# Invariance of thresholds
# Notice that DF does NOT change--
# new constraints are added while others are relaxed.
# So invariance of thresholds is equivalent to configural invariance
fit.thresh <- measEq.syntax(configural.model = HSbasemodel, data = HStri,
                            parameterization = "delta",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            group = "school",
                            ordered = paste0("x", 1:9),
                            group.equal = c("thresholds"), return.fit = TRUE)

summary(fit.thresh)

 

# Invariance of thresholds and loadings
fit.weak <- measEq.syntax(configural.model = HSbasemodel, data = HStri,
                          parameterization = "delta",
                          ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                          ordered = paste0("x", 1:9),
                          group = "school",
                          group.equal = c("thresholds", "loadings"),
                          return.fit = TRUE)

summary(fit.weak)

# Invariance of thresholds, loadings and intercepts
fit.strong <- measEq.syntax(configural.model = HSbasemodel, data = HStri,
                            parameterization = "delta",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            ordered = paste0("x", 1:9),
                            group = "school",
                            group.equal = c("thresholds", "loadings", "intercepts"),
                            return.fit = TRUE)

summary(fit.strong)

# Though the first two models are specified differently, they are equivalent
anova(fit.thresh, HStri.config.out)

# Chi-square differences to assess the nested constraints.
# We report robust chi-square, but difference tests are always
# based on un-modified chi-square statistics
anova(fit.strong, fit.weak, HStri.config.out)



--
You received this message because you are subscribed to the Google Groups "lavaan" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lavaan+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/lavaan/f3104769-34e9-417e-bbc5-c1329dbdee4dn%40googlegroups.com.

ahmad

Jun 26, 2025, 1:02:43 PM
to lavaan
Hello Edward,

Thank you for your response. I followed your guidance and conducted the measurement invariance analysis exactly as described in the semTools documentation. However, I obtained the same Chi-square value and degrees of freedom for both the configural and thresholds models. I'm unsure whether this result is correct.

BW,
Ahmad

Edward Rigdon

Jun 26, 2025, 2:45:42 PM
to lav...@googlegroups.com
If you have multiple indicators for each common factor, then threshold invariance and "weak invariance" should not have the same DF. "Weak invariance" constrains thresholds and loadings, while threshold invariance constrains only thresholds. So if DF is the same across the two models, then, in the background, lavaan has relaxed some other constraints when you requested constrained loadings.
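This reasoning can be sketched as a rough back-of-the-envelope df count (my own bookkeeping, not semTools output): equating a factor's loadings across G groups adds one constraint per indicator per extra group, while identification then allows freeing that factor's variance in the G - 1 non-reference groups. So for a factor with p indicators, the weak model should gain roughly (G - 1) * (p - 1) df over the thresholds-only model:

```r
# Hedged sketch of df bookkeeping for loading invariance (assumed accounting):
# (G-1)*p loading constraints added, (G-1)*1 factor variances freed per factor.
weak_df_gain_per_factor <- function(p, G = 2) {
  (G - 1) * (p - 1)
}

weak_df_gain_per_factor(p = 3)  # 2 extra df per 3-indicator factor, 2 groups
weak_df_gain_per_factor(p = 1)  # 0: a single indicator adds nothing testable
```

If the observed df difference falls short of this count, lavaan (or the identification scheme) has relaxed other constraints in the background, as described above.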

Terrence Jorgensen

Jun 27, 2025, 5:06:19 AM
to lavaan
I obtained the same Chi-square value and degrees of freedom for both the configural and thresholds models.

That is expected if your ordinal variables have < 4 categories.  With 3 categories, threshold invariance is statistically equivalent to any configural model.  With only 2 categories, you cannot distinguish differences in thresholds, loadings, or intercepts, so you need to constrain them all at once (as the help-page example shows for binary items).
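The per-item arithmetic behind this can be sketched as follows (my own rough accounting under the delta parameterization, not output from semTools): equating an item's K - 1 thresholds across G groups saves (G - 1) * (K - 1) df, but identification then lets the item's intercept and scale factor be freed in the G - 1 non-reference groups:

```r
# Hedged sketch: net df gained per item when thresholds are equated across
# groups under Wu & Estabrook (2016) identification, delta parameterization.
thresh_df_gain_per_item <- function(K, G = 2) {
  saved <- (G - 1) * (K - 1)  # threshold equality constraints added
  freed <- (G - 1) * 2        # intercept + scale factor freed per extra group
  saved - freed
}

thresh_df_gain_per_item(K = 3)  # 0: equivalent to the configural model
thresh_df_gain_per_item(K = 4)  # 1: now a testable restriction
thresh_df_gain_per_item(K = 2)  # -1: binary items need combined constraints
```

With K = 3 the constraints added and parameters freed cancel exactly, which is why the configural and thresholds models return identical chi-square values and df.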

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam
http://www.uva.nl/profile/t.d.jorgensen

ahmad

Jun 27, 2025, 5:35:46 AM
to lavaan
Thank you, Dr. Jorgensen, for your response.

My measurement model includes a mix of indicator types: four continuous indicators for one independent latent factor, five ordinal (0, 1, 2) indicators for another independent latent variable, ten binary indicators for a latent mediator, and seven binary indicators for the dependent latent variable. As such, it is a mixed measurement model, and, as you mentioned, it becomes more complex, particularly since I need to model the binary indicators simultaneously.
I'm currently unsure how to properly conduct measurement invariance testing for this type of mixed measurement model. Any guidance or recommendations would be greatly appreciated.

Best wishes,
Ahmad

Terrence Jorgensen

Jul 1, 2025, 5:00:15 PM
to lavaan
I'm currently unsure how to properly conduct measurement invariance testing for this type of mixed measurement model. 

There is no guidance (in the scientific literature) that I am aware of.  I would evaluate invariance separately per factor, imposing invariance constraints in the order appropriate for those indicators' measurement scale.

Terrence D. Jorgensen    (he, him, his)
