Error returned by permuteMeasEq when testing threshold invariance


Ryan

Sep 25, 2019, 10:32:37 PM
to lavaan
Hi folks,

I am testing for measurement invariance across two gender groups using permuteMeasEq. My dataset comprises 800 responses to a 16-item instrument, with each item scored on a five-point Likert scale. On the basis of prior exploratory analyses, I am fitting a bifactor model with three specific factors plus a general factor. I am aware that measEq.syntax has only limited support for bifactor models, so I have tried to follow all the recommendations from the help files and examples. Based on my understanding of this exchange, I used ID.fac = "UV" and manually freed the latent variable means when generating syntax (by replacing zeroes with NAs in the "## LATENT MEANS/INTERCEPTS:" portion of the syntax).

I am able to run permuteMeasEq analyses that compare the configural model to a null model, and also test for metric and scalar invariance. I have additionally run models that are partially invariant at the scalar level. There are apparently no issues with model convergence (and AFIs, incidentally, are very good).

However, when I attempt to test for threshold invariance - second in the sequence, after the configural model - I receive the following error: "Error in A %*% P.inv : requires numeric/complex matrix/vector arguments". I am unsure what is causing this error.

I have attached the relevant syntax for reference, and am more than happy to provide any other useful information/files. I am running R version 3.5.0 on a Windows machine, with semTools 0.5-2 and lavaan 0.6-5.

Many thanks in advance,

Ryan
syntax.txt

Terrence Jorgensen

Sep 28, 2019, 9:16:51 AM
to lavaan
I used ID.fac = "UV" and manually freed latent variable means when generating syntax (by replacing zeroes with NAs in the "## LATENT MEANS/INTERCEPTS:" portion of the syntax).

You can't do that until thresholds, loadings, and intercepts are constrained to equality.  If you free them in the configural model, it is not identified.
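In other words, the latent means only become estimable at the scalar step. A hedged sketch, reusing the object names from the original post (bifactor.model, CFA) and otherwise mirroring the arguments used there:

```r
## Sketch only: latent means can be freed only once thresholds, loadings,
## AND intercepts are all constrained to equality across groups.
fit.scalar <- measEq.syntax(configural.model = bifactor.model, data = CFA,
                            ordered = TRUE, parameterization = "theta",
                            ID.fac = "UV", ID.cat = "Wu.Estabrook.2016",
                            group = "Gender",
                            group.equal = c("thresholds", "loadings", "intercepts"),
                            return.fit = TRUE)
## At this stage, measEq.syntax should free the latent means in the
## non-reference group automatically, so no manual editing of the
## generated syntax is needed.
```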

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Ryan

Sep 28, 2019, 10:24:58 AM
to lavaan
Thanks for your reply, Terrence. I will run the models without altering the syntax (until the constraints you mention are applied).

Ibrahim Nasser

Sep 28, 2019, 5:04:44 PM
to lav...@googlegroups.com
I used the permutation test to compare two models using the chi-square statistic and fit indices. One model is nested within the other. I have noticed that the SRMR p-value behaves very differently from all the other measures (chi-square, RMSEA, CFI). Perhaps it is because SRMR does not take the degrees of freedom into account, but I don't know. Is the SRMR recommended for a model comparison or should I trust p-values from other coefficients?


Terrence Jorgensen

Sep 29, 2019, 7:43:53 AM
to lavaan
Is the SRMR recommended for a model comparison or should I trust p-values from other coefficients?

I didn't see much of a difference in SRMR's behavior in my simulation, but I have noticed that the AIC had quite different p-values, which I haven't investigated further.  I'm certain there are conditions under which the different indices could yield different permutation results, but I don't know of anyone who has mapped those out, so I have no guidance to offer.

Ryan

Sep 30, 2019, 10:28:04 PM
to lavaan
Terrence, I just ran the analyses again without freeing the latent variable means (I generated syntax and fit models in a single step, as below). I still get the same error when using either anova() or permuteMeasEq() to conduct tests:

Error in A %*% P.inv : requires numeric/complex matrix/vector arguments

The relevant code, following from the syntax provided in my original post:

fit.config <- measEq.syntax(configural.model = bifactor.model, data = CFA,
                            ordered = TRUE, parameterization = "theta",
                            estimator = "WLSMV", ID.fac = "UV",
                            ID.cat = "Wu.Estabrook.2016", group = "Gender",
                            return.fit = TRUE)
fit.thresh <- measEq.syntax(configural.model = bifactor.model, data = CFA,
                            ordered = TRUE, parameterization = "theta",
                            estimator = "WLSMV", ID.fac = "UV",
                            ID.cat = "Wu.Estabrook.2016", group = "Gender",
                            group.equal = "thresholds", return.fit = TRUE)
out.thresh <- permuteMeasEq(nPermute = 1000, uncon = fit.config, con = fit.thresh,
                            param = "thresholds", AFIs = myAFIs,
                            moreAFIs = moreAFIs, null = fit.null,
                            parallelType = "none", iseed = 3141593)

I wonder what's happening here. Again, I do not encounter the error during the other steps of measurement invariance testing.

Terrence Jorgensen

Oct 2, 2019, 11:35:50 AM
to lavaan
I wonder what's happening here.

Me too.  I don't get an error with example data from semTools:

mod.cat <- ' FU1 =~ u1 + u2 + u3 + u4
             FU2 =~ u5 + u6 + u7 + u8 '

## configural model: no constraints across groups or repeated measures
syntax.config <- measEq.syntax(configural.model = mod.cat, data = datCat,
                               ordered = paste0("u", 1:8),
                               parameterization = "theta",
                               ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                               group = "g")
mod.config <- as.character(syntax.config)
fit.config <- cfa(mod.config, data = datCat, group = "g",
                  ordered = paste0("u", 1:8), parameterization = "theta")
## equal thresholds
fit.thresh <- measEq.syntax(configural.model = mod.cat, data = datCat,
                            ordered = paste0("u", 1:8),
                            parameterization = "theta",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            group = "g", group.equal = "thresholds",
                            return.fit = TRUE)
myAFIs <- c("chisq", "chisq.scaled", "df", "cfi.scaled", "tli.scaled",
            "rmsea.scaled", "srmr")
out.thresh <- permuteMeasEq(nPermute = 10, uncon = fit.config, con = fit.thresh,
                            param = "thresholds", AFIs = myAFIs, moreAFIs = NULL)
summary(out.thresh)


Again, I do not encounter the error during the other steps of measurement invariance testing.

Maybe the problem is that you didn't estimate thresholds (check your model summaries).  I didn't notice before, but you are setting ordered = TRUE instead of actually telling lavaan the names of your ordinal variables.
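A hedged sketch of the fix, reusing the arguments from the earlier post; "item1" through "item16" are placeholder names, since the actual indicator names are in the attached syntax file:

```r
## In lavaan 0.6-5, ordered expects a character vector of variable names,
## not TRUE. Replace the placeholder names with your 16 actual item names:
fit.config <- measEq.syntax(configural.model = bifactor.model, data = CFA,
                            ordered = paste0("item", 1:16),
                            parameterization = "theta", estimator = "WLSMV",
                            ID.fac = "UV", ID.cat = "Wu.Estabrook.2016",
                            group = "Gender", return.fit = TRUE)
```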

Ryan

Oct 8, 2019, 1:44:45 AM
to lavaan
Thanks again for the suggestions, Terrence. Unfortunately, neither of them solved the issue: thresholds were estimated in each model, and changing the ordered argument had no effect.

I tried to reproduce the example you gave, but with a bifactor structure in mod.cat (as is necessary in my applied case). Unfortunately, the model did not converge with the datCat dataset, so I found a reproducible dataset with a bifactor structure here. After importing it, I added a column for gender and renamed the data datCat. When I then ran a version of your code, I still got the "requires numeric/complex matrix/vector arguments" error.

Is it possible that the error is generated when any bifactor model is tested? Perhaps, if you can reproduce what I did, you might be able to determine the cause; I have included my code here:

library(devtools)  # provides source_url()
library(lavaan)
library(semTools)

gist <- "https://gist.github.com/ericpgreen/7091485/raw/f4daec526bd69557874035b3c175b39cf6395408/simord.R"
source_url(gist, sha1 = "da165a61f147592e6a25cf2f0dcaa85027605290")
mydata[, "g"] <- rep(c("M", "F"), each = 250)
mydata$g <- as.factor(mydata$g)
datCat <- mydata

mod.cat <- '
   f1 =~ v1 + v2 + v3
   f2 =~ v4 + v5 + v6
   f3 =~ v7 + v8 + v9
   g1 =~ v9 + v8 + v7 + v6 + v5 + v4 + v3 + v2 + v1
   '

## configural model: no constraints across groups or repeated measures
syntax.config <- measEq.syntax(configural.model = mod.cat, data = datCat,
                               ordered = paste0("v", 1:9),
                               parameterization = "theta", estimator = "WLSMV",
                               ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                               group = "g")
mod.config <- as.character(syntax.config)
fit.config <- cfa(mod.config, data = datCat, group = "g",
                  ordered = paste0("v", 1:9), parameterization = "theta")
## equal thresholds
fit.thresh <- measEq.syntax(configural.model = mod.cat, data = datCat,
                            ordered = paste0("v", 1:9),
                            parameterization = "theta", estimator = "WLSMV",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            group = "g", group.equal = "thresholds",
                            return.fit = TRUE)
myAFIs <- c("chisq", "chisq.scaled", "df", "cfi.scaled", "tli.scaled",
            "rmsea.scaled", "srmr")
out.thresh <- permuteMeasEq(nPermute = 10, uncon = fit.config, con = fit.thresh,
                            param = "thresholds", AFIs = myAFIs, moreAFIs = NULL)

Terrence Jorgensen

Oct 8, 2019, 11:42:25 AM
to lavaan
Thanks for the reproducible example.  I couldn't fit either model without identification issues, so I wouldn't expect permutation to go well.

Configural:

Warning messages:
1: In lav_model_vcov(lavmodel = lavmodel2, lavsamplestats = lavsamplestats,  :
  lavaan WARNING:
    Could not compute standard errors! The information matrix could
    not be inverted. This may be a symptom that the model is not
    identified.
2: In lav_test_satorra_bentler(lavobject = NULL, lavsamplestats = lavsamplestats,  :
  lavaan WARNING: could not invert information matrix needed for robust test statistic


Thresholds:

Warning messages:
1: In lav_model_vcov(lavmodel = lavmodel2, lavsamplestats = lavsamplestats,  :
  lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= 8.418815e-14) is close to zero. This may be a symptom that the
    model is not identified.


Does this occur with your own data, too?  The warning occurred for the configural model even after I set orthogonal = TRUE, which is recommended for bifactor models.
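If the additional arguments to measEq.syntax are passed through to lavaan (as its help page indicates), the orthogonality constraint can be set when generating the syntax. A sketch, reusing the objects from the reproducible example above:

```r
## Bifactor models are conventionally fit with orthogonal factors;
## orthogonal = TRUE should be forwarded to lavaan when fitting:
syntax.config <- measEq.syntax(configural.model = mod.cat, data = datCat,
                               ordered = paste0("v", 1:9),
                               parameterization = "theta", orthogonal = TRUE,
                               ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                               group = "g")
```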

Ryan

Oct 8, 2019, 8:15:16 PM
to lavaan
Aha! Thank you, Terrence - this could well be the issue at hand. I really should have done my due diligence and checked the warnings before posting (apologies!). I don't get the same issue with my data, but fitting my models does return the following:

1: In lav_model_vcov(lavmodel = lavmodel2, lavsamplestats = lavsamplestats,  ... :
  lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= 2.902788e-14) is close to zero. This may be a symptom that the
    model is not identified.

Still, I suspect my models are correctly identified. I wrote them on the basis of results from both an EFA (via fa) and an omega analysis, which were identical in structure. More importantly, they are grounded in theory and prior research, including output from the developers of the instrument I am analysing. With all that said, it certainly is possible that the model is not identified.

Searching for other examples where people have encountered the same warning in a similar context, I found this thread. Do you think it is appropriate to implement the potential fixes suggested by the respondent? There is more discussion of the same warning here, and it seems you and Yves have dealt with it in the context of EFAs here. It appears in this tutorial too. I am still a bit unsure how to proceed in my case, however - any advice would be greatly appreciated.

If you think it would help, I can send you my deidentified dataset - it seems that using reproducible ones can create different issues!

Ryan

Oct 8, 2019, 10:03:03 PM
to lavaan
I should add that if I run the same model without the general factor, I get the same warnings but do not encounter the problematic error (i.e., anova() and permuteMeasEq() run fine).