permuteMeasEq example error: "Error in exists(as.character(call$model)) : first argument has length > 1"


Erik O'Donnell

Dec 20, 2021, 10:11:47 AM
to lavaan
I wasn't able to get permuteMeasEq to work with my model, so I tried running through the example code in the documentation for permuteMeasEq (code found at the bottom of ?permuteMeasEq). 

I pasted the code into RStudio and ran the script down to "out.config <- permuteMeasEq(nPermute = 20, con = fit.config)", which produced the following error (R 4.1.2, semTools_0.5-5, lavaan_0.6-9):


  |                                                  |   0%
Error in exists(as.character(call$model)) : first argument has length > 1
  [the same error is printed once per permutation; repeats omitted]
Error in permuteOnce.mgcfa(con = new("lavaan", version = "0.6.9", call = lavaan::lavaan(model = list( :
  object 'out0' not found

This is the same error I got with my own code, so I'm guessing this could be a bug in permuteMeasEq?

Best regards,
Erik O'Donnell

Terrence Jorgensen

Dec 21, 2021, 6:58:02 AM
to lavaan
Please share your syntax.  Perhaps your model was not saved as a named object in your workspace, but a string was specified directly in the call?  That would cause trouble when permuteMeasEq() looks up the same model= argument to refit the model to permuted data.
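A minimal base-R sketch of the failure mode (an editorial reconstruction, not semTools' actual internals; the function f() and the model strings below are made up for illustration): the fitted object's stored call is inspected for its model= argument, and the result is passed to exists(), which needs a single object name.

```r
## Sketch: retrieve the 'model' argument from a captured call, the way
## permuteMeasEq() plausibly does before calling exists() on it.
## When the model is a named object, as.character() yields one name that
## exists() can check; when the stored call holds a multi-element
## expression, as.character() yields a vector, and exists() rejects a
## first argument of length > 1 (consistent with the error reported
## above on R 4.1.x).
f <- function(model) as.character(match.call()$model)

mod <- 'visual =~ x1 + x2 + x3'
f(mod)                     # "mod": a single name, safe to pass to exists()
f(list("a ~ b", "c ~ d"))  # length > 1: exists() would error on this
```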

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Erik O'Donnell

Dec 21, 2021, 8:35:14 AM
to lav...@googlegroups.com
Hi Terrence,

The code below is the code from the ?permuteMeasEq example. When I run it on R version 4.0.3 (semTools 0.5-5, lavaan 0.6-9) it works as expected.

However, when I run it on R version 4.1.2 (semTools 0.5-5, lavaan 0.6-9), I get the error. That is, the same code runs or fails depending on the R version I use.

Here is the code, with no other variables in the environment, after a fresh restart of RStudio and the R session:


library(lavaan)
library(semTools)
########################
## Multiple-Group CFA ##
########################

## create 3-group data in lavaan example(cfa) data
HS <- lavaan::HolzingerSwineford1939
HS$ageGroup <- ifelse(HS$ageyr < 13, "preteen",
                      ifelse(HS$ageyr > 13, "teen", "thirteen"))

## specify and fit an appropriate null model for incremental fit indices
mod.null <- c(paste0("x", 1:9, " ~ c(T", 1:9, ", T", 1:9, ", T", 1:9, ")*1"),
              paste0("x", 1:9, " ~~ c(L", 1:9, ", L", 1:9, ", L", 1:9, ")*x", 1:9))
fit.null <- cfa(mod.null, data = HS, group = "ageGroup")

## fit target model with varying levels of measurement equivalence
mod.config <- '
visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9
'
miout <- measurementInvariance(model = mod.config, data = HS, std.lv = TRUE,
                               group = "ageGroup")

(fit.config <- miout[["fit.configural"]])
(fit.metric <- miout[["fit.loadings"]])
(fit.scalar <- miout[["fit.intercepts"]])


####################### Permutation Method

## fit indices of interest for multiparameter omnibus test
myAFIs <- c("chisq","cfi","rmsea","mfi","aic")
moreAFIs <- c("gammaHat","adjGammaHat")

## Use only 20 permutations for a demo.  In practice,
## use > 1000 to reduce sampling variability of estimated p values

## test configural invariance
set.seed(12345)
out.config <- permuteMeasEq(nPermute = 20, con = fit.config)                       #<-- this line produces the error

--
You received this message because you are subscribed to a topic in the Google Groups "lavaan" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/lavaan/VABEwZgdKlg/unsubscribe.
To unsubscribe from this group and all its topics, send an email to lavaan+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/lavaan/8c7d8329-9e7a-4da7-b6e0-569c22b2c424n%40googlegroups.com.

Erik O'Donnell

Dec 22, 2021, 7:04:57 AM
to lav...@googlegroups.com
I tried running the example code on a different computer, using R 4.1.1, and I get the same error.

Terrence Jorgensen

Jan 2, 2022, 9:09:41 PM
to lavaan
The problem is that the fit.config object in that syntax was not generated from a typical lavaan() call, in which the model= argument is an object that can be found in your workspace.  Without relying on the deprecated measurementInvariance() function (now removed from the example in the development version of semTools), there is no problem:

out <- cfa(model = mod.config, data = HS, std.lv = TRUE, group = "ageGroup")
permuteMeasEq(nPermute = 20, con = out) # no error

Erik O'Donnell

Jan 3, 2022, 1:31:09 PM
to lav...@googlegroups.com
Happy new year! That did fix the problem, thanks :-) Here is another error for you though (R 4.0.3 and R 4.1.2; semTools_0.5-5, lavaan_0.6-9), from using parallelType="snow":

Code:

library(semTools)


## create 3-group data in lavaan example(cfa) data
HS <- lavaan::HolzingerSwineford1939
HS$ageGroup <- ifelse(HS$ageyr < 13, "preteen",
                      ifelse(HS$ageyr > 13, "teen", "thirteen"))

## fit target model with varying levels of measurement equivalence
mod.config <- '
visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9
'

out <- cfa(model = mod.config, data = HS, std.lv = TRUE, group = "ageGroup")

permuteMeasEq(nPermute = 20, con = out) # no error
permuteMeasEq(nPermute = 20, con = out, parallelType = "snow") # error

Error message:
Your RNGkind() was changed from Mersenne-Twister to L'Ecuyer-CMRG, which is required for reproducibility in parallel jobs. Your RNGkind() has been returned to Mersenne-Twister but the seed has not been set. The state of your previous RNG is saved in the slot named 'oldSeed', if you want to restore it using the syntax:
.Random.seed[-1] <- permuteMeasEqObject@oldSeed[-1]
No AFIs were selected, so only chi-squared will be permuted.

Error in get(name, envir = envir) : object 'permuteOnce.mgcfa' not found



This is on Windows, but just to be sure, here is the error message from using parallelType = "multicore":

'multicore' option unavailable on Windows. Using 'snow' instead.
No AFIs were selected, so only chi-squared will be permuted.

Error in get(name, envir = envir) : object 'permuteOnce.mgcfa' not found


Terrence Jorgensen

Jan 4, 2022, 12:11:56 PM
to lavaan
Here is another error for you though (R 4.0.3 and R 4.1.2; semTools_0.5-5, lavaan_0.6-9), from using parallelType="snow":

This is a known issue.  Don't hold your breath, sorry.

Erik O'Donnell

Jan 5, 2022, 10:11:02 AM
to lav...@googlegroups.com
This is a known issue.  Don't hold your breath, sorry.

No worries, semTools is great software and I love using it :-) I wish I had the stats and programming skills to contribute rather than complain! 


Erik O'Donnell

Jan 5, 2022, 11:03:00 AM
to lavaan
By the way, do you have any suggestions for interpreting AFIs from permuteMeasEq when some are significant and others are not? For example, between males and females I get (nPermute=1000):

             AFI.Difference p.value
rmsea.robust          0.089   0.978
srmr                  0.053   0.000
cfi.robust            0.894   0.000
tli.robust            0.879   0.000

Between two groups "experienced vs not experienced" (nPermute=1000):

             AFI.Difference p.value
rmsea.robust          0.088   0.979
srmr                  0.053   0.000
cfi.robust            0.899   0.958
tli.robust            0.885   0.958

I did not expect SRMR, CFI, and TLI to have p < 0.001 while RMSEA has p = 0.978, nor a pattern where everything is p > 0.95 except SRMR at p < 0.001. Any suggestions for how to think about this?

Sample size ~ 2500, I let it run overnight :-)

Terrence Jorgensen

Jan 10, 2022, 4:57:04 PM
to lavaan
do you have any suggestions for interpreting AFIs from permuteMeasEq when some are significant and others are not? 

No, those are strange patterns.  Try taking a look at the plots from hist() (see the class?permuteMeasEq help page).  Maybe my function is not comparing the correct stats when there is a "robust" suffix or something.  You can also inspect the @AFI.obs and @AFI.dist slots of the permuteMeasEq object and calculate the p value manually to check for a discrepancy.
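A sketch of that manual check (the vectors here are simulated placeholders standing in for slots such as out@AFI.dist[, "cfi.robust"] and out@AFI.obs["cfi.robust"], not real output):

```r
## Simulated stand-ins for the slots of a permuteMeasEq result:
## AFI.dist = permutation distribution of a fit-index difference,
## AFI.obs  = the observed between-group difference.
set.seed(1)
AFI.dist <- abs(rnorm(1000, mean = 0.02, sd = 0.01))
AFI.obs  <- 0.05

## one-tailed permutation p value: the proportion of permuted
## differences at least as extreme as the observed one
p.manual <- mean(AFI.dist >= AFI.obs)
p.manual
```

If this hand-computed value disagrees with the p value printed by summary(), that would point to the function comparing the wrong stats.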

Erik O'Donnell

Jan 11, 2022, 10:04:38 AM
to lav...@googlegroups.com
The plots from hist() and the values in @AFI.obs and @AFI.dist confirm the strange pattern in the output; it doesn't seem like the wrong stats are being compared.

I get the same pattern (e.g. one significant, the others p < 0.001) for the non-robust versions of RMSEA, CFI and TLI. I tried some different estimators (MLR, ML and MLM), same result.

I tried to find some relevant papers. Lai and Green (https://www.tandfonline.com/doi/full/10.1080/00273171.2015.1134306) argue that RMSEA and CFI are not well understood, and they show that CFI and RMSEA can sometimes lead to different interpretations of fit. Maybe this kind of disagreement between fit indices is present in my model somehow and plays out in the empirical distributions?

The fit of my model is mediocre by conventional cut-offs, with very high latent correlations (which is why I wanted to use permuteMeasEq in the first place; I find it very hard to interpret changes in configural model fit when the model doesn't fit super well to begin with).




Terrence Jorgensen

Jan 11, 2022, 11:54:57 AM
to lavaan

The plots from hist() and the values in @AFI.obs and @AFI.dist confirm the strange pattern in the output, it doesn't seem like wrong stats are being compared.

I get the same pattern (e.g. one significant, the others p < 0.001) for the non-robust versions of RMSEA, CFI and TLI. I tried some different estimators (MLR, ML and MLM), same result.

Can you verify that the @AFI.dist differs for rmsea and rmsea.robust?  That's what I had in mind about mistakenly saving the wrong indices.

My simulations respected the exchangeability assumption, which is a limitation I noted in the original paper.  I planned to probe further, but my grant proposal was rejected and I changed to a new topic, so I have not returned to permutation for a few years.  Now I'm wondering whether exchangeability has something to do with this.  I only expected violations of exchangeability to affect tests of metric and scalar invariance, but perhaps they could affect the configural test as well.  Can you post your group-specific sample stats? lavInspect(fit, "sampstat")

Erik O'Donnell

Jan 14, 2022, 1:09:29 PM
to lav...@googlegroups.com
Can you verify that the @AFI.dist differs for rmsea and rmsea.robust?  That's what I had in mind about mistakenly saving the wrong indices.

Yes, @AFI.dist is different for rmsea and rmsea.robust. I also checked cfi, which is different from cfi.robust.

Can you post your group-specific sample stats?

I am doing quite a few MI tests. Group sizes range from roughly equal to quite different; some examples (number of respondents per group) are: 371 & 2086, 687 & 1943, 1000 & 1415.

Do you think there could be a connection between group size differences and the strange AFI pattern? I could check whether this is the case in my data...



Terrence Jorgensen

Jan 25, 2022, 5:48:23 PM
to lavaan
Do you think there could be a connection between group size differences and the strange AFI pattern? I could check whether this is the case in my data...

It would be worth checking.  But I can't currently guess what the mechanism would be.