I get this error:

> compareFit(meq.list)
Error in TEST[[2]] : subscript out of bounds

Any idea of what I should do differently?
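For reference, compareFit() expects fitted lavaan objects (or a list of them) for nested models. A minimal sketch of that usage, using lavaan's built-in HolzingerSwineford1939 data and an invented one-factor model rather than the models from this thread:

  library(lavaan)
  library(semTools)

  mod <- ' visual =~ x1 + x2 + x3 '

  # nested multi-group models for measurement invariance
  fit.config <- cfa(mod, data = HolzingerSwineford1939, group = "school")
  fit.metric <- cfa(mod, data = HolzingerSwineford1939, group = "school",
                    group.equal = "loadings")
  fit.scalar <- cfa(mod, data = HolzingerSwineford1939, group = "school",
                    group.equal = c("loadings", "intercepts"))

  meq.list <- list(config = fit.config, metric = fit.metric, scalar = fit.scalar)
  summary(compareFit(meq.list))   # nested model comparisons and fit indices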
devtools::install_github("simsem/semTools/semTools")
I believe I am.
There are warnings about non-positive-definite variances... I guess that's why there are no results and an out-of-bounds error?
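One way to check that, as a minimal sketch (assuming `fit` is one of your fitted lavaan objects; with multiple groups, lavInspect() returns one matrix per group):

  library(lavaan)
  lv.cov <- lavInspect(fit, "cov.lv")              # model-implied latent (co)variances
  if (is.list(lv.cov)) {
    lapply(lv.cov, function(m) eigen(m)$values)    # one set of eigenvalues per group
  } else {
    eigen(lv.cov)$values                           # negative values flag a non-PD matrix
  }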
João Marôco
[Sent from my not-that-smart smartphone, with more errors than usual]
Dear Terrence,
I installed the latest versions of both semTools and lavaan from GitHub.
I still get an error, along with these warnings:
Warning messages:
1: In lav_model_vcov(lavmodel = lavmodel2, lavsamplestats = lavsamplestats, :
lavaan WARNING:
Could not compute standard errors! The information matrix could
not be inverted. This may be a symptom that the model is not
identified.
2: In lav_test_satorra_bentler(lavobject = NULL, lavsamplestats = lavsamplestats, :
lavaan WARNING: could not invert information matrix
The models seem to be unidentified? I wonder whether ordinal items need some other identification procedure (thresholds, or similar), or whether it is just a problem with sample size (n = 390, second-order factor with 15 ordinal items, 8 groups) and polychoric correlation estimation…
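For what it is worth, one way to handle threshold identification for ordinal indicators across groups is semTools::measEq.syntax(). A minimal sketch, assuming hypothetical data `dat` with ordinal items y1-y4 on one factor and a grouping variable `g` (not the attached data):

  library(lavaan)
  library(semTools)

  mod <- ' f =~ y1 + y2 + y3 + y4 '

  # generate lavaan syntax with thresholds identified (Wu & Estabrook, 2016)
  syntax.config <- measEq.syntax(configural.model = mod, data = dat,
                                 ordered = paste0("y", 1:4),
                                 ID.fac = "std.lv",
                                 ID.cat = "Wu.Estabrook.2016",
                                 group = "g", group.equal = "configural")
  cat(as.character(syntax.config))   # inspect the generated identification constraints

  fit.config <- cfa(as.character(syntax.config), data = dat,
                    ordered = paste0("y", 1:4), group = "g")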
I attach the .rds data file, if you care to try…
Thanks in advance!
João
From: lav...@googlegroups.com <lav...@googlegroups.com> On Behalf Of Terrence Jorgensen
Sent: 27 June 2019 22:39
To: lavaan <lav...@googlegroups.com>
> or if it is just a problem with sample size (n = 390, second-order factor with 15 ordinal items, 8 groups) and polychoric correlation estimation…
No, no... 390 per group... But, yes... for measurement invariance with WLSMV it does not suffice. It works fine with ML, though...
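For completeness, a minimal sketch of the two estimation routes being contrasted here, again assuming hypothetical data `dat` with items y1-y4 and a grouping variable `g`:

  library(lavaan)

  mod <- ' f =~ y1 + y2 + y3 + y4 '

  # categorical treatment: polychoric correlations, DWLS with robust corrections (WLSMV)
  fit.cat <- cfa(mod, data = dat, group = "g",
                 ordered = paste0("y", 1:4), estimator = "WLSMV")

  # treating the items as continuous: robust maximum likelihood
  fit.ml  <- cfa(mod, data = dat, group = "g", estimator = "MLR")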
Thanks for your insights,
João Marôco
[Sent from my not-that-smart smartphone, with more errors than usual]