# Two-level CFA and restriction of error variance of the between level


Feb 7, 2019, 7:04:18 AM
to lavaan

I'm wondering why it might make sense to restrict the error variances at the between level in a two-level CFA (or SEM).

This is done in Example 9.6 in the MPLUS manual and also repeated (marked as optional) in a slide-deck by Yves Rosseel on Multilevel Structural Equation Modeling with lavaan.

I'm using lavaan, and I was wondering when this constraint makes sense and what it implies.

I sometimes get negative error variances at the between level (i.e., Heywood cases), so a well-justified constraint would be useful for me, but I don't want to apply it without understanding its rationale.

I have data on 15 items that I assume belong to 3 factors, and 20 clusters with about 2000 observations (distributed more or less evenly across the clusters, but not equally).
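For reference, a two-level CFA in lavaan is specified with `level: 1` and `level: 2` blocks. A minimal runnable sketch using lavaan's built-in `Demo.twolevel` dataset (the factor and variable names come from that demo data, not from my actual 15-item/3-factor model):

```r
library(lavaan)

# two-level CFA: the same indicators load on a within-level
# and a between-level factor
model <- '
level: 1
  fw =~ y1 + y2 + y3
level: 2
  fb =~ y1 + y2 + y3
'

# the cluster argument tells lavaan which column defines level 2
fit <- cfa(model, data = Demo.twolevel, cluster = "cluster")
summary(fit)
```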

### Chandra Sekhar Singh

Feb 7, 2019, 7:47:44 AM
Hi.
I need the MPLUS software for my PhD data analysis.
I am using multilevel modeling in my research. Please help in this regard.


--

Best regards,

Chandra Sekhar Singh
PhD Scholar (Management)
ABV-Indian Institute of Information Technology & Management, Gwalior, M.P. India (Institute of National Importance)
Contact No. +91-9425718177

Feb 7, 2019, 10:46:14 AM
to lavaan
Maybe to add on my current model: the error variances in the between-level model are extremely small:

```
Variances:
                Estimate  Std.Err  z-value  P(>|z|)
   .x1             0.001    0.001    0.789    0.430
   .x2            -0.001    0.001   -0.921    0.357
   .x3            -0.002    0.001   -2.772    0.006
   .x4             0.000    0.001    0.041    0.967
   .x5            -0.000    0.001   -0.590    0.555
   .x6            -0.000    0.001   -0.117    0.907
   .x7             0.001    0.001    0.623    0.533
   .x8            -0.001    0.001   -1.572    0.116
   .x9             0.000    0.001    0.314    0.754
   .x10           -0.000    0.001   -0.383    0.702
   .x11           -0.001    0.000   -2.440    0.015
   .x12           -0.001    0.000   -2.265    0.024
   .x13            0.001    0.001    0.764    0.445
   .x14           -0.001    0.001   -1.532    0.125
   .x15           -0.000    0.000   -1.228    0.219
```

### Terrence Jorgensen

Feb 8, 2019, 6:37:24 AM
to lavaan

> I sometimes get negative error variances at the between level (i.e., Heywood cases), so a well-justified constraint would be useful for me, but I don't want to apply it without understanding its rationale.

If your items meet scalar ("strong") invariance across clusters, that implies the residual variances are 0 at the between level.
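To impose that implication directly (as the optional constraint in Mplus Example 9.6 does), the between-level residual variances can be fixed to zero in the lavaan model syntax. A sketch using lavaan's built-in `Demo.twolevel` data (variable names belong to that demo dataset, not to the original poster's model):

```r
library(lavaan)

# fix the between-level residual variances to 0, as implied by
# scalar ("strong") measurement invariance across clusters
model <- '
level: 1
  fw =~ y1 + y2 + y3
level: 2
  fb =~ y1 + y2 + y3
  y1 ~~ 0*y1
  y2 ~~ 0*y2
  y3 ~~ 0*y3
'

fit <- cfa(model, data = Demo.twolevel, cluster = "cluster")
summary(fit)
```

This also removes the possibility of negative (Heywood) estimates for those residuals, since they are no longer free parameters.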

Most of the estimates do not significantly differ from 0 (the one exception might be a Type I error: 1 significant result out of 15 tests would not be a surprise).

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Feb 9, 2019, 8:50:19 AM
to lavaan

Thank you for your excellent reference. I very much enjoyed reading it. The link between multilevel SEM and multigroup SEM was actually what resolved my questions.

I indeed have strong/scalar invariance in the multigroup setting.

### Chao Xu

Feb 9, 2019, 9:01:38 PM
to lavaan
Hi,

I have an issue: if the error variances at the between level are not constrained to 0, my model becomes non-identified. What could this problem be due to? Would it be possible for you to share your model syntax? Thank you very much.