
Jun 28, 2019, 7:57:12 AM

to lavaan

Hello,

I want to estimate a bi-factor model including a method factor, but unfortunately only have the correlation matrix available.

I get that I need to somehow put restrictions on the (residual?) variances in order to obtain unbiased SEs and fit indices, but I can't seem to find information on how exactly to do this.

My model goes like this:

model <- '
a =~ 1*x1 + 1*x2 + 1*x3
b =~ 1*x4 + 1*x5 + 1*x6
c =~ 1*x7 + 1*x8 + 1*x9
method =~ 1*x1 + 1*x4 + 1*x7
g =~ 1*x1 + 1*x2 + 1*x3 + 1*x4 + 1*x5 + 1*x6 + 1*x7 + 1*x8 + 1*x9
x1 ~~ e1*x1
x2 ~~ e1*x2
x3 ~~ e1*x3
x4 ~~ e2*x4
x5 ~~ e2*x5
x6 ~~ e2*x6
x7 ~~ e3*x7
x8 ~~ e3*x8
x9 ~~ e3*x9'

Maybe someone has a hint or can point me towards a paper on how to correctly apply the restrictions here?

I found some information in a different thread here (https://groups.google.com/forum/#!topic/lavaan/sJuzT7obIpg), but I am not sure as to how the approach used there holds in my situation.

Thank you for your help!

Jun 29, 2019, 2:01:31 PM

to lavaan

I get that I need to somehow put restrictions on the (residual?) variances in order to obtain unbiased SEs and fit indices, but I can't seem to find information on how exactly to do this.

The diagonal of a correlation matrix is fixed to 1 by definition, so the bias comes from having too many df: residual variances are freely estimated instead of being fixed such that the model-implied total variance of each indicator is 1. With many correlated factors, the constraints can be quite tedious to program, but bifactor models with orthogonal factors make it quite easy to calculate the total explained variance for each indicator as the sum of its squared factor loadings (assuming all factor variances are fixed to 1).
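As a side note, a sketch of how the correlation matrix itself would be passed to lavaan (the objects `R` and `n` here are placeholders for your own correlation matrix and sample size, not anything from the thread):

```r
library(lavaan)

## R: a 9 x 9 correlation matrix with row/column names x1-x9 (placeholder)
## n: the sample size the correlations are based on (placeholder)
fit <- cfa(model, sample.cov = R, sample.nobs = n, orthogonal = TRUE)
summary(fit, fit.measures = TRUE)
```

Because `sample.cov` is treated as a covariance matrix, the total-variance constraints discussed below are what make fitting a correlation matrix legitimate.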

model <- '
a =~ 1*x1 + 1*x2 + 1*x3
b =~ 1*x4 + 1*x5 + 1*x6
c =~ 1*x7 + 1*x8 + 1*x9
method =~ 1*x1 + 1*x4 + 1*x7
g =~ 1*x1 + 1*x2 + 1*x3 + 1*x4 + 1*x5 + 1*x6 + 1*x7 + 1*x8 + 1*x9
x1 ~~ e1*x1
x2 ~~ e1*x2
x3 ~~ e1*x3
x4 ~~ e2*x4
x5 ~~ e2*x5
x6 ~~ e2*x6
x7 ~~ e3*x7
x8 ~~ e3*x8
x9 ~~ e3*x9'

Well, the fact that you are imposing equality constraints that imply parallel indicators is problematic for the general case of constraining total variances to 1, only because you have a method factor in addition to the general factor and facets. But I'll try to show you the basic idea, until it breaks down.

Because your loadings are all fixed to 1, the explained variance of each indicator is simply the sum of the variances of the factors onto which it loads. You just need to label those variances in your syntax too:

a ~~ var.a*a
b ~~ var.b*b
c ~~ var.c*c
g ~~ var.g*g
method ~~ var.m*method

Then add some user-defined parameters using the ":=" operator. Again, assuming orthogonal factors:

## calculate explained variances
explained1 := var.a + var.m + var.g
explained23 := var.a + var.g
explained4 := var.b + var.m + var.g
explained56 := var.b + var.g
...

## constrain residuals to yield total variance == 1
e1 == 1 - explained1
e1 == 1 - explained23 # PROBLEM
e2 == 1 - explained4
e2 == 1 - explained56 # PROBLEM

Because your factor loadings are all equal, the explained variance of x1 differs from that of {x2 and x3} only because x1 additionally includes var.m (it loads onto "method"). This feature of your model battles with the fact that the residual variance of x1 ("e1") is also constrained to equal that of {x2 and x3}. So although you *could* generally constrain a parameter in multiple ways, the equality of e1 across x1 and {x2 and x3} implies that the method variance "var.m" MUST == 0. Perhaps you can merely impose tau-equivalence (equal loadings, but not equal residual variances) to get around this.
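To make the tau-equivalence workaround concrete, here is a sketch of the full syntax with the loadings still fixed to 1 but a separate residual label per indicator (the labels e1-e9 and the exact constraint lines are illustrative, not taken from the thread):

```r
model.tau <- '
a =~ 1*x1 + 1*x2 + 1*x3
b =~ 1*x4 + 1*x5 + 1*x6
c =~ 1*x7 + 1*x8 + 1*x9
method =~ 1*x1 + 1*x4 + 1*x7
g =~ 1*x1 + 1*x2 + 1*x3 + 1*x4 + 1*x5 + 1*x6 + 1*x7 + 1*x8 + 1*x9

## label factor variances (factors assumed orthogonal)
a ~~ var.a*a
b ~~ var.b*b
c ~~ var.c*c
g ~~ var.g*g
method ~~ var.m*method

## label residual variances, one per indicator (tau-equivalence:
## equal loadings, but residuals free to differ)
x1 ~~ e1*x1
x2 ~~ e2*x2
x3 ~~ e3*x3
x4 ~~ e4*x4
x5 ~~ e5*x5
x6 ~~ e6*x6
x7 ~~ e7*x7
x8 ~~ e8*x8
x9 ~~ e9*x9

## constrain each model-implied total variance to 1
e1 == 1 - (var.a + var.m + var.g)
e2 == 1 - (var.a + var.g)
e3 == 1 - (var.a + var.g)
e4 == 1 - (var.b + var.m + var.g)
e5 == 1 - (var.b + var.g)
e6 == 1 - (var.b + var.g)
e7 == 1 - (var.c + var.m + var.g)
e8 == 1 - (var.c + var.g)
e9 == 1 - (var.c + var.g)
'
```

Indicators that load on "method" (x1, x4, x7) now get smaller residuals than their facet-mates, which is exactly what the constraints above could not accommodate under parallel indicators.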

Maybe someone has a hint or can point me towards a paper on how to correctly apply the restrictions here?

Discussed here, but very out-of-date about software capabilities to impose the correct constraints:

Terrence D. Jorgensen

Assistant Professor, Methods and Statistics

Research Institute for Child Development and Education, the University of Amsterdam

Jun 29, 2019, 2:50:59 PM

to lavaan

Again, thank you very much!

Thinking about it, it makes perfect sense that the residual variances of the indicators also linked to the method factor should differ from those loading only on g and a facet, if we need to apply strict constraints because we analyze correlations. Of course, if the method variance is very small, model fit might still be adequate in a case where covariances are analyzed and thus no constraints are needed.


Thanks again, I'll try to put your advice into practice!

Jun 29, 2019, 3:53:09 PM

to lavaan

Standard errors are actually quite a bit smaller in the constrained model, and model fit is better. This is not what I expected, but I suppose it does not mean that I am doing something wrong?

I basically used the approach you described, but allowed different residual variances for the indicators linked to the method factor and those that are not (within each facet).
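One way to sanity-check that the constraints are doing their job (a sketch; `fit` stands for the fitted lavaan object from the constrained model):

```r
## the model-implied covariance matrix of a correctly constrained
## model should have (near) 1s on its diagonal
implied <- fitted(fit)$cov
round(diag(implied), 3)
```

If any diagonal entry differs noticeably from 1, some indicator's total variance is not being constrained as intended.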

So by applying these constraints, I evade the problems described by Cudeck (1989), right? Sorry for asking again, just want to really make sure I'm doing things the right way :)

Jun 30, 2019, 6:20:43 AM

to lavaan

right?

Sounds right.
