
Jul 27, 2019, 8:03:05 AM

to lavaan

Dear all,

I am trying to fit a mediation model using SEM. It includes one latent mediator (measured by three different variables).

My model specification is as follows:

model1 <- '
  Y ~ b * factor + c * X
  factor ~ a * X
  factor =~ f1 + f2 + f3
  indirect := a * b
  direct := c
  total := c + (a * b)
'

fit1 <- sem(model1, data = mydata)

Although the model can be plotted correctly and the direct, indirect and total paths are estimated and evaluated (p-values), I get the following warnings:

> fit1 <- sem(model1, data = mydata)
Warning messages:
1: In lav_data_full(data = data, group = group, cluster = cluster, :
  lavaan WARNING: some observed variances are (at least) a factor 1000
  times larger than others; use varTable(fit) to investigate
2: In lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, :
  lavaan WARNING:
    Could not compute standard errors! The information matrix could
    not be inverted. This may be a symptom that the model is not
    identified.
3: In lavaan::lavaan(model = model1, data = mydata, model.type = "sem", :
  lavaan WARNING: not all elements of the gradient are (near) zero;
    the optimizer may not have found a local solution;
    use lavInspect(fit, "optim.gradient") to investigate

How can I deal with those warnings?

If you need any further info, please feel free to ask.

Thanks a lot in advance!!

Maria

Jul 28, 2019, 10:04:43 AM

to lavaan

Maria,

First, follow the instructions in the warning and look at the second-to-last column of the table to see the variances of your observed variables.

> lavaan WARNING: some observed variances are (at least) a factor 1000 times larger than others; use varTable(fit) to investigate

Second, make new versions of the variables with dissimilar variances by multiplying or dividing them by powers of 10, so that the differences in variances across variables are minimized. See Kline (2016, pp. 81-82) for a detailed explanation (Principles and Practice of Structural Equation Modeling, 4th edition).
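A minimal sketch of that rescaling in R (the column names and the divisor of 1000 are assumptions for illustration; lavaan must be loaded, and `mydata` and `model1` are from the original post):

```r
library(lavaan)

# Inspect the observed variances first (hypothetical column names):
sapply(mydata[, c("Y", "X", "f1", "f2", "f3")], var, na.rm = TRUE)

# Suppose X is the variable on the much larger scale: divide it by a
# power of 10 so all variances fall within a similar order of magnitude.
mydata$X <- mydata$X / 1000

# Refit the model on the rescaled data; under ML estimation the fit is
# unchanged, only the estimates involving X are rescaled accordingly.
fit1 <- sem(model1, data = mydata)
```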

------------------------

Keith A. Markus

John Jay College of Criminal Justice, CUNY

http://jjcweb.jjay.cuny.edu/kmarkus

Frontiers of Test Validity Theory: Measurement, Causation and Meaning.

http://www.routledge.com/books/details/9781841692203/


Jul 29, 2019, 8:19:02 AM

to lavaan

Dear Keith A. Markus,

I actually tried that, thank you (it did not affect the model). Do you have any idea about the second and third warning?

Best,

Maria

Jul 29, 2019, 8:30:57 AM

to lav...@googlegroups.com

The 2nd and 3rd warnings may both be tied to identification. The most likely cause in this model is that f1, f2 and f3 do not conform to a common factor model. The common factor "factor" has 3 variables exclusively dependent on it (plus one variable with a shared dependency). To achieve identification, you will need at least 2 of those loadings (for f1, f2, f3) to be strong. Even though you have warning messages, parameter estimates are still available through summary(fit1). You will want to see that the three loadings are comparably large and not near 0. If you see something else, then maybe it is just a matter of starting values and you can sidestep the problem. But you could also look at the correlation matrix of f1, f2 and f3 (using the base cor() function). The three variables must be substantially correlated, or else the factor model will not work.
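The checks described here can be run directly; a sketch, assuming lavaan is loaded and `fit1` and `mydata` exist as in the original post:

```r
library(lavaan)

varTable(fit1)                       # variances of the observed variables (warning 1)
lavInspect(fit1, "optim.gradient")   # gradient at the solution (warning 3)

# Estimates are printed even when SEs could not be computed; check that
# the three loadings of `factor` are comparably large and not near 0.
summary(fit1, standardized = TRUE)

# Correlations among the candidate indicators; a one-factor model
# needs these to be substantial.
cor(mydata[, c("f1", "f2", "f3")], use = "pairwise.complete.obs")
```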


Jul 29, 2019, 11:37:25 AM

to lavaan

Dear Edward,

Thanks for the answer - that indeed helped me a lot!

Although the 3 variables f1:f3 are correlated (all r between .33 and .41), they load very differently on the common factor (between 0 and 1).

What I find odd is that if I run a PCA extracting one factor (which is supported by a scree plot), f1:f3 all load comparably high (.75 to .80) on the extracted factor. I thought I would be doing the same, or at least a similar, thing with the lavaan expression "factor =~ f1 + f2 + f3". Obviously that is not the case - thanks for the help with that!

I think I'll run a parallel mediation now using f1:f3 as mediators...
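For reference, a parallel-mediation specification with f1:f3 as three observed mediators might look like the sketch below (the labels and the bootstrap settings are illustrative assumptions, not from this thread):

```r
library(lavaan)

model2 <- '
  # a-paths (hypothetical labels a1-a3)
  f1 ~ a1 * X
  f2 ~ a2 * X
  f3 ~ a3 * X

  # b-paths and the direct effect
  Y ~ b1 * f1 + b2 * f2 + b3 * f3 + c * X

  # let the mediators covary
  f1 ~~ f2 + f3
  f2 ~~ f3

  # defined effects
  ind1  := a1 * b1
  ind2  := a2 * b2
  ind3  := a3 * b3
  total := c + ind1 + ind2 + ind3
'

# Bootstrapped SEs are commonly preferred for indirect effects:
fit2 <- sem(model2, data = mydata, se = "bootstrap", bootstrap = 1000)
summary(fit2, ci = TRUE)
```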

If I may ask a final follow-up question on that: I want to compare different models with each other. At the moment, though, the models are only just-identified, whereas I would want them to be over-identified so that I can compare their model fit. Do you think it is advisable to drop covariances between variables if they are not statistically significant (and the model fit improves as a result)?

Sorry for the question-storm - I'm a complete newbie to the path model topic....

Best and thank you all for your help (!!),

Maria


Jul 29, 2019, 1:29:16 PM

to lav...@googlegroups.com

Karl Joreskog wrote a very helpful chapter that describes the differences between composite-based methods and factor-based methods. If you can grasp the proportionality constraints that are at the heart of factor-based methods, you will be able to intuit factor model results. If your 3 indicators do not have proportional correlations with the other variables in your model, then the factor model will perform poorly, even if the 3 are highly correlated among themselves.

If you impose constraints just to make models over-identified, then all you are testing is the constraints. You could just as well use the t-values associated with the parameter estimates. But don't turn one model into multiple models and then compare predictors across models. Include all predictors in all models. Otherwise, the parameter estimates in the incomplete models will be biased due to the excluded variables.
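As an illustration of "all you are testing is the constraints": fixing the direct path to zero and comparing against the free model with a likelihood-ratio test makes the tested constraint explicit (a sketch only; `model1` and `mydata` are from the first post in this thread):

```r
library(lavaan)

fit_free <- sem(model1, data = mydata)

# Same model with the direct effect fixed to 0 (full mediation):
model1_c <- '
  Y ~ b * factor + 0 * X
  factor ~ a * X
  factor =~ f1 + f2 + f3
'
fit_constrained <- sem(model1_c, data = mydata)

# Chi-square difference test of the single constraint; its result
# mirrors the z-test of the c parameter in the free model.
lavTestLRT(fit_free, fit_constrained)
```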


Aug 7, 2019, 8:00:34 AM

to lavaan

Thanks for the reply, Edward! That gave me some insights... I'm still not understanding how I can achieve over-identification in a model if my aim is to compare different models. Are you basically saying I should not aim for that?


Aug 7, 2019, 1:05:48 PM

to lav...@googlegroups.com

Maria--

Models are parsimonious when they explain relations among many variables using few parameters. Simple regression models, for example, are not parsimonious. They simply translate the covariance matrix of the observed variables into different terms. Common factor models can be highly parsimonious if a single common factor accounts for all covariance among a large number of observed variables. Three observed variables is not "a large number" of indicators for one common factor -- it is barely enough. If you had more indicators, or if the model were otherwise more constrained, you would have positive degrees of freedom.
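The arithmetic behind that: with p indicators there are p(p+1)/2 unique variances and covariances, against which a plain one-factor model spends (p - 1) free loadings (one is fixed for scaling), p residual variances, and one factor variance. A base-R sketch of the count:

```r
# Degrees of freedom of a plain one-factor model with p indicators.
df_one_factor <- function(p) {
  moments <- p * (p + 1) / 2    # unique variances and covariances
  free    <- (p - 1) + p + 1    # free loadings + residuals + factor variance
  moments - free
}

df_one_factor(3)  # 0 -> just-identified, no testable fit
df_one_factor(4)  # 2 -> over-identified
df_one_factor(6)  # 9 -> over-identified, plenty to test
```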

--Ed Rigdon

To unsubscribe from this group and stop receiving emails from it, send an email to lavaan+un...@googlegroups.com.

To view this discussion on the web visit https://groups.google.com/d/msgid/lavaan/6fe7f227-8200-4fc2-92d2-86d901f399be%40googlegroups.com.
