# extracting fitted data


### Andy

Apr 24, 2020, 5:41:35 PM
to lavaan
Suppose I have a model:

```
X1 ~ a1p*a1 + b1p*b1 + v14p*v14
X2 ~ a2p*a2 + b2p*b2 + v21p*v21 + v24p*v24
X3 ~ a3p*a3 + b3p*b3 + v31p*v31 + v34p*v34
X4 ~ a4p*a4 + b4p*b4
```

I fit this and get parameter estimates for a1p, a2p, ..., b1p, b2p, ..., v21p, etc.

But now I want to reconstruct estimates of X1, ..., X4.

Other than writing my own code to do this (which I have done, painstakingly), does lavaan output this?
If so, how do I get at it?
I have read the tutorial and don't see anything relating to this, which seems crazy!!
I am obviously missing something basic.
cheers

### Terrence Jorgensen

Apr 26, 2020, 3:45:43 AM
to lavaan
> But now I want to reconstruct estimates of X1, ..., X4.

I don't understand what you are asking for.  Those are outcome variables, not parameters.  What information are you trying to find?

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

### Andy

Apr 26, 2020, 1:30:45 PM
to lavaan
Not sure how else to explain this. It's pretty standard stuff.

X1, X2, ... are response variables (like in a regression model).
So... after I fit the model and get the parameter estimates defined in my previous post, how do I obtain X1_hat, ..., X4_hat?

Where are these output values in the fitted object?

### Terrence Jorgensen

Apr 27, 2020, 3:55:22 AM
to lavaan
> after I fit the model and get the parameter estimates defined in my previous post, how do I obtain X1_hat, ..., X4_hat?

Since you only have observed variables in your model, you could fit the models with the lm() function to use the fitted() or predict() methods for those objects.  But since you have different predictors of each outcome, the estimated slopes will differ between lavaan and OLS to the degree that your predictors (of different outcomes) are correlated.  For instance, if you had the same predictors of each outcome (a saturated model), your estimates from lavaan would be the same as from OLS:

```r
mod <- 'x1 ~ sex + ageyr
        x2 ~ sex + ageyr'
summary(sem(mod, data = HolzingerSwineford1939, meanstructure = TRUE))
summary(lm(x1 ~ sex + ageyr, data = HolzingerSwineford1939))
summary(lm(x2 ~ sex + ageyr, data = HolzingerSwineford1939))
```

SEM is trying to reproduce the observed covariances among all the variables in your model, so the restrictions (e.g., that a1 does not predict X2) force the algorithm to find estimates that reproduce some observed covariances (e.g., between a1 and X2) only indirectly via other parameters (covariance of a1 with a2 and slope of X2 on a2).
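To make that concrete, here is a minimal sketch (my own illustration, not from the thread) in which the two outcomes have different predictor sets. Because sem() adds a residual covariance between the endogenous variables by default (auto.cov.y = TRUE), this is akin to seemingly unrelated regressions, and the ML slopes need not match equation-by-equation OLS:

```r
library(lavaan)

# Hypothetical example: different predictors per outcome, using the
# built-in HolzingerSwineford1939 data. The free residual covariance
# between x1 and x2 means information about x2 flows through x1's
# predictors, so the SEM slope can differ from the OLS slope.
mod2 <- 'x1 ~ sex + ageyr
         x2 ~ sex'
fit2 <- sem(mod2, data = HolzingerSwineford1939, meanstructure = TRUE)

coef(fit2)["x2~sex"]                                      # SEM (ML) estimate
coef(lm(x2 ~ sex, data = HolzingerSwineford1939))["sex"]  # OLS estimate
```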

> Where are these output values in the fitted object?

They aren't.  lavaan is an SEM program, so the criterion for estimation is to reproduce summary statistics (means and covariance matrices) rather than casewise observations.  The model-implied predictions are still returned by the fitted() method, just as they are for (generalized) linear (mixed) model objects in other packages, but the fitted values are the summary statistics implied by the model's parameter estimates.  Pretty standard stuff you can find in SEM textbooks :-)
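For example (a minimal sketch reusing the saturated two-outcome model from above), fitted() on a lavaan object returns the model-implied moments rather than casewise predictions:

```r
library(lavaan)

mod <- 'x1 ~ sex + ageyr
        x2 ~ sex + ageyr'
fit <- sem(mod, data = HolzingerSwineford1939, meanstructure = TRUE)

# A list with the model-implied covariance matrix ($cov) and
# mean vector ($mean) -- summary statistics, not one row per case.
fitted(fit)
```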

As the ?lavPredict help page states: "this function can not be used to ‘predict’ values of dependent variables, given the values of independent values (in the regression sense)."  However, Jarrett Byrnes has put together some convenience functions for lavaan objects that treat the many linear models within an SEM as individual linear models are treated in other modeling frameworks:  https://groups.google.com/d/msg/lavaan/sqehkG13lQY/mqW3Q9J9AQAJ
Eventually, lavPredict() will include some capability to return "y_hat" for observed variables, but there are many details to work out (e.g., what to do when there are latent predictors, what to return for observed ordinal outcomes (latent-response probabilities, or y-hat of latent responses), what to do when conditional.x=TRUE, etc.), and I do not think this is currently a high programming priority.  But your model seems very straightforward, so I'm sure Jarrett's code will work in your case.
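In the meantime, for a model with only observed variables, casewise predictions can be assembled by hand from the estimated intercepts and slopes. A hedged sketch (my own illustration on the HolzingerSwineford1939 data, not official lavaan functionality; for a single saturated equation like this one, the result matches lm()):

```r
library(lavaan)

mod <- 'x1 ~ sex + ageyr'
fit <- sem(mod, data = HolzingerSwineford1939, meanstructure = TRUE)

# Pull the slopes (op == "~") and intercept (op == "~1") for x1
est       <- parameterEstimates(fit)
slopes    <- est[est$lhs == "x1" & est$op == "~", c("rhs", "est")]
intercept <- est[est$lhs == "x1" & est$op == "~1", "est"]

# x1_hat = intercept + X %*% beta, using the raw predictor columns
X <- as.matrix(HolzingerSwineford1939[, slopes$rhs])
x1_hat <- intercept + drop(X %*% slopes$est)

head(x1_hat)
# compare: head(fitted(lm(x1 ~ sex + ageyr, data = HolzingerSwineford1939)))
```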