lavaan WARNING:
The variance-covariance matrix of the estimated parameters (vcov)
does not appear to be positive definite! The smallest eigenvalue
(= 3.520778e-15) is close to zero. This may be a symptom that the
model is not identified.
Daniel J. Laxman, PhD Postdoctoral Fellow Department of Human Development and Family Studies Utah State University Preferred email address: Dan.J....@gmail.com Office: FCHD West (FCHDW) 001 Mailing address: 2705 Old Main Hill Logan, UT 84322-2705
There are two groups and I'm interested in whether the effect of each predictor is equivalent across groups. I've added constraints forcing paths to be equivalent across groups, testing one or more paths at a time.
When I specify standardized=TRUE, I receive the following warning:

lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= 3.520778e-15) is close to zero. This may be a symptom that the
    model is not identified.
I don't believe the model is actually unidentified; perhaps the warning appears because some predictors are only weakly associated with one another. Is this a correct interpretation?
However, constraining covariances to be equivalent across groups might increase power to test my hypotheses...
but I'm not sure how to test whether the 10 × 11 / 2 = 55 (co)variances are equivalent across groups in a way that does not run a high risk of Type I errors.
exoNames <- lavNames(fit_9, "ov.x") # get predictor names
## specify all (co)variances
covstruc <- outer(exoNames, exoNames, function(x, y) paste(x, "~~", y))
satMod <- c(covstruc[lower.tri(covstruc, diag = TRUE)],  # omit redundant
            paste(exoNames, "~ 1"))                      # mean structure
## constrained
fit0 <- lavaan(satMod, data = I.Data_clean.cc.rev, group = "Group",
               group.equal = c("residual.covariances", "residuals"))
## unconstrained
fit1 <- lavaan(satMod, data = I.Data_clean.cc.rev, group = "Group")
## compare
anova(fit0, fit1)
All of this is to say that one way of addressing the problem is simply not to estimate the covariances. Another may be to constrain covariances to be equal across groups where appropriate (i.e., where model fit is not significantly worse), because the warning does not appear when the model is fit to the data as a single group.
## no group moderation (each group has their own intercept)
mod0 <- lm(ImpPhen ~ as.factor(Group) + (MPC + MBC + ... + age), data=I.Data_clean.cc.rev)
## all effects moderated (each group also has their own slope)
mod1 <- lm(ImpPhen ~ as.factor(Group) + as.factor(Group):(MPC + MBC + ... + age), data=I.Data_clean.cc.rev)
anova(mod0, mod1)
There are two groups and I'm interested in whether the effect of each predictor is equivalent across groups. I've added constraints forcing paths to be equivalent across groups, testing one or more paths at a time.
Careful. When slopes differ across groups, that is equivalent to the predictor interacting with group (i.e., a product term in a single-group regression). So when you test equivalence of 1 slope, your results might differ depending on whether other slopes are constrained or not. This is analogous to the differences between tests using Types I, II, or III SS.
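To make the order-dependence concrete, here is a hypothetical illustration using the built-in mtcars data (am standing in for the grouping variable, wt and hp for two predictors; these names are not from the thread). The test for whether the wt slope differs across groups is run twice: once with the hp slope constrained equal across groups (no interaction), and once with it left free.

```r
m_eq   <- lm(mpg ~ factor(am) + wt + hp, data = mtcars)  # both slopes equal across groups
m_wt   <- update(m_eq, . ~ . + factor(am):wt)            # wt slope free
m_hp   <- update(m_eq, . ~ . + factor(am):hp)            # hp slope free
m_both <- update(m_wt, . ~ . + factor(am):hp)            # both slopes free

p_wt_given_hp_equal <- anova(m_eq, m_wt)$`Pr(>F)`[2]   # test wt slope, hp constrained
p_wt_given_hp_free  <- anova(m_hp, m_both)$`Pr(>F)`[2] # test wt slope, hp free
c(p_wt_given_hp_equal, p_wt_given_hp_free)             # generally not identical
```

The two p-values test "the same" slope but generally differ, which is exactly the Type I/II/III SS issue described above.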
Is there a procedure for testing equivalence of slopes that avoids this problem?
Your model is identified. It is just multiple regression, there is nothing funny going on. But you can make your syntax a lot simpler by leaving out the exogenous covariances and letting fixed.x=TRUE simply grab the observed sample statistics for you. That frees you from the assumption that they are all normal (one of them is a dummy code, and MLR was only meant to resolve nonnormality of continuous variables).
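A hedged sketch of what the simplified syntax might look like (only a few predictor names from the thread are shown; the full predictor list was elided): with fixed.x = TRUE, the exogenous (co)variances and means are taken from the sample statistics, so the model syntax reduces to the regression itself, with labels to impose across-group equality constraints.

```r
# c(b1, b1) equates the MPC slope across the two groups; other slopes are free
mod <- ' ImpPhen ~ c(b1, b1)*MPC + MBC + age '
# fit <- sem(mod, data = I.Data_clean.cc.rev, group = "Group",
#            estimator = "MLR", fixed.x = TRUE)
```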
I'll do that. Is this model still considered a path analysis, or would it be "multiple group multiple regression analysis" (or another term)? I think it would still technically be a path analysis, but I'm wondering if there is a more specific term.
If anything, I would expect severe multicollinearity to be a culprit. But the interpretation is that there is linear dependency among the parameter estimates, so one of them is redundant with another (or a set of others). If you run your regressions with lm(), you could use the car package's vif() function to investigate the multicollinearity possibility.
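For reference, the quantity car::vif() reports can be sketched in base R as VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. Shown here on built-in mtcars columns; substitute your own predictor matrix.

```r
X <- mtcars[, c("wt", "hp", "disp")]  # placeholder predictors
vifs <- sapply(names(X), function(v) {
  # regress predictor v on all other predictors, take R-squared
  r2 <- summary(lm(reformulate(setdiff(names(X), v), response = v), data = X))$r.squared
  1 / (1 - r2)
})
round(vifs, 2)  # values well above ~10 are the usual red flag
```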
I checked for multicollinearity as part of my preliminary analyses, but found no concerns.
Thank you for your response.
Dan
Is there a procedure for testing equivalence of slopes that avoids this problem?
I'll do that. Is this model still considered a path analysis, or would it be "multiple group multiple regression analysis" (or another term)?
Thanks. I've set the model up with fixed.x=TRUE, two groups (multiple-group analysis), a single outcome, and estimator = "MLR". Before adding constraints, RMSEA is 0.000 (I believe because of perfect fit, since it's essentially a multiple regression model). Once I add constraints (e.g., the effect of a predictor constrained to equality across groups), RMSEA is no longer necessarily 0.000 and df increases with each constraint. Does it make sense to examine RMSEA for this model? I would think that it would, but the 90% CI for RMSEA indicates potentially poor fit (e.g., [0.000, 0.134]) when I add a constraint, even though the chi-squared difference test is not significant.
The results of my analyses are pretty consistent in that adding across-group constraints to the model results in: (1) a non-significant chi-squared difference test, (2) an estimate for RMSEA that is 0 or very small (e.g., 0.029), and (3) a 90% CI for RMSEA that includes some larger values at the high end (e.g., [0.000, 0.155]). I'm not sure how to interpret (3). It seems odd that RMSEA would indicate potentially poor model fit when the chi-squared difference test does not. Perhaps the 90% CI for RMSEA is not as useful for an analysis like this one because the estimate is so imprecise?
Thanks,
Dan
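The wide interval in (3) can be reproduced numerically. A hedged sketch: the 90% CI for RMSEA comes from inverting the noncentral chi-square distribution, so with few df even a nonsignificant chi-square yields a wide upper bound. This is the single-group formula (lavaan's multiple-group RMSEA differs by a sqrt(G) factor), so treat it as illustrative only.

```r
rmsea_ci <- function(chisq, df, N, level = 0.90) {
  alpha <- (1 - level) / 2
  # find the noncentrality parameter ncp where pchisq(chisq, df, ncp) = p
  f <- function(ncp, p) pchisq(chisq, df, ncp = ncp) - p
  root <- function(p) if (f(0, p) < 0) 0 else uniroot(f, c(0, chisq + 100), p = p)$root
  ncp_lo <- root(1 - alpha)  # lower confidence limit on the ncp
  ncp_hi <- root(alpha)      # upper confidence limit on the ncp
  c(est   = sqrt(max(0, (chisq - df) / (df * (N - 1)))),
    lower = sqrt(ncp_lo / (df * (N - 1))),
    upper = sqrt(ncp_hi / (df * (N - 1))))
}
## e.g., a clearly nonsignificant chi-square with few df still yields
## a wide interval:
res <- rmsea_ci(chisq = 3, df = 2, N = 100)
round(res, 3)
```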
Does it make sense to examine RMSEA for this model?
resid(fit, type = "cor")  # inspect residual correlations to locate sources of misfit