Hello,
thanks a lot for this great package! I am kindly asking for help concerning two questions.
I was testing for measurement/structural invariance and aim at comparison of latent means between 4 groups and a test for moderation. My model is specified like this:
#SEMfinal
Model1 <- '
#define latent variables
Latent1 =~ x1+x2+x3
Latent2 =~ x4+x5+x6
Latent3 =~ x7+x8
Latent4 =~ x9+x10
#define structural relations
Latent1 ~ Latent2
Latent3 ~ Latent1 + Latent2
Latent4 ~ Latent1 + Latent2 + Latent3
x2 ~~ x3
'
Because most indicator variables are ordinal (4-point Likert scales), I proceed like this:
fitModel1 <- sem(Model1, data = Data1, ordered = c("x1", "x2", "x3", "x4", "x5", "x6", "x7"))
These are the results for the measurement/structural invariance procedure:
Number of observations per group (used / total):
Group 3: 2353 / 2605
Group 2: 2422 / 2721
Group 4: 3248 / 3524
Group 1: 1194 / 1350
                  chisq      df    cfi    rmsea   group.equal=
1. Configural    764.749    112   0.992   0.050
2. Weak          919.505    130   0.990   0.051   ("loadings")
3. Strong       1055.989    172   0.989   0.047   ("loadings","intercepts")
4. Mean         1192.819    184   0.988   0.049   ("loadings","intercepts","means")
5. Structural   1537.189    202   0.984   0.054   ("loadings","intercepts","means","regressions")
To evaluate invariance I would primarily look at CFI values, because of the DWLS estimator and the rather large sample size, and I would use the ΔCFI cutoff of .002 from Meade et al. (2008). As I said before, I aim at comparing latent means and testing for moderation. This leads me to the following questions:
1. Is it defensible to report latent mean differences, given that there is no decrease in model fit (CFI in particular) between model 3 (Strong) and model 4 (Mean)?
2. Is it defensible to assume a moderation effect, given the decrease in CFI between model 4 (Mean) and model 5 (Structural) relative to Meade et al.'s (2008) CFI cutoff value?
I look forward to your answers and will be pleased to provide you with more detailed information about the model.
Thank you!
Reference:
Meade, A.W., Johnson, E.C., & Braddy, P.W. (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. Journal of Applied Psychology, 93, 568-592.
Model1 <- '
#define latent variables
Latent1 =~ x1+x2+x3
Latent2 =~ x4+x5+x6
Latent3 =~ x7+x8
Latent4 =~ x9+x10
#define structural relations
Latent1 ~ Latent2
Latent3 ~ Latent1 + Latent2
Latent4 ~ Latent1 + Latent2 + Latent3
x2 ~~ x3
'
I know this isn't your question, but... Your Latent1 factor is already just-identified with only 3 indicators. Adding a residual correlation between two of the indicators might make it empirically under-identified. I'm not sure whether embedding that measurement model within a larger model identifies it, but it might -- that is the case for the 2-indicator factors (whose parameters would be under-identified on their own, but are identified in a larger model with simple structure).
Because most indicator variables are ordinal (4-point Likert scales), I proceed like this:
In order to establish strong invariance, all location parameters in the measurement model must be constant across groups. The intercepts apply only to your continuous indicators (x8 - x10). You also need to test the threshold constraints for x1 - x7. You can do this at the same time as the intercepts:

2. Weak       ("loadings")
3. Strong     ("loadings","intercepts","thresholds")

Or you can do so in separate steps (the order doesn't matter):

2. Weak       ("loadings")
3. Strong(1)  ("loadings","intercepts")
4. Strong(2)  ("loadings","intercepts","thresholds")
5. Structural ("loadings","intercepts","means","regressions")
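As a sketch, the threshold step doesn't have to be written by hand -- semTools' measEq.syntax() can generate the constrained lavaan syntax. The grouping variable name "grp" below is a placeholder, not something from your posted code:

```r
library(semTools)  # provides measEq.syntax(); loads lavaan

# Strong invariance including thresholds for the ordinal indicators;
# "grp" is a hypothetical name for your actual grouping variable
fit.strong <- measEq.syntax(configural.model = Model1, data = Data1,
                            ordered = c("x1","x2","x3","x4","x5","x6","x7"),
                            group = "grp",
                            group.equal = c("loadings","intercepts","thresholds"),
                            return.fit = TRUE)
```

With return.fit = TRUE the generated syntax is fitted immediately and a lavaan object is returned.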
In order for structural regressions to be comparable across groups, the latent variables need to be on the same scale, so you need to first test those restrictions:

5. Structural scale     ("loadings","intercepts","thresholds","means","lv.variances")
6. Structural relations ("loadings","intercepts","thresholds","means","lv.variances","regressions")

If you can't constrain the latent (residual) variances to equality, you can still compare structural regressions by using phantom variables. Essentially, for each latent (residual) variance that can't be constrained, you define a second-order factor with its variance fixed to 1, fix the residual variance of the first-order factor to zero, and freely estimate the "loading" (beta path), which will be the square root of your original latent (residual) variance. Then you estimate regressions among the phantom variables, which are on the same (standardized) scale. If you need to implement this, here is a paper that employs that rather clever trick:
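For reference, the phantom-variable trick described above might look roughly like this in lavaan syntax (a sketch for two of the factors; the names Ph1 and Ph2 are hypothetical):

```r
phantomModel <- '
# original measurement models
Latent1 =~ x1 + x2 + x3
Latent2 =~ x4 + x5 + x6

# phantom (second-order) factors, one per latent variable
Ph1 =~ NA*Latent1      # free "loading" = sqrt of the original latent variance
Ph2 =~ NA*Latent2
Ph1 ~~ 1*Ph1           # phantom variance fixed to 1
Ph2 ~~ 1*Ph2
Latent1 ~~ 0*Latent1   # first-order (residual) variances fixed to zero
Latent2 ~~ 0*Latent2

# regression among the phantom variables, which are on the same scale
Ph1 ~ Ph2
'
```

The same idea extends to endogenous factors, where the freed loading becomes the square root of the latent residual variance.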
To evaluate invariance I would primarily look at CFI values, because of the DWLS estimator and the rather large sample size, and I would use the ΔCFI cutoff of .002 from Meade et al. (2008).
Careful, that study was based on continuous data. The CFI is calculated from chi-squared, so I'm not sure why you think DWLS invalidates one but not the other. Certainly the large sample sizes will make the chi-squared sensitive to trivial differences, but read this recent paper about using change in CFI with ordinal indicators:
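For instance, ΔCFI between two nested invariance models can be computed directly from the fitted lavaan objects (fit.strong and fit.mean below are hypothetical names for your models 3 and 4):

```r
library(semTools)  # for compareFit(); loads lavaan

# CFI difference between the strong-invariance and equal-means models
fitMeasures(fit.strong, "cfi") - fitMeasures(fit.mean, "cfi")

# or compare a sequence of nested fits at once, including LRTs and ΔCFI
compareFit(fit.strong, fit.mean)
```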
1. Is it defensible to report latent mean differences, given that there is no decrease in model fit (CFI in particular) between model 3 (Strong) and model 4 (Mean)?
Not really; that's the point of testing the constraints. The null hypothesis is that those parameters do not differ across groups, and you failed to reject that null hypothesis. But remember, your comparison was not valid because you failed to constrain thresholds, so differences in thresholds may have absorbed the misspecification of equal means if the null is really false. So there is still hope for rejection once you update your method :-)
2. Is it defensible to assume a moderation effect, given the decrease in CFI between model 4 (Mean) and model 5 (Structural) relative to Meade et al.'s (2008) CFI cutoff value?
See my note above about putting latent variables on the same scale across groups before making inferences about whether regression paths actually differ across groups. But once you update your method, yes: rejecting the null hypothesis of equal regression slopes means that the magnitude of at least one slope depends on group (i.e., interaction/moderation).

Terry
Sorry, I am digging up an old thread here.
You said in the previous thread:

"In order for structural regressions to be comparable across groups, the latent variables need to be on the same scale, so you need to first test those restrictions:
5. Structural scale     ("loadings","intercepts","thresholds","means","lv.variances")
6. Structural relations ("loadings","intercepts","thresholds","means","lv.variances","regressions")"
I was wondering if group.equal = "means" is necessary to compare structural relations if the objective is not to compare latent mean differences but just regression coefficients?
I also read somewhere that metric invariance is sufficient for examining latent regression coefficients.
Literature on this seems a bit vague. Could you clarify, please? If possible, could you direct me to some references?
If the objectives are both examining latent mean differences and latent regression paths, does this workflow sound correct?

For individual measurement models:
1. loadings
2. loadings + intercepts
3. loadings + intercepts + residuals (not necessary)

If at least the first two hold, then link the measurement models using sem() according to theory:
4. compare latent means
5. loadings + intercepts + (potentially residuals) + lv.variances
   (is lv.variances necessary as a precondition for the next step?)
6. loadings + intercepts + (potentially residuals) + lv.variances(?) + regressions + lv.covariances
2. loadings + intercepts (scalar)
...
8. loadings + intercepts + regressions
More to step 8: I noticed that (for the non-reference groups) the unstandardised latent intercepts of exogenous variables shown in the summary output are the same as lavInspect(fit.model, "mean.lv"). However, for endogenous latent variables as well as mediating latent variables, the intercepts shown differ between the summary output and "mean.lv". Is it because one is the unadjusted mean ("mean.lv" approach, not controlled for predictors) and the other is the adjusted mean (summary output, controlled for predictors)?
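If it helps to see the two quantities side by side, they can both be extracted with lavInspect() (a sketch; fit.model is your fitted multigroup object):

```r
# model-implied latent means: intercepts plus the contribution of predictors
lavInspect(fit.model, "mean.lv")

# latent intercepts (the alpha parameters, as shown in the summary() output);
# for a multigroup fit, "est" returns one list of matrices per group
lavInspect(fit.model, "est")[[1]]$alpha   # first group
```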
For categorical indicators -- objective: compare latent means for one measurement model (one latent factor with both binary and polytomous indicators):
Using the Wu and Estabrook (2016) approach and the measEq.syntax() function to generate correct lavaan model syntax (parameterization = "delta"):

1. configural model: (ID.fac = "std.lv", ID.cat = "Wu", group.equal = "configural")
2. metric model: (ID.fac = "std.lv", ID.cat = "Wu", group.equal = c("thresholds","loadings")) -- this model constrains the intercepts to zero for the reference group and frees them in the other groups.
3. scalar model: (ID.fac = "std.lv", ID.cat = "Wu", group.equal = c("thresholds","loadings","intercepts")) -- this model essentially constrains the intercepts to zero for all groups.

If the scalar model holds, it is possible to compare latent means across groups.
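Put together, the three models might be fit and compared like this (a sketch: the model syntax, data name, indicator names, and grouping variable "grp" are all placeholders):

```r
library(semTools)  # measEq.syntax(); loads lavaan for lavTestLRT()

mod <- ' F1 =~ y1 + y2 + y3 + y4 '   # one factor, categorical indicators
ordNames <- c("y1","y2","y3","y4")

fit.config <- measEq.syntax(mod, data = Data1, ordered = ordNames, group = "grp",
                            ID.fac = "std.lv", ID.cat = "Wu",
                            parameterization = "delta", return.fit = TRUE)
fit.metric <- measEq.syntax(mod, data = Data1, ordered = ordNames, group = "grp",
                            ID.fac = "std.lv", ID.cat = "Wu",
                            parameterization = "delta",
                            group.equal = c("thresholds","loadings"),
                            return.fit = TRUE)
fit.scalar <- measEq.syntax(mod, data = Data1, ordered = ordNames, group = "grp",
                            ID.fac = "std.lv", ID.cat = "Wu",
                            parameterization = "delta",
                            group.equal = c("thresholds","loadings","intercepts"),
                            return.fit = TRUE)

# likelihood-ratio tests of the nested sequence
lavTestLRT(fit.config, fit.metric, fit.scalar)
```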
8. loadings + intercepts + regressions
As long as you mean indicator intercepts in Step 2 and intercepts of endogenous common factors in Step 8, then that looks fine to me.
Regarding "intercepts of endogenous common factors" -- I was wondering how this could be specified in lavaan?
...
I tried group.equal = c("loadings","intercepts","regressions","means").
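If group.equal = "means" doesn't give the exact constraint you want, latent intercepts can also be labeled and constrained directly in the model syntax (a sketch; the label name i4 and the 4-group setup are assumptions):

```r
# constrain the intercept of an endogenous factor to equality across 4 groups
# by giving the intercept parameter the same label in every group
Model2 <- '
Latent3 =~ x7 + x8
Latent4 =~ x9 + x10
Latent4 ~ Latent3
Latent4 ~ c(i4, i4, i4, i4) * 1   # equal latent intercepts across groups
'
```

Using the same label in each group imposes the equality constraint; different labels (or NA) would free the intercepts per group.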