Configural factor analysis with different observed variables


Anniek Borghuis

Jan 31, 2019, 10:48:37 AM
to lavaan
Dear Lavaan group,

I am working on a confirmatory factor analysis (only to the configural level) in a dataset with two versions of an empathy scale: 

-The first scale is the original empathy scale, measured on a 5-point Likert scale
-The second scale is a simplified version of the original empathy scale (suited for individuals with a mild intellectual disability) and is measured on a 3-point Likert scale

The number of items and the supposed factor structure of both scales are the same; the only difference is the wording of the items. The goal is therefore to establish configural invariance for the scales. The problem is that R won't run a configural model because the observed variables differ between the two groups.

Can anyone help me solve this issue? 

Thanks,
Anniek



Emmanuel W

Jan 31, 2019, 11:55:06 AM
to lavaan
Dear Anniek,

I'm not a master of lavaan.

However, if the two scales have the same number of items, the number of variables should be the same too. Maybe you could share your R code with us...

Emmanuel

Anniek Borghuis

Jan 31, 2019, 12:35:46 PM
to lav...@googlegroups.com
Dear Emmanuel,

Thanks for your response. The number of variables is indeed the same. I will include the dataset; that should make it clearer.
And (part of) my code is:

Model1 <- '
group: 1
COG_EMP =~ BES3 + BES6 + BES9 + BES10 + BES12 + BES14 + BES16 + BES19 + BES20
AFF_EMP =~ BES1 + BES2 + BES4 + BES5 + BES7 + BES8 + BES11 + BES13 + BES15 + BES17 + BES18

group: 2
COG_EMP =~ TvA3 + TvA4 + TvA5 + TvA6 + TvA8 + TvA11 + TvA15 + TvA16 + TvA18
AFF_EMP =~ TvA1 + TvA2 + TvA7 + TvA9 + TvA10 + TvA12 + TvA13 + TvA14 + TvA17 + TvA19 + TvA20'

bes.res.conf <- cfa(Model1, data=Empathie_nieuw, 
                    estimator = "MLR",
                    group = "Vragenlijst")
summary(bes.res.conf)

The problem is, as you can see, that half of the dataset consists of the data on the original scale (BES) and the other
half is the data of the new scale (TvA). 

I hope this explains my problem

Anniek

On Thu 31 Jan 2019 at 17:55, Emmanuel W <ewie...@club.fr> wrote:
<Empathie_nieuw.csv>

Christopher Desjardins

Jan 31, 2019, 12:44:10 PM
to lav...@googlegroups.com
Do you want to test whether the loadings are the same? If so, naming the loadings should work.

For example,
group:1
COG_EMP =~ lam1*BES3 + lam2*BES4 ...

group:2
COG_EMP =~ lam1*TvA3 + lam2*TvA4 ...

Emmanuel W

Jan 31, 2019, 12:59:09 PM
to lavaan
I wonder if you can do what you want when the variables don't have the same names in the two groups...

In addition, you have missing data (and a "1" for TvA10 of Proefpersoon "96" in the file you sent). But it's a little beyond my competence. Let's see if anyone can answer better.

Emmanuel

car...@web.de

Jan 31, 2019, 2:04:04 PM
to lav...@googlegroups.com
@Emmanuel: I can't imagine that either (different variable names); with Amos this would certainly not work. Furthermore, I wonder whether ML(R) makes sense for a three-point scale.

Christopher David Desjardins

Jan 31, 2019, 3:06:05 PM
to lavaan

Am I missing something or did my message get missed? Can’t you just do this:

Empathie_nieuw <- read.csv("~/Downloads/Empathie_nieuw.csv", sep = ";", na.strings = c(""))
library(lavaan)
Model1 <- '
group: 1
COG_EMP =~ lam1*BES3 + lam2*BES6 + lam3*BES9 + lam4*BES10 + lam5*BES12 + lam6*BES14 + lam7*BES16 + lam8*BES19 + lam9*BES20
AFF_EMP =~ lam10*BES1 + lam11*BES2 + lam12*BES4 + lam13*BES5 + lam14*BES7 + lam15*BES8 + lam16*BES11 + lam17*BES13 + lam18*BES15 + lam19*BES17 + lam20*BES18

group: 2
COG_EMP =~ lam1*TvA3 + lam2*TvA4 + lam3*TvA5 + lam4*TvA6 + lam5*TvA8 + lam6*TvA11 + lam7*TvA15 + lam8*TvA16 + lam9*TvA18
AFF_EMP =~ lam10*TvA1 + lam11*TvA2 + lam12*TvA7 + lam13*TvA9 + lam14*TvA10 + lam15*TvA12 + lam16*TvA13 + lam17*TvA14 + lam18*TvA17 + lam19*TvA19 + lam20*TvA20'

bes.res.conf <- cfa(Model1, data=Empathie_nieuw, 
                    estimator = "MLR",
                    group = "Vragenlijst")
summary(bes.res.conf)

Also, if your items are ordinal, you should be treating them as ordinal and not as continuous like you presently are.
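For example, a minimal sketch of declaring the items as ordinal, assuming the BES/TvA column names used in the syntax above; when the ordered= argument is supplied, lavaan estimates polychoric correlations and defaults to a categorical estimator (WLSMV) instead of MLR:

```r
# Sketch only: declare the Likert items as ordered-categorical.
# The item lists are an assumption about the dataset's column names.
ord.items <- c(paste0("BES", 1:20), paste0("TvA", 1:20))

bes.res.conf <- cfa(Model1, data = Empathie_nieuw,
                    ordered = ord.items,   # treat items as ordinal
                    group = "Vragenlijst") # estimator defaults to WLSMV
summary(bes.res.conf, fit.measures = TRUE)
```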

Anniek Borghuis

Feb 5, 2019, 2:58:21 AM
to lavaan
Thank you all for your answers. @Christopher, I just want to check for configural invariance. The factor loadings will not be the same because of the different scales (5-point and 3-point Likert).
So it is probably, as some of you say, indeed impossible to do a configural analysis with these data.


Terrence Jorgensen

Feb 5, 2019, 5:54:02 AM
to lavaan
Factor loadings are not going to be the same, because of the different scales (5-point and 3-point Likert). 
Probably it is, as some of you say, indeed impossible to do a configural analysis with these data. 

Not necessarily. Given your description, it sounds like the 3-point Likert categories would be something like {positive, neutral, negative}, perhaps using emoticons for {smiley face, neutral face, frowny face}. 5-point Likert categories are probably only slightly more nuanced, still with a center "neutral" category, but with differentiation between "slightly" and "very" positive/negative responses.

From the perspective of an Item Factor Analysis model, the thresholds that individuals with mild intellectual disability would need to "cross" from negative to neutral (or from neutral to positive) should be analogous to the thresholds that the general population would have to "cross" from slightly negative to neutral (or from neutral to slightly positive). So you could start with a configural model in which those thresholds are constrained to equality across populations. You don't have any information from the mildly disabled population to differentiate between different "levels" or "amounts" of negative/positive empathy, so those more extreme thresholds would simply remain freely estimated in the general population's model.

This is how you can use the same label to constrain the appropriate thresholds:

BES3 | t1 + gen3_t2*t2 + gen3_t3*t3 + t4 # likewise for each old item

TvA3 | gen3_t2*t1 + gen3_t3*t2 # likewise for each new item

Because two thresholds per item will be constrained to equality in your configural model, you would need to free each NEW item's intercept and residual variance for the mildly disabled group.  Note that this requires you to set parameterization = "theta" and use the ordered= argument to identify your items as discrete when you fit the model.

TvA3 ~ NA*1      # intercept
TvA3 ~~ NA*TvA3  # residual variance

Now, you can check whether your configural model fits well, on the assumption that those particular thresholds between categories are analogous between populations (which I think is quite a fair assumption).  If it fits well, you can then test more restrictive levels of invariance by constraining loadings, then intercepts, ...

Note that because intercepts in the general population are fixed to 0 for identification, testing whether the intercepts are equivalent across groups simply requires removing the freed intercepts from the syntax (or explicitly changing them to TvA3 ~ 0*1). Likewise, testing equivalence of residual variances simply requires removing the freed variances from the model syntax (or explicitly changing them to TvA3 ~~ 1*TvA3).
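For concreteness, a sketch of how these pieces could fit together, abbreviated to a single item pair (BES3/TvA3); the real syntax would use the full loading lists from Model1 and repeat the threshold, intercept, and residual-variance lines for every old/new item pair. The item lists passed to ordered= are an assumption about the dataset's column names:

```r
Model.conf <- '
group: 1
COG_EMP =~ BES3 + BES6 + BES9             # etc.; full lists as in Model1
BES3 | t1 + gen3_t2*t2 + gen3_t3*t3 + t4  # label the two middle thresholds

group: 2
COG_EMP =~ TvA3 + TvA4 + TvA5             # etc.
TvA3 | gen3_t2*t1 + gen3_t3*t2  # same labels: constrained equal across groups
TvA3 ~ NA*1                     # free the new item intercept
TvA3 ~~ NA*TvA3                 # free the new item residual variance
'

fit.conf <- cfa(Model.conf, data = Empathie_nieuw,
                ordered = c(paste0("BES", 1:20), paste0("TvA", 1:20)),
                parameterization = "theta",
                group = "Vragenlijst")
summary(fit.conf, fit.measures = TRUE)
```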

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam
