Reverse coding when testing for measurement invariance

Timothy Wong

Feb 11, 2026, 6:55:02 PM
to lavaan
Hello lavaan group,

I have run a CFA with 9 ordinal items loading on a single common factor, using the WLS estimator with robust (mean- and variance-adjusted) corrections, i.e. WLSMV; N = 2583. The model fits the data well with no groups defined, and I would now like to test for measurement invariance across several grouping variables.
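For reference, the model is essentially the following (item names are placeholders and dat stands for my data frame):

library(lavaan)

model <- ' f =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9 '

fit <- cfa(model, data = dat,
           ordered = TRUE,        # all 9 items treated as ordinal
           estimator = "WLSMV")   # robust WLS (mean- and variance-adjusted)
summary(fit, fit.measures = TRUE)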

Two of the 9 items are negatively worded, i.e. lower scores on these items correspond to a higher latent score. Should I reverse-code these items before running the measurement invariance tests? Will leaving them negatively worded meaningfully affect the invariance results?
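If reverse coding is the way to go, I assume it would just be a simple recode before fitting, e.g. for 5-point items stored as numeric codes 1-5 (y8 and y9 being placeholders for the two negatively worded items):

dat$y8 <- 6 - dat$y8   # maps 1 <-> 5, 2 <-> 4, 3 stays 3
dat$y9 <- 6 - dat$y9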

Thanks

Edward Rigdon

Feb 12, 2026, 9:35:33 AM
to lav...@googlegroups.com
     IF the two items simply carry opposite signs but load on a common factor that represents precisely the same construct, then the negative loadings should have no impact on measurement invariance testing. My concern relates to whether oppositely valenced items do relate to precisely the same construct. There is some literature regarding whether constructs like Satisfaction / Dissatisfaction, Love / Hate, Prosperity / Poverty, etc., are two poles of a continuum or are in fact distinct psychological variables. The latter view suggests that such constructs range only from 0 to +infinity, not from -infinity to +infinity. One might also suspect a sigmoid function at the extremes of such a construct.
     In the past, I have encouraged researchers to use reverse-worded items as attention checks but then to simply discard them, as I suspect the items are generally contaminated. If you want to keep them--whether you reverse-code them or not--you will very likely want to include a residual covariance for these two items. You might actually try including a secondary factor, just to see if the factor could provide some additional predictive ability--which may quite possibly vary by group. However, if the factor does not predict, then the model will fail identification--and the loadings may be quite weak, though strong enough to imply poor fit if this additional covariance is not permitted.
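     In lavaan syntax, rough sketches of the two options (with y8 and y9 standing in for your two reverse-worded items) would be:

# Option 1: single content factor, residual covariance for the reverse-worded pair
model1 <- '
  f  =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9
  y8 ~~ y9
'

# Option 2: an orthogonal secondary (method) factor for the reverse-worded items.
# With only two indicators, its loadings are constrained equal and its variance
# fixed so it is identified; in a plain CFA this is essentially equivalent to
# the residual covariance, but the factor can be given structural relations in
# a larger model.
model2 <- '
  f   =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9
  neg =~ a*y8 + a*y9
  neg ~~ 1*neg
  f   ~~ 0*neg
'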
     The negative loadings will affect calculation of Cronbach's alpha, and maybe composite reliability, depending on exactly how you calculate that, but for calculation purposes you could just reverse the signs of the loadings manually.

Timothy Wong

Feb 12, 2026, 8:13:12 PM
to lavaan
Thanks for the detailed reply, Edward. I hadn't considered testing a model with two factors (one for the positively worded and one for the negatively worded items). After adding a residual covariance between the two negatively worded items, model fit improved according to all the conventional fit statistics (chi-squared, CFI, TLI, RMSEA, etc.).
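The updated syntax and the invariance sequence I am planning are roughly as follows (placeholder names again; grp stands for one of the grouping variables, and I realise the recommended ordering of threshold vs. loading constraints for ordinal items may differ, e.g. via semTools::measEq.syntax()):

model <- '
  f  =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9
  y8 ~~ y9
'

configural <- cfa(model, data = dat, ordered = TRUE, estimator = "WLSMV",
                  group = "grp")
thresholds <- cfa(model, data = dat, ordered = TRUE, estimator = "WLSMV",
                  group = "grp", group.equal = "thresholds")
loadings   <- cfa(model, data = dat, ordered = TRUE, estimator = "WLSMV",
                  group = "grp", group.equal = c("thresholds", "loadings"))

lavTestLRT(configural, thresholds, loadings)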

I figured that calculating scale reliability would require all loadings to be in the same direction. I tried semTools::compRelSEM() on my model (which contains negative factor loadings due to the negatively worded items) and it gave me a lower-than-expected omega total for the general factor. I was able to fix this by manually extracting the lambda matrix (along with psi and theta), flipping the negative loadings as you suggested, and plugging them back into the inner functions of compRelSEM().
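In case it is useful to anyone else, the sign-flipping step was essentially this (a sketch for a single-group fit; the generic omega-total formula below may not match exactly what compRelSEM() does internally for categorical indicators):

est    <- lavInspect(fit, "est")
lambda <- abs(est$lambda)    # flip the signs of the negative loadings
theta  <- est$theta          # residual variances (plus the y8 ~~ y9 covariance)
psi    <- est$psi            # factor variance

omega_total <- sum(lambda)^2 * psi[1, 1] /
  (sum(lambda)^2 * psi[1, 1] + sum(theta))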