Hi there!
I am currently comparing slopes for two different groups across a number of multi-group regression models in lavaan. I have found two different ways of achieving this, but I am unsure which (if either) is more correct. I hope someone here can help me out. Thanks!
Method 1: Defining and testing parameter difference scores
One of the methods I've tried is to define difference scores for the parameters, as in the code below. This outputs estimates, z-scores and p-values as expected, but I am unsure about the null hypothesis here: what exactly am I comparing against?
'# Regression
burnout_8 ~ c(a1,a2)*kon_8 + c(b1,b2)*alder_8 + c(c1,c2)*emodemands_8 +
            c(d1,d2)*qinsec_8 + c(e1,e2)*phydemands_8 + c(f1,f2)*autonomy_8 +
            c(g1,g2)*learning_8 + c(h1,h2)*leadership_8 + c(i1,i2)*likvarbe_8_r
burnout_8 ~ 1

# Coefficient differences
diff_kon := a1 - a2
diff_alder := b1 - b2
diff_emodemands := c1 - c2
diff_qinsec := d1 - d2
diff_phydemands := e1 - e2
diff_autonomy := f1 - f2
diff_learning := g1 - g2
diff_leadership := h1 - h2
diff_likvarbe := i1 - i2'
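For reference, a minimal sketch of how I fit this model; `dat` and `grp` are placeholder names for my data frame and grouping variable:

```r
library(lavaan)

# 'model' is the model string above
fit <- sem(model, data = dat, group = "grp")

# The := difference scores show up under "Defined Parameters",
# each with an estimate, z-score and p-value
parameterEstimates(fit)
```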
Method 2: Specifying constrained vs. unconstrained model
The other method I've tried is to use lavaan's convenience arguments for constraining parameters across groups (group.equal and group.partial) to obtain one fully free model and one model in which the slope of interest is constrained to be equal across groups, pretty much a measurement invariance (MI) approach. The models are then compared using lavTestLRT(). This approach produces similar significance levels and, to me, has a clearer baseline model, but it requires a separate model specification for each independent variable and is much less convenient than Method 1.
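Roughly, for a single predictor (kon_8) the comparison looks like this; again, `dat` and `grp` are placeholders for my data and grouping variable:

```r
library(lavaan)

model <- 'burnout_8 ~ kon_8 + alder_8 + emodemands_8 + qinsec_8 +
                      phydemands_8 + autonomy_8 + learning_8 +
                      leadership_8 + likvarbe_8_r'

# Fully free model: all slopes vary across groups
fit_free <- sem(model, data = dat, group = "grp")

# Constrained model: equate all slopes across groups, then re-free
# every slope except the one being tested (kon_8)
fit_eq <- sem(model, data = dat, group = "grp",
              group.equal   = "regressions",
              group.partial = c("burnout_8 ~ alder_8",
                                "burnout_8 ~ emodemands_8",
                                "burnout_8 ~ qinsec_8",
                                "burnout_8 ~ phydemands_8",
                                "burnout_8 ~ autonomy_8",
                                "burnout_8 ~ learning_8",
                                "burnout_8 ~ leadership_8",
                                "burnout_8 ~ likvarbe_8_r"))

# Likelihood ratio test of the single equality constraint
lavTestLRT(fit_free, fit_eq)
```

And this whole specification has to be repeated with a different group.partial list for each predictor, which is what makes it so much less convenient.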
As mentioned, the two approaches produce similar results, but I am unsure what is actually happening in Method 1. Is one method superior to, or "more correct" than, the other?
Thanks.
Best regards,
Philip Ström,
PhD student, Stockholm University