Significance testing of regression slope differences across groups


Philip Ström

Dec 7, 2023, 10:44:58 AM12/7/23
to lavaan
Hi there!

I am currently comparing slopes for two different groups across a number of multi-group regression models in lavaan. I have found two different ways of achieving this, but I am unsure which (if either) is more correct. I hope someone here can help me out. Thanks!

Method 1: Defining and testing parameter difference scores
One method I've tried is to define difference scores for the parameters, as in the code below. This outputs estimates, z-scores and p-values as expected, but I am unsure about the null model here: what am I comparing against?

model <- '
  # Regression
  burnout_8 ~ c(a1,a2)*kon_8 + c(b1,b2)*alder_8 + c(c1,c2)*emodemands_8 +
              c(d1,d2)*qinsec_8 + c(e1,e2)*phydemands_8 + c(f1,f2)*autonomy_8 +
              c(g1,g2)*learning_8 + c(h1,h2)*leadership_8 + c(i1,i2)*likvarbe_8_r

  # Intercept
  burnout_8 ~ 1

  # Coefficient differences
  diff_kon        := a1 - a2
  diff_alder      := b1 - b2
  diff_emodemands := c1 - c2
  diff_qinsec     := d1 - d2
  diff_phydemands := e1 - e2
  diff_autonomy   := f1 - f2
  diff_learning   := g1 - g2
  diff_leadership := h1 - h2
  diff_likvarbe   := i1 - i2
'
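For context, I fit this as a two-group model roughly as follows (the data-set name and grouping variable are placeholders here):

```r
# Minimal sketch; 'mydata' and 'group_var' are placeholder names.
library(lavaan)

fit <- sem(model, data = mydata, group = "group_var")

# The := rows appear in the output with a delta-method standard error,
# z-value and p-value for the test of H0: difference = 0.
summary(fit)
```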

Method 2: Specifying constrained vs. unconstrained model
The other method I've tried is to use the convenience arguments for constraining parameters across groups (group.equal and group.partial) to obtain one fully free model and one model where the slope of interest is constrained to be equal across groups, much like a measurement-invariance approach. The models are then compared using lavTestLRT(). This approach produces similar significance levels and (for me) a clearer baseline model, but it requires a separate model comparison for each independent variable and is much less convenient than Method 1.
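A sketch of what I mean, for a single slope (again with placeholder data-set and grouping-variable names; here the slope for kon_8 is the one being tested):

```r
# Minimal sketch; 'mydata' and 'group_var' are placeholder names.
library(lavaan)

model0 <- 'burnout_8 ~ kon_8 + alder_8 + emodemands_8 + qinsec_8 +
           phydemands_8 + autonomy_8 + learning_8 + leadership_8 + likvarbe_8_r'

# Fully free model: all slopes vary across groups
fit_free <- sem(model0, data = mydata, group = "group_var")

# Model with the slope of interest equated across groups; group.partial
# frees all remaining slopes so that only burnout_8~kon_8 is constrained
fit_eq <- sem(model0, data = mydata, group = "group_var",
              group.equal = "regressions",
              group.partial = c("burnout_8~alder_8", "burnout_8~emodemands_8",
                                "burnout_8~qinsec_8", "burnout_8~phydemands_8",
                                "burnout_8~autonomy_8", "burnout_8~learning_8",
                                "burnout_8~leadership_8", "burnout_8~likvarbe_8_r"))

# 1-df chi-square difference test for that one slope
lavTestLRT(fit_free, fit_eq)
```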

As mentioned, the two approaches produce similar results, but I am unsure about what is happening in Method 1. Is one method superior and "more correct" than the other?

Thanks.

Best regards,
Philip Ström,
PhD student, Stockholm University


Christian Arnold

Dec 8, 2023, 8:32:57 AM12/8/23
to lavaan
Hi Philip,

This is more or less the same discussion as the one here: https://groups.google.com/g/lavaan/c/ETuzu58gOPM/m/HXPv7rc1AQAJ

My 2 cents: chi-square difference tests are useful for omnibus hypotheses. If you want to evaluate each difference individually, the difference between the two methods is negligible. However, Method 1 is less computationally intensive and more flexible. For example, you can apply the bootstrap to obtain confidence intervals (the speed advantage is then gone, but that is due to the bootstrap itself).
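As a sketch of that bootstrap route ('model' is the Method 1 syntax; the data-set name, grouping variable, and number of draws are placeholders):

```r
# Sketch: bootstrap CIs for the defined difference parameters.
# 'mydata' and 'group_var' are placeholder names.
library(lavaan)

fit_boot <- sem(model, data = mydata, group = "group_var",
                se = "bootstrap", bootstrap = 1000)

# Percentile bootstrap CIs, including for the := difference rows
parameterEstimates(fit_boot, boot.ci.type = "perc", level = 0.95)
```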

HTH

Christian

Philip Ström

Dec 21, 2023, 10:40:53 AM12/21/23
to lavaan
Thanks a lot for your response, Christian. Much appreciated.

I wish you a Merry Christmas and a Happy New Year.

All the best,
Philip