lavTables (G2, X2); RMSEA vs lavResiduals

Fran

Aug 27, 2019, 8:03:29 AM
to lavaan
Hi,

My model has 4 LVs and 1 observed variable (7-point Likert scale, agreement). The outcome is really skewed, and the other variables are not normally distributed either. I declared the skewed outcome variable as ordered and I am analysing the data with the DWLS estimator (N = 334).
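
For reference, a rough sketch of the kind of call described above (the model syntax and data-frame names are placeholders, not my actual code):

> fit <- sem(model, data = mydata,
+            ordered = c("out_1", "out_2", "out_3", "out_4"),
+            estimator = "DWLS")
> fitMeasures(fit, c("cfi", "rmsea", "srmr"))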

My fit indices: CFI = 0.989, RMSEA = 0.051, SRMR = 0.059; model fit test statistic = 178.868.

I then checked the residuals with lavResiduals(): is the computation of SRMR based on these values?

Now I am looking at lavTables() and I am not sure how to interpret the output below (Jöreskog & Moustaki's paper didn't solve my doubts). Could you help me?

> lavTables(fit, p.value = TRUE)
    id   lhs   rhs nobs row col obs.freq obs.prop est.prop       X2
1    1 out_1 out_2  334   1   1        4    0.012    0.161   46.192
2    1 out_1 out_2  334   2   1        4    0.012    0.043    7.406
3    1 out_1 out_2  334   3   1        0    0.000    0.012    3.952
4    1 out_1 out_2  334   4   1        1    0.003    0.001    1.607
5    1 out_1 out_2  334   5   1        0    0.000    0.000    0.004
6    1 out_1 out_2  334   6   1        0    0.000    0.000    0.000
7    1 out_1 out_2  334   7   1        0    0.000    0.000    0.000
8    1 out_1 out_2  334   1   2        1    0.003    0.015    3.318
9    1 out_1 out_2  334   2   2        2    0.006    0.032    6.975
10   1 out_1 out_2  334   3   2        2    0.006    0.023    4.297
11   1 out_1 out_2  334   4   2        0    0.000    0.004    1.447
12   1 out_1 out_2  334   5   2        0    0.000    0.000    0.046
13   1 out_1 out_2  334   6   2        0    0.000    0.000    0.000
14   1 out_1 out_2  334   7   2        0    0.000    0.000    0.000
15   1 out_1 out_2  334   1   3        0    0.000    0.007    2.338
16   1 out_1 out_2  334   2   3        1    0.003    0.036   10.044
17   1 out_1 out_2  334   3   3        7    0.021    0.066   10.331
18   1 out_1 out_2  334   4   3        3    0.009    0.035    6.563
19   1 out_1 out_2  334   5   3        2    0.006    0.004    0.387
20   1 out_1 out_2  334   6   3        0    0.000    0.000    0.027
21   1 out_1 out_2  334   7   3        0    0.000    0.000    0.000
22   1 out_1 out_2  334   1   4        0    0.000    0.000    0.149
23   1 out_1 out_2  334   2   4        0    0.000    0.007    2.429
24   1 out_1 out_2  334   3   4        4    0.012    0.041    6.752
25   1 out_1 out_2  334   4   4       11    0.033    0.066    5.598
26   1 out_1 out_2  334   5   4        2    0.006    0.023    4.295
27   1 out_1 out_2  334   6   4        1    0.003    0.002    0.355
28   1 out_1 out_2  334   7   4        0    0.000    0.000    0.001
29   1 out_1 out_2  334   1   5        2    0.006    0.000  867.683
30   1 out_1 out_2  334   2   5        0    0.000    0.001    0.222
31   1 out_1 out_2  334   3   5        0    0.000    0.011    3.823
32   1 out_1 out_2  334   4   5        8    0.024    0.064    8.455
33   1 out_1 out_2  334   5   5       30    0.090    0.092    0.019
34   1 out_1 out_2  334   6   5        6    0.018    0.036    2.958
35   1 out_1 out_2  334   7   5        2    0.006    0.001    7.557
36   1 out_1 out_2  334   1   6        0    0.000    0.000    0.000
37   1 out_1 out_2  334   2   6        0    0.000    0.000    0.000
38   1 out_1 out_2  334   3   6        0    0.000    0.000    0.040
39   1 out_1 out_2  334   4   6        1    0.003    0.004    0.121
40   1 out_1 out_2  334   5   6        4    0.012    0.033    4.552
41   1 out_1 out_2  334   6   6       58    0.174    0.077   40.244
42   1 out_1 out_2  334   7   6        9    0.027    0.021    0.492
43   1 out_1 out_2  334   1   7        0    0.000    0.000    0.000
44   1 out_1 out_2  334   2   7        0    0.000    0.000    0.000
45   1 out_1 out_2  334   3   7        0    0.000    0.000    0.000
46   1 out_1 out_2  334   4   7        0    0.000    0.000    0.004
47   1 out_1 out_2  334   5   7        0    0.000    0.001    0.265
48   1 out_1 out_2  334   6   7        2    0.006    0.016    2.223
49   1 out_1 out_2  334   7   7      167    0.500    0.062 1041.618
50   2 out_1 out_3  334   1   1        2    0.006    0.134   40.701
51   2 out_1 out_3  334   2   1        2    0.006    0.040    9.627
52   2 out_1 out_3  334   3   1        2    0.006    0.023    4.286
53   2 out_1 out_3  334   4   1        1    0.003    0.008    1.076
54   2 out_1 out_3  334   5   1        0    0.000    0.001    0.453
55   2 out_1 out_3  334   6   1        1    0.003    0.000   23.221
56   2 out_1 out_3  334   7   1        1    0.003    0.000 1698.675
57   2 out_1 out_3  334   1   2        2    0.006    0.029    6.061
58   2 out_1 out_3  334   2   2        5    0.015    0.030    2.623
59   2 out_1 out_3  334   3   2        1    0.003    0.031    8.529

... (more rows omitted)

> lavTables(fit, 2L, stat="G2", type = "table", p.value=TRUE)
      lhs   rhs nobs df      G2 G2.pval
1   out_1 out_2  334 35 689.295       0
50  out_1 out_3  334 35 720.974       0
99  out_1 out_4  334 35 676.606       0
148 out_2 out_3  334 35 702.592       0
197 out_2 out_4  334 35 647.659       0
246 out_3 out_4  334 35 633.876       0

Thank you very much,

Fran

Terrence Jorgensen

Aug 28, 2019, 6:17:07 AM
to lavaan
I then checked the residuals with lavResiduals(): is the computation of SRMR based on these values?

Yes, it is (the square root of) the average of the squared standardized mean and (co)variance residuals.  Or if type = "bollen", it is the average of the squared standardized-mean and correlation residuals.  Sounds the same, but Bollen (1989) standardized the observed statistics and the model-implied statistics separately, then took the difference (which will always make it look like the variances are perfectly estimated, even when they are not).  Bentler (the default, the EQS method), on the other hand, took the unstandardized residuals (the basis of the regular RMR) and standardized those using the model-implied SDs.  Without restrictions on the observed variables' residual variances, these two approaches are typically identical.
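
For example, a minimal sketch using a built-in lavaan data set, just to show where the two flavors of residuals come from (your own fitted object would take the place of fit here):

> library(lavaan)
> HS.model <- 'visual  =~ x1 + x2 + x3
+              textual =~ x4 + x5 + x6'
> fit <- cfa(HS.model, data = HolzingerSwineford1939)
> ## Bentler/EQS standardization (the default): unstandardized residuals
> ## divided by the model-implied SDs; the summary includes the SRMR values.
> lavResiduals(fit, type = "cor.bentler")$summary
> ## Bollen-style: difference between observed and model-implied correlations.
> lavResiduals(fit, type = "cor.bollen")$summary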
 
Now I am looking at lavTables() and I am not sure how to interpret the output below (Jöreskog & Moustaki's paper didn't solve my doubts). Could you help me?

Both are based on 2-way tables, but the first one uses the default option (type = "cells"), so you are seeing the number of people (in the obs.freq column) and the corresponding proportion of the total sample (in the obs.prop column) for each response pattern in each 2-way table.  For example, the top line tells you that there are 4 people (1.2% of your sample) who responded in the first category of both "out_1" and "out_2".  The second line tells you that 4 people responded 2 to "out_1" (the "row" variable) but 1 to "out_2" (the "col" variable), and so on.

est.prop is the estimate of obs.prop implied by the model parameters, and X2 is the chi-squared statistic comparing these predicted vs. observed values to see whether they are quite different.  Notice from the very large X2 values that the model has the hardest time reproducing extreme responses (e.g., when either "row" or "col" is an extreme category like 1 or 7).
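
Because the per-cell output is returned as a data frame, you can sort it to see the worst-fitting cells at a glance (tab is just a placeholder name):

> tab <- lavTables(fit)                              # type = "cells" is the default
> head(tab[order(tab$X2, decreasing = TRUE), ], 10)  # the 10 largest X2 values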

The second one sets type = "table", so you are seeing a simultaneous test statistic (G2) that tests the equivalence of each entire 2-way table's observed and estimated proportions.  G2 is also asymptotically distributed as a chi-squared random variable (under the null hypothesis of perfect fit) with the degrees of freedom indicated in the "df" column, but G2 deviates from a true chi-squared distribution when there are lots of zeros in a 2-way table (which is the case with your data).  It is similar to a chi-squared test of independence (see the ?chisq.test help page or any introductory statistics book), except that the expected values are determined by the SEM parameters instead of derived from the observed table's marginal cell counts.
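
To make the connection concrete, here is a rough sketch of how G2 could be computed by hand from the per-cell output for a single variable pair (illustrative only, not lavaan's internal code; zero-frequency cells are dropped because their 0 * log(0) terms contribute nothing):

> cells <- subset(lavTables(fit), id == 1)  # the out_1 vs. out_2 table
> pobs <- cells$obs.prop                    # observed cell proportions
> pexp <- cells$est.prop                    # model-implied cell proportions
> N    <- cells$nobs[1]
> keep <- pobs > 0
> G2   <- 2 * N * sum(pobs[keep] * log(pobs[keep] / pexp[keep]))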

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Fran

Aug 28, 2019, 1:02:55 PM
to lavaan
Thank you VERY much for explaining the lavTables output to me in detail. I woke up and said to my housemate: TERRENCE ANSWERED MEEE!

My apologies for the off-topic message :)

 Fran