Model 1: RMSEA=0.135 [0.125; 0.145], SRMR=0.059, CFI=0.871, TLI=0.831, AIC=18633.493, BIC=18743.194
Model 2: RMSEA=0.141 [0.130; 0.153], SRMR=0.058, CFI=0.875, TLI=0.834, AIC=16891.818, BIC=16987.807
Model 1
Class: lavaan
Call: lavaan::lavaan(model = m1, data = data.set, model.type = "cfa", ...
Model 2
Class: lavaan
Call: lavaan::lavaan(model = m2, data = data.set, model.type = "cfa", ...
Variance test
H0: Model 1 and Model 2 are indistinguishable
H1: Model 1 and Model 2 are distinguishable
w2 = 0.559, p = 1.8e-09
Non-nested likelihood ratio test
H0: Model fits are equal for the focal population
H1A: Model 1 fits better than Model 2
z = -132.171, p = 1
H1B: Model 2 fits better than Model 1
z = -132.171, p = < 2.2e-16
Model 1
Class: lavaan
Call: lavaan::lavaan(model = m1, data = data.set, model.type = "cfa", ...
AIC: 25452.965
BIC: 25562.666
Model 2
Class: lavaan
Call: lavaan::lavaan(model = m2, data = data.set, model.type = "cfa", ...
AIC: 20167.351
BIC: 20263.340
95% Confidence Interval of AIC difference (AICdiff = AIC1 - AIC2)
5207.322 < AICdiff < 5363.905
95% Confidence Interval of BIC difference (BICdiff = BIC1 - BIC2)
5221.035 < BICdiff < 5377.617
Do you think that the Vuong test is actually a reasonable way to compare the models in this case? The extreme p-values look suspicious to me. If not, what would you recommend instead?
Model 1: Two factors with 5 items each, plus one additional item that is allowed to load on both factors (11 items in total).
Model 2: The same two factors with 5 items each, without the additional item (10 items in total).
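For reference, the two specifications look roughly like this in lavaan syntax (a sketch only: the item names `x1`–`x11`, the object name `data.set`, and the `cfa()` defaults are placeholders for my actual code):

```r
library(lavaan)
library(nonnest2)

# Model 1: two factors, five items each, plus x11 loading on both factors
m1 <- '
  F1 =~ x1 + x2 + x3 + x4 + x5 + x11
  F2 =~ x6 + x7 + x8 + x9 + x10 + x11
'

# Model 2: the same two factors without the additional item
m2 <- '
  F1 =~ x1 + x2 + x3 + x4 + x5
  F2 =~ x6 + x7 + x8 + x9 + x10
'

fit1 <- cfa(m1, data = data.set)
fit2 <- cfa(m2, data = data.set)

# Non-nested model comparison: Vuong test and AIC/BIC difference CIs
vuongtest(fit1, fit2)
icci(fit1, fit2)
```

Note that the two models are fit to different sets of observed variables (11 vs. 10 items), which is part of what makes me unsure whether the likelihood-based comparison is meaningful here.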