Hi Ed,
Thank you very much for this example — it really helped clarify what setting fixed.x = FALSE does.
What I initially tried to achieve was a comparison of competing theoretical models (moderation, mediation, and independent effects) in which the same observed variables are included, but parameters not implied by a given theory are fixed to zero. For my theoretical question, however, this felt rather stringent: my goal is not so much to test whether constraining specific parameters to zero improves fit, but to evaluate which theoretical structure best explains the data. In that sense, the independent-effects model does not need the moderation effect to be exactly zero to be preferred; it might simply be favored on grounds of parsimony when predictive performance is similar.
Using this approach, my models were specified as follows (with zero-fixed paths included to keep the covariate structure identical across models):
Model1 <- "

With these specifications, the models showed very similar predictive performance (LOO comparisons with ELPD differences of roughly 0.5 standard errors), so no clear favorite emerged, although one could argue that the independent-effects model would be preferred on grounds of parsimony.
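In sketch form (illustrative only, not my exact syntax; the variable names CR, BAG, and cognition follow the discussion, and CRxBAG is a product term precomputed in the data frame), the three nested structures were along these lines:

```r
## Hypothetical sketch: three competing structures over the same observed
## variables, with paths not implied by a given theory fixed to zero so
## that all models reference the same set of variables.

# Independent effects: CR and BAG each predict cognition;
# interaction and CR -> BAG path fixed to zero for comparability.
model_independent <- "
  cognition ~ CR + BAG + 0*CRxBAG
  BAG ~ 0*CR
"

# Moderation: the CR x BAG interaction is freed; still no CR -> BAG path.
model_moderation <- "
  cognition ~ CR + BAG + CRxBAG
  BAG ~ 0*CR
"

# Mediation: CR -> BAG -> cognition; interaction fixed to zero.
model_mediation <- "
  cognition ~ CR + BAG + 0*CRxBAG
  BAG ~ CR
"
```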
I now tried an alternative specification using fixed.x = FALSE, removing the zero-fixed paths (including the covariates) and allowing the likelihood to be defined over all observed variables:
Model1 <- "

With this approach, Models 2 and 3 are preferred, with the mediation model slightly favored over the independent-effects model. However, across these models the posterior estimates consistently indicate that CR does not reliably predict BAG, nor that BAG reliably predicts cognition. Given this, it seems somewhat counterintuitive that the mediation model is favored.
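To illustrate the alternative setup (again only a sketch with assumed names; toy data stands in for the real dataset, and the slow MCMC fits are left commented out):

```r
## Sketch of the fixed.x = FALSE comparison. With fixed.x = FALSE the
## exogenous variables receive distributional parameters, so LOO/WAIC
## targets the joint density of all observed variables, not just the
## conditional density of the outcome.

# Toy data standing in for the real dataset (assumed names).
set.seed(1)
df <- data.frame(CR = rnorm(200), BAG = rnorm(200), cognition = rnorm(200))

# If any model references the interaction, the product term must exist
# in the data for every model fit to the same variables.
df$CRxBAG <- df$CR * df$BAG

# Example structure (mediation), written inline for self-containment:
model_mediation <- "
  cognition ~ CR + BAG + 0*CRxBAG
  BAG ~ CR
"

# Fits with the predictors inside the likelihood (commented out because
# they require Stan and take a while):
# library(blavaan)
# fit_med <- bsem(model_mediation, data = df, fixed.x = FALSE)
# fit_ind <- bsem(model_independent, data = df, fixed.x = FALSE)

# Pairwise comparison of ELPDs; blavCompare() reports the difference
# and its standard error:
# blavCompare(fit_med, fit_ind)
```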
Would you say that this latter comparison is therefore addressing a different question, namely, which model best explains the joint distribution of all observed variables, including BAG, whereas the former comparison focuses more directly on predictive performance for cognition?
And in that case, when using fixed.x = FALSE, is it still necessary to explicitly include the interaction term (e.g., CR × BAG) in all models to ensure comparability, or is that no longer required?
Sorry about the long message; I hope it comes across clearly.
Many thanks again for your help with this; it has been very insightful.
Best,
Carolien
To view this discussion visit https://groups.google.com/d/msgid/blavaan/7793d9ee-44e9-463e-80ee-bd4ee978beefn%40googlegroups.com.