Hi Marcus and Shu Fai,
here are my two cents, which may be a bit off from your question (hopefully not too far). Others may strongly disagree, but a discussion like this is nice and perhaps helpful. If you want to focus on the effect size (almost a definitional question) of each IV on the DV, then I would compare the standardized betas. The squared standardized beta can be interpreted as unique variance explained. See here, in a slightly different context:
https://quantpsy.org/pubs/lachowicz_preacher_kelley_2018.pdf. Implemented in the MBESS package:
https://cran.r-project.org/web/packages/MBESS/index.html

I am seriously unsure whether this approach is ultimately satisfactory. Consider a simple case: multiple regression with three IVs explaining the DV. R^2 can then be calculated as follows:
R^2 = b1^2 + b2^2 + b3^2 + 2 * c12 * b1 * b2 + 2 * c13 * b1 * b3 + 2 * c23 * b2 * b3
where b1..b3 are the standardized betas and c12, c13, c23 are the correlations between the corresponding IVs. A demo with lavaan is attached below.
Unique variance explained may be defined as the squared beta (see Lachowicz, Preacher, and Kelley, linked above). Unfortunately, the betas also appear in the cross-product terms of the formula. It becomes particularly interesting if, for example, one of the betas has a negative sign and a suppressor is present.
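To make the suppressor point concrete, here is a minimal sketch (not from the original post; all variable names and coefficients are made up for illustration): x2 is built to be correlated with the noise part of x1 but (nearly) uncorrelated with y, so its standardized beta comes out clearly negative even though b2^2 alone would look like a positive "unique variance".

```r
set.seed(1)
n      <- 10000
signal <- rnorm(n)
noise  <- rnorm(n)
x1 <- signal + noise   # measures the signal, contaminated by noise
x2 <- noise            # correlated with x1, ~uncorrelated with y
y  <- signal + rnorm(n)

# Standardized regression: x2 acts as a classical suppressor
b <- coef(lm(scale(y) ~ scale(x1) + scale(x2)))
b  # beta for x2 is clearly negative; beta for x1 is inflated
cor(x2, y)  # close to zero
```

The cross-product term 2 * c12 * b1 * b2 is then negative, which is exactly why summing the squared betas alone can be misleading.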
Against this (hopefully reasonably correctly) explained background, a number of methods have been developed to determine the relative effect sizes (importance) of the IVs. The idea is (roughly) as follows: IV1 explains x1%, ..., IVn explains xn%, with sum(x1, ..., xn) = 1. Resampling (usually the bootstrap) can be used to test whether x1, ..., xn differ significantly. The main methods for this idea can be found in the relaimpo package:
https://cran.r-project.org/web/packages/relaimpo/index.html

The bad news: relaimpo does not work with latent variables. I haven't read the article linked above (Gu, 2022, Assessing the Relative Importance of Predictors in Latent Regression Models), but it sounds like a Shapley decomposition was implemented there, which is one of the methods to determine relative importance (beware of my half-knowledge). So that *might* be interesting to you. Detached from the LV problem, here's my immature take on the Shapley decomposition: the method is theoretically sound since it is based on the work of Shapley and thus draws on ideas from cooperative game theory. Shapley himself did not develop the method for multiple regression, and I have doubts that all of Shapley's basic assumptions are actually met. Nevertheless, the method seems to produce quite useful results (which can certainly be viewed critically, though).
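For manifest variables, a minimal relaimpo sketch might look like this (my own toy data, not from the original question; assumes relaimpo is installed). The "lmg" metric is the Shapley-style decomposition: each IV's contribution to R^2 averaged over all orderings, with the shares summing to 1 when rela = TRUE:

```r
library(relaimpo)  # install.packages("relaimpo") if needed

set.seed(123)
n  <- 1000
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)  # correlated predictors
x3 <- rnorm(n)
y  <- 0.3 * x1 + 0.4 * x2 + 0.3 * x3 + rnorm(n)

fit <- lm(y ~ x1 + x2 + x3)
ri  <- calc.relimp(fit, type = "lmg", rela = TRUE)
ri$lmg  # relative shares of R^2, summing to 1
```

For significance tests of the shares, boot.relimp() in the same package runs the bootstrap mentioned above.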
HTH
Christian
library(lavaan)
# Population model for data generation
pop.model <- "
y ~ 0.3 * x1 + 0.4 * x2 + 0.3 * x3
x1 ~~ 0.7 * x2 + 0.3 * x3
x2 ~~ 0.4 * x3
"
set.seed(123)
data <- simulateData(pop.model, sample.nobs = 1000)
# Analysis model with labeled parameters
model <- "
y ~ b1 * x1 + b2 * x2 + b3 * x3
x1 ~~ c12 * x2 + c13 * x3
x2 ~~ c23 * x3
"
fit <- sem(model, data = data)
# lavaan result
lavInspect(fit, "rsquare")["y"]
# By hand, using the decomposition above
pe <- parameterEstimates(fit, standardized = TRUE)
b1 <- pe[pe$label == "b1", "std.all"]
b2 <- pe[pe$label == "b2", "std.all"]
b3 <- pe[pe$label == "b3", "std.all"]
c12 <- pe[pe$label == "c12", "std.all"]
c13 <- pe[pe$label == "c13", "std.all"]
c23 <- pe[pe$label == "c23", "std.all"]
b1^2 + b2^2 + b3^2 + 2 * c12 * b1 * b2 + 2 * c13 * b1 * b3 + 2 * c23 * b2 * b3  # matches the lavaan R^2