Anouck,
LISREL developers attempted something like this based on the determinants of the covariance matrix and the residual covariance matrix. You can find criticism in Chapter 6 of Leslie Hayduk's book LISREL: Issues, debates and strategies (1996, JHU Press). The basic problem is that it involves reducing a multivariate phenomenon, assuming you have more than one endogenous variable, to a single number. Imagine accounting for X% of the variance in two orthogonal outcome variables. Now imagine the same X% accounted for in two perfectly correlated variables. How should the total variance explained score reflect such differences?
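To see the problem concretely, here is a toy determinant-based "total R-square" (a simplified stand-in, not the exact LISREL formula, and the covariance matrices are made up for illustration). In both scenarios each outcome has an individual R-square of .50, yet the single summary number comes out identical whether the outcomes are orthogonal or almost perfectly correlated:

```r
## Toy determinant-based summary: 1 - det(residual cov) / det(total cov).
## A simplified stand-in for the LISREL-style measure, not its exact formula.
total.r2 <- function(S.total, S.resid) 1 - det(S.resid) / det(S.total)

## Scenario 1: two orthogonal outcomes, each with R-square = .50
S1.total <- diag(2)
S1.resid <- 0.5 * diag(2)

## Scenario 2: two nearly perfectly correlated outcomes (r = .99),
## again each with R-square = .50
S2.total <- matrix(c(1, .99, .99, 1), 2, 2)
S2.resid <- 0.5 * S2.total

total.r2(S1.total, S1.resid)   # 0.75
total.r2(S2.total, S2.resid)   # 0.75 -- the same single number,
                               # despite very different structures
```

The determinant ratio only picks up the common scaling of the residuals, so the summary cannot distinguish the two cases, which is exactly the kind of information loss at issue.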
As an alternative, consider reporting a summary of the individual R-square values. At a minimum, report their range. Better yet, report a five-number summary to describe their distribution.
# The Holzinger and Swineford (1939) example
library(lavaan)
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- lavaan(HS.model, data = HolzingerSwineford1939,
              auto.var = TRUE, auto.fix.first = TRUE,
              auto.cov.lv.x = TRUE)
# R-square for each observed variable
fitRsquares <- lavInspect(fit, what = 'rsquare')
# Five-number summary: drop the mean (4th element) from summary()
summary(fitRsquares)[-4]
Keith
------------------------
Keith A. Markus
John Jay College of Criminal Justice, CUNY
http://jjcweb.jjay.cuny.edu/kmarkus
Frontiers of Test Validity Theory: Measurement, Causation and Meaning.
http://www.routledge.com/books/details/9781841692203/