What Terrence said, in terms of methodology, is approximately true.
Instead of saying "unbiased", he should have said "asymptotically
unbiased". Very few things are technically unbiased in statistics: the
sample mean is, when we are talking about i.i.d. data, and the
Horvitz-Thompson survey estimator of a total is, when we are talking
about data with no nonresponse. Everything else is biased, period. The
estimates and standard errors that we get in SEM are biased, biased,
biased, biased.
You may get an unbiased estimate of the means and of the covariance
matrix, and then you twist them and screw them and stretch them and
reverse them with nonlinear maximization, and then again with the
computations that the standard errors require on top of that. (Read
Browne 1984 to get a feeling for just how complex those computations
are.)
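(For the record: in the usual textbook notation, with \pi_i the
inclusion probability of unit i and I_i its sample-membership
indicator, unbiasedness of the Horvitz-Thompson estimator of the total
is a one-liner:

    \hat{T}_{HT} = \sum_{i \in s} y_i / \pi_i ,
    E[ \hat{T}_{HT} ] = \sum_{i \in U} (y_i / \pi_i) E[ I_i ]
                      = \sum_{i \in U} y_i = T ,

and it rests entirely on knowing the \pi_i, which is exactly what
nonresponse destroys.)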
There are two main issues involved. First, the standard errors for
standardized coefficients are obtained through the delta method, and
with all the nonlinearities involved in that, the promised land of
asymptotia is pushed a little bit further away. Any nonlinearity
introduces or exacerbates small-sample biases, and introduces or blows
up higher-order moments like the skewness and kurtosis
of the sampling distributions. Take a log of a positively distributed
variable, and the mean of the logs is not equal to the log of the mean
(read up on the lognormal distribution). What the delta method says is
that as the sampling distribution gets tighter around the population
value (which we hope to achieve with larger sample sizes), the
variance of the nonlinear transformation gets tighter, as well, in
some predictable fashion. However, this is only an approximation --
and as an approximation, by Murphy's law, it is usually biased down,
making the standard errors anticonservative. Note that the standard
errors for unstandardized coefficients are themselves obtained via
delta-method approximations, and as such are less reliable than, say,
the standard errors for the item means (as if anybody is interested in
that trivia). So unstandardized standard errors aren't that great to
begin with, but by convoluting things and producing standardized
coefficients, you make things even less great.
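To spell out the univariate version of the delta method (standard
notation, nothing SEM-specific): if \hat\theta is approximately normal
around \theta and g is smooth, then

    Var[ g(\hat\theta) ] \approx [ g'(\theta) ]^2 Var[ \hat\theta ],
    so   se[ g(\hat\theta) ] \approx | g'(\hat\theta) | se[ \hat\theta ].

Everything beyond the first derivative is thrown away, and that is
where the small-sample bias and the extra skewness and kurtosis sneak
back in.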
The second issue is that most analytical derivations (have to) assume
that the model is correctly specified. When it isn't, things get
iffy: you may end up dividing your coefficient by the wrong quantity,
since the standardizing variances come from the model-implied
covariance matrix, which is now wrong, too. In a more subtle way, when
the SEM model is incorrect, ADF/WLS standard errors are too small.
They are not necessarily disastrously small, but enough to get worried
about (in my simulations, nominal 95% CIs had actual coverage
somewhere in the upper 80s; in some disastrous examples in other areas
of statistics, I have seen simulated actual coverage of 20%).
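If you want to see what a coverage simulation looks like, here is a
minimal sketch in Python/numpy. It is a toy version (a delta-method CI
for the log of a mean of skewed data), not my SEM/WLS setup; the
sample size, distribution, and seed are just illustrative choices:

    import numpy as np

    rng = np.random.default_rng(20180101)
    n, reps = 25, 10_000           # deliberately small sample
    true_mean = 1.0                # exponential(1): positive and skewed
    target = np.log(true_mean)     # parameter of interest: log of the mean

    covered = 0
    for _ in range(reps):
        x = rng.exponential(true_mean, size=n)
        m = x.mean()
        se_m = x.std(ddof=1) / np.sqrt(n)
        # delta method: d/dm log(m) = 1/m, so se[log(m)] ~= se_m / m
        half = 1.96 * se_m / m
        covered += (np.log(m) - half <= target <= np.log(m) + half)

    print(f"actual coverage of the nominal 95% CI: {covered / reps:.3f}")

With data this skewed and n this small, the printed coverage typically
lands below the nominal 95% -- the same anticonservative direction as
above, if milder.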
Terrence points to the possibility of transforming the unstandardized
CIs, or of using likelihood profiles. These are good ideas -- but then
again, the likelihood profile mostly makes sense when the model is
exactly right and you have a multivariate normal distribution to deal
with. With non-normal data, you don't have that luxury.
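(The endpoint-transformation idea does work cleanly when the
transformation is monotone in a single parameter: for increasing g,
P( l < \theta < u ) = P( g(l) < g(\theta) < g(u) ), so the transformed
interval inherits whatever coverage the unstandardized one had. The
catch with standardized coefficients is that the standardization
factor -- something like b \hat\sigma_x / \hat\sigma_y -- involves
other estimated quantities, so the transformation is only exact if you
pretend those are fixed.)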
-- Stas Kolenikov, PhD, PStat (ASA, SSC) @StatStas
-- Senior Scientist, Abt Associates @AbtDataScience
-- Program Chair (2018), Survey Research Methods Section of the
American Statistical Association
-- Opinions stated in this email are mine only, and do not reflect the
position of my employer
--
http://stas.kolenikov.name