Interesting. I just tried comparing the Std.lv column with std.lv=TRUE using the ?cfa example, and found the same issue. Thanks for pointing this out.
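For reference, here is a minimal sketch of the comparison, using the three-factor HolzingerSwineford1939 model from ?cfa (the model syntax is the one shipped with lavaan; the object names are just my own):

library(lavaan)

## three-factor CFA from the ?cfa example
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

## default identification: first loading of each factor fixed to 1
fit.marker <- cfa(HS.model, data = HolzingerSwineford1939)
summary(fit.marker, standardized = TRUE)   # look at the Std.lv column

## same model, identified by fixing the latent variances to 1 instead
fit.stdlv <- cfa(HS.model, data = HolzingerSwineford1939, std.lv = TRUE)
summary(fit.stdlv)

If the two parameterizations were equivalent, the estimates from the second fit would match the Std.lv column of the first; that is the comparison that turns up the discrepancy.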
The problem is due to a new argument, which provides a third option to set the latent scale: effects.coding="loadings". The previous behavior set the value of auto.fix.first as the opposite of std.lv (so if one was TRUE, the other was FALSE). Now, when you set std.lv=TRUE, it leaves auto.fix.first=TRUE.

In your output, did you notice that the latent variances were all 1, but so were all the first factor loadings? So the reason results were inconsistent is probably that you were comparing models that were not statistically equivalent, because with std.lv=TRUE, there were additional constraints.
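If you want to see the extra constraints directly, the parameter table makes them obvious. A sketch, reusing HS.model and the std.lv=TRUE fit from above; with the affected lavaan version, you should see both sets of identification constraints in the same model:

## std.lv = TRUE no longer flips auto.fix.first here,
## so both identification constraints end up in one model
pt <- parameterTable(fit.stdlv)

## first loading of each factor: fixed (free == 0) with value 1
subset(pt, op == "=~" & rhs %in% c("x1", "x4", "x7"))

## latent variances: also fixed to 1
subset(pt, op == "~~" & lhs == rhs & lhs %in% c("visual", "textual", "speed"))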
For now, you can get around this by explicitly setting auto.fix.first=FALSE along with std.lv=TRUE. I will bring this to Yves' attention to update the internal checks, so that setting std.lv=TRUE automatically sets auto.fix.first=FALSE again.
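Until that is fixed, a sketch of the workaround (this assumes the cfa() wrapper passes a user-supplied auto.fix.first through to lavaan(), which it should, since those are documented as defaults; otherwise you could call lavaan() directly and set the auto.* options yourself):

## free the first loadings explicitly, so only the latent variances
## are fixed to 1, restoring the usual std.lv = TRUE parameterization
fit.stdlv2 <- cfa(HS.model, data = HolzingerSwineford1939,
                  std.lv = TRUE, auto.fix.first = FALSE)
summary(fit.stdlv2, standardized = TRUE)

This fit should be statistically equivalent to the default marker-variable fit, and its estimates should match the Std.lv column there.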