Hi,
I'm estimating the correlation between “trait” and three different latent factors, F_unique, F_common, and F_original, each in a separate model. In the output (I can send it via e-mail if needed), I noticed that the ratio between the unstandardized estimate and its standard error (unstand/unstand_SE) differs substantially from the ratio between the standardized estimate and its standard error (STD_genotype/STD_genotype_SE). It seems that the reported p-value corresponds only to the unstandardized estimate.
I thought this difference might arise because the unstandardized estimates incorporate additional noise, since they also reflect the error in the SNP-heritability estimates, while the standardized estimates may not be affected in the same way. Is this interpretation correct, or is there another explanation for the discrepancy? Is it common for the two ratios to differ this much? For now, I have used the STD_genotype estimate and calculated the corresponding p-value from its z-score. Would you recommend this approach, or would you advise sticking with the unstandardized estimate? How would the interpretation differ?
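For reference, this is the calculation I used to get a two-sided p-value from the standardized estimate and its SE (a minimal sketch in Python; the numbers below are made up for illustration, not from my actual output):

```python
import math

def z_to_p(est, se):
    """Two-sided Wald p-value from an estimate and its standard error:
    z = est / se, p = 2 * (1 - Phi(|z|)), with Phi the standard normal CDF."""
    z = est / se
    # standard normal CDF via the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# hypothetical STD_genotype estimate and SE, for illustration only
std_genotype, std_genotype_se = 0.25, 0.08
p = z_to_p(std_genotype, std_genotype_se)  # z = 3.125, p is about 0.0018
print(p)
```

The same arithmetic can be done in R with `2 * pnorm(abs(est / se), lower.tail = FALSE)`.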
Thanks for the assistance!
MODEL:
model <- glue("F_common =~ NA * trait2 + trait1
F_unique =~ NA * trait1
# fix latent variances to 1 (standardized factors)
F_common ~~ 1 * F_common
F_unique ~~ 1 * F_unique
# factors uncorrelated
F_common ~~ 0 * F_unique
# residual variances and covariance fixed to 0
trait2 ~~ 0 * trait2
trait1 ~~ 0 * trait1
trait2 ~~ 0 * trait1
# correlations of interest with the third trait
F_common ~~ {trait3}
F_unique ~~ {trait3}
")