Beta prior on latent variable correlations, when there are more than 2 latent variables

Roy Levy

Apr 8, 2025, 12:20:07 AM
to blavaan
Hi blavaan team--

I've run into a problem when trying to specify beta priors for the correlations among more than 2 latent variables. This can be illustrated in the Holzinger & Swineford example used here:


The example there is:

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(1,1)")*textual '

When I run that model and look at the summary, it reports the prior for each of the three latent variable correlations as "lkj(1)". This is the case when specifying the model as above, or with several variations (e.g., each pairing of latent variables explicitly given a beta prior, or the use of dpriors(); see the sketch after the second example below). For completeness, running the code:

############################################
library(blavaan)
library(lavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(1,1)")*textual
'

bfit1 <- bcfa(
  HS.model,
  n.chains = 2,
  data = HolzingerSwineford1939
)

summary(bfit1)
############################################


gives a summary reporting that the lkj(1) prior was used for all latent variable correlations. This issue does not occur when there are only 2 latent variables. For example, the code:

############################################
HS.model.2 <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              #speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(4,1)")*textual
'

bfit2 <- bcfa(
  HS.model.2,
  n.chains = 2,
  data = HolzingerSwineford1939
)

summary(bfit2)
############################################

gives a summary reporting that the beta(4,1) prior was used.
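
For completeness, the variations I mentioned above were along these lines (the dp/dpriors() usage here is my reading of the documentation, so treat it as a sketch):

############################################
## variation 1: an explicit beta prior on each pairing
HS.model.v1 <- ' visual  =~ x1 + x2 + x3
                 textual =~ x4 + x5 + x6
                 speed   =~ x7 + x8 + x9
                 visual  ~~ prior("beta(1,1)")*textual
                 visual  ~~ prior("beta(1,1)")*speed
                 textual ~~ prior("beta(1,1)")*speed '

## variation 2: setting the default correlation prior via dpriors()
bfit.v2 <- bcfa(HS.model, n.chains = 2,
                data = HolzingerSwineford1939,
                dp = dpriors(rho = "beta(1,1)"))
############################################

In each case, summary() still reports lkj(1) for all three latent variable correlations.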

I'm confused by this. Ultimately, I would like to specify beta priors for the latent variable correlations when there are more than 2 latent variables, and I am unsure why that isn't happening here, or how to make it happen. Can you clarify?

My apologies if this has been addressed elsewhere and I have missed it.

Thanks,
Roy

Ed Merkle

Apr 8, 2025, 2:17:04 PM
to Roy Levy, blavaan
Hi Roy,

Thanks for this report... the example at that URL was written long ago, and the priors for covariance parameters are a bit in flux right now. Below is some more info, and we could use your feedback here.

I am worried that, when we have a 3x3 or larger covariance matrix, the individual beta priors lead to combinations of correlations that are not positive definite. So the actual priors being used are not exactly the stated beta distributions. This issue is described more in the "positive definite constraints" section here:


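To see the concern concretely, here is a quick simulation (just a sketch; it uses the fact that a 3x3 correlation matrix with all |r| < 1 is positive definite exactly when its determinant is positive, and it assumes the beta prior goes on (rho + 1)/2):

############################################
set.seed(1)
n <- 1e5
## three correlations drawn independently, each with beta(1,1) on
## (rho + 1)/2, i.e., uniform on (-1, 1)
r12 <- 2 * rbeta(n, 1, 1) - 1
r13 <- 2 * rbeta(n, 1, 1) - 1
r23 <- 2 * rbeta(n, 1, 1) - 1
## determinant of the implied 3x3 correlation matrix
detR <- 1 - r12^2 - r13^2 - r23^2 + 2 * r12 * r13 * r23
mean(detR <= 0)  ## roughly .38: over a third of the draws are not PD
############################################
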
But we still need to use those beta priors when the covariance matrix has some restrictions (say, some residual correlations free and others fixed to 0). So here is what currently happens:

1. If a model covariance matrix is unrestricted, use lkj on the correlations (with variances receiving separate priors).

2. If the covariance matrix can be permuted to be block diagonal with unrestricted blocks, use an lkj on each block. For more on that, see https://ecmerkle.github.io/cs/blockmat.html

3. If the covariance matrix (or some block) has parameter restrictions, revert to beta priors.


Now, where we could use your feedback: is there a reason to keep the beta priors for unrestricted covariance matrices? I guess it maybe gives you more control over the priors of individual parameters, but it also becomes more difficult to declare your actual priors. We could probably add an "override" switch here that goes back to beta priors...

Ed

Roy Levy

Apr 11, 2025, 8:50:57 AM
to blavaan
Hi Ed,

Thanks for the feedback, including the rundown of what currently happens and what you're concerned about. I share your worries about individual beta priors leading to combinations of correlations that are not positive definite; resolving such difficulties may lead to large discrepancies between the prior that is intended and the prior that is actually enacted, as you all discuss in that paper on opaque priors. I don't have a general solution; the only ideas I have are extremely computationally intensive, and I haven't pursued them.

To your question, there are two reasons I'd like to keep the beta priors for unrestricted covariance matrices. The first is that there are situations where I have different prior beliefs about different correlations, in terms of magnitudes and possibly levels of uncertainty (e.g., I think F1 and F2 are correlated about .7 and F1 and F3 about .2; or the same magnitudes, but I'm pretty sure about the correlation between F1 and F2 while being less certain about F1 and F3).
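
Just to sketch what I mean (with made-up numbers, and again assuming the beta prior is on (rho + 1)/2): rho near .7 means (rho + 1)/2 near .85, and rho near .2 means (rho + 1)/2 near .6, so those beliefs might translate to something like:

############################################
HS.model.3 <- ' F1 =~ x1 + x2 + x3
                F2 =~ x4 + x5 + x6
                F3 =~ x7 + x8 + x9
                F1 ~~ prior("beta(17,3)")*F2   # mean .85, fairly tight
                F1 ~~ prior("beta(3,2)")*F3    # mean .60, more diffuse
'
############################################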

The second reason has to do with things I've encountered from time to time, and it relates to the sign indeterminacy issues you discussed in the Opaque Priors paper. What I've found is that, even when the signs of the loadings are seemingly suitably constrained, I sometimes see multimodality in the posteriors of other parameters that makes me think it's related to the sign indeterminacy. This happens with latent variable correlation posteriors: the multimodality is symmetric around 0, in a way that suggests something akin to the rotational indeterminacy that usually manifests as symmetric positive and negative regions in the posterior for loadings. (The effect is subtle, and it's something I've been meaning to explore for years but haven't gotten around to.)

By putting priors on individual correlations, I could better constrain the situation to get only the "positive" correlation solution. So that's one thing I'm after with individual (beta) priors on individual correlations. I suppose it could also be done with an inverse Wishart, but that has its own conflicts (e.g., with fixed factor variances), and I confess I don't know whether that's available in blavaan for latent variable covariance matrices. It's possible some of this could be alleviated by a kind of relabeling algorithm. I believe such an algorithm is part of blavaan for loadings, but I sometimes see manifestations show up in latent variable correlations, and I don't know whether such an algorithm would work there (or whether I'm even right that it's an analogous situation).

FWIW, I'd love to have an "override" button to allow beta priors for individual correlations, understanding that it would be a "use at your own risk" tool. But I understand if that's just me :)

Thanks,
Roy


Ed Merkle

Apr 11, 2025, 9:08:08 AM
to blavaan
Thanks, I just added a GitHub issue about the override button. I suspect it will be straightforward to add, but sometimes things are not as straightforward as I expect.

About relabeling: yes, this can (and should) be extended to latent correlations and other types of parameters. This happens in blavaan's Stan model, near the start of the generated quantities block. Basically, if we flip the sign of a loading, then we also need to flip the correlations that the latent variable is involved in, as well as any regression relationships the lv is involved in, and the lv mean if it is free. The flipping of correlations and regressions is tricky because two latent variables are involved, so you could flip the sign once for the first lv and then back again for the second lv.
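
To illustrate the bookkeeping with a toy sketch (this is not blavaan's actual generated quantities code): if s[j] is -1 when latent variable j's loadings are flipped and 1 otherwise, then the relabeled correlation matrix is diag(s) %*% R %*% diag(s), which handles the double-flip case automatically.

############################################
## toy illustration of sign-flipping latent correlations
flip_corr <- function(R, s) diag(s) %*% R %*% diag(s)

R <- matrix(c( 1, .5, .3,
              .5,  1, .2,
              .3, .2,  1), 3, 3)
flip_corr(R, c(-1,  1, 1))  ## rho12 and rho13 flip; rho23 unchanged
flip_corr(R, c(-1, -1, 1))  ## rho12 flips twice (unchanged); rho13, rho23 flip
############################################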

Ed

Roy Levy

Apr 17, 2025, 6:31:53 PM
to blavaan
Thanks, Ed.

Regarding relabeling: I see the complications with multiple latent variables. The situation I'm finding is a little more complicated, in that I see sign flipping of certain parameters (latent variable correlations) but not of other parameters that I would think ought to also flip (loadings). I suspect it's related to sign indeterminacies, but I confess I haven't put the energy into figuring it all out yet.

Appreciate the discussion,
Roy
