Beta prior on latent variable correlations, when there are more than 2 latent variables


Roy Levy

Apr 8, 2025, 12:20:07 AM
to blavaan
Hi blavaan team--

I've run into a problem when trying to specify beta priors for the correlations among more than 2 latent variables. This can be illustrated in the Holzinger & Swineford example used here:


The example there is

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(1,1)")*textual '

When I run that model and look at the summary, it reports the prior for each of the three latent variable correlations as "lkj(1)". This is the case whether the model is specified as above or with several variations (e.g., each pairing of latent variables explicitly given a beta prior, or the use of dpriors()). For completeness, running the code:

############################################
library(blavaan)
library(lavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(1,1)")*textual
'

bfit1 <- bcfa(
  HS.model,
  n.chains = 2,
  data = HolzingerSwineford1939
)

summary(bfit1)
############################################


gives a summary reporting that the lkj(1) prior was used for all latent variable correlations. This issue does not occur with only 2 latent variables. For example, the code:

############################################
HS.model.2 <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              #speed   =~ x7 + x8 + x9
              visual ~~ prior("beta(4,1)")*textual
'

bfit2 <- bcfa(
  HS.model.2,
  n.chains = 2,
  data = HolzingerSwineford1939
)

summary(bfit2)
############################################

gives a summary reporting that the beta(4,1) prior was used.
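For completeness, the dpriors() variation mentioned above was along these lines (a sketch of my call; argument names per my reading of ?dpriors):

```r
library(blavaan)

# Sketch of the dpriors() variation: set a default beta prior on all
# correlation parameters via the dp argument (see ?dpriors)
bfit1b <- bcfa(
  HS.model,
  n.chains = 2,
  data = HolzingerSwineford1939,
  dp = dpriors(rho = "beta(1,1)")
)
summary(bfit1b)  # still reports lkj(1) for the latent correlations
```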

I'm confused by this. Ultimately I would like to specify beta priors for the latent variable correlations when there are more than 2 latent variables, and am unsure why that's not enacted here, or how to enact it. Can you clarify? 

My apologies if this has been addressed elsewhere and I have missed it.

Thanks,
Roy

Ed Merkle

Apr 8, 2025, 2:17:04 PM
to Roy Levy, blavaan
Hi Roy,

Thanks for this report... the example at that URL was written long ago, and the priors for covariance parameters are a bit in flux right now. Below is some more info, and we could use your feedback here.

I am worried that, when we have a 3x3 or larger covariance matrix, the individual beta priors lead to combinations of correlations that are not positive definite. So the actual priors being used are not exactly the stated beta distributions. This issue is described more in the "positive definite constraints" section here:


But we still need to use those beta priors when the covariance matrix has some restrictions (say, some residual correlations free and others fixed to 0). So here is what currently happens:

1. If a model covariance matrix is unrestricted, use lkj on the correlations (with variances receiving separate priors).

2. If the covariance matrix can be permuted to be block diagonal with unrestricted blocks, use an lkj on each block. For more on that, see https://ecmerkle.github.io/cs/blockmat.html

3. If the covariance matrix (or some block) has parameter restrictions, revert to beta priors.
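As a small, hypothetical illustration of case 3: freeing a single residual covariance leaves the theta matrix restricted (one off-diagonal free, the rest fixed to 0), so it cannot receive an lkj prior and beta priors are used instead.

```r
# Hypothetical sketch of case 3: theta is restricted because only the
# x1 ~~ x4 residual covariance is free (others are fixed to 0 by default),
# so blavaan reverts to beta priors for that block
restricted.model <- ' visual  =~ x1 + x2 + x3
                      textual =~ x4 + x5 + x6
                      x1 ~~ x4 '
```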


Now, where we could use your feedback: is there a reason to keep the beta priors for unrestricted covariance matrices? I guess it maybe gives you more control over the priors on individual parameters, but it also becomes more difficult to declare your actual priors. We could probably add an "override" switch here that goes back to beta priors...

Ed


Roy Levy

Apr 11, 2025, 8:50:57 AM
to blavaan
Hi Ed,

Thanks for the feedback, including what currently happens and what you’re concerned about. I share your worries about individual beta priors leading to combinations of correlations that are not positive definite, and resolving such difficulties may lead to large discrepancies between the prior that is intended vs. the prior that is actually enacted, as you all discuss in that paper on opaque priors. I don’t have a general solution; the only ideas I have are extremely computationally intensive, and I haven’t pursued them.

To your question, there are two reasons I’d like to keep the beta priors for unrestricted covariance matrices. The first is that there are situations where I have different prior beliefs about different correlations, in terms of magnitudes and possibly levels of uncertainty (e.g., I think F1 and F2 are correlated about .7 and F1 and F3 about .2; or the same, but I’m pretty sure about the correlation between F1 and F2 while being less certain about F1 and F3).

The second reason has to do with things I’ve encountered from time to time, and has something to do with the sign indeterminacy issues you discussed in the opaque priors paper. What I’ve found is that, even when the signs of the loadings are seemingly suitably constrained, I sometimes see multimodality in the posteriors for other parameters that makes me think it’s related to the sign indeterminacy. This happens with latent variable correlation posteriors: the multimodality is symmetric around 0 in ways that suggest something akin to the rotational indeterminacy that usually manifests as symmetric positive and negative regions in the posteriors for loadings. (The effect is subtle, and is something I’ve been meaning to explore for years but haven’t gotten around to.) By putting priors on individual correlations I could better constrain the situation to get only the “positive” correlation solution. So that’s one thing I’m after with individual (beta) priors on individual correlations. I suppose it could also be done with an inverse-Wishart, but that has its own conflicts (e.g., with fixed factor variances), and I confess I don't know if that's available in blavaan for latent variable covariance matrices.

It’s possible some of this could be alleviated by a kind of relabeling algorithm. I believe such an algorithm is part of blavaan for loadings, but I sometimes see manifestations show up in latent variable correlations, and I don’t know whether such a relabeling algorithm would work there (or whether I’m even right that it’s an analogous situation).

FWIW, I'd love to have an "override" button to allow for beta priors for individual correlations, understanding that it would be a "use at your own risk" tool. But I understand if that's just me  :)

Thanks,
Roy


Ed Merkle

Apr 11, 2025, 9:08:08 AM
to blavaan
Thanks, I just added a GitHub issue about the override button. I suspect it will be straightforward to add, but sometimes things are not as straightforward as I expect.

About relabeling: yes, this can (and should) be extended to latent correlations and other types of parameters. This happens in blavaan's Stan model, near the start of the generated quantities block. Basically, if we flip the sign of a loading, then we also need to worry about flipping the correlations that the latent variable is involved in, as well as regression relationships that the lv is involved in and the lv mean if it is free. The flipping of correlations and regressions is tricky because two latent variables are involved, so you could flip the sign once for the first lv and then back again for the second lv.
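The flip-once-then-back behavior can be seen with a toy correlation matrix (a sketch, not blavaan's actual code): with a sign vector s, the relabeled matrix is diag(s) %*% R %*% diag(s), so a correlation between two flipped lvs flips twice and ends up unchanged.

```r
# Toy illustration of sign flipping (not blavaan's actual code):
# flipping lv 2 negates its correlations with lvs 1 and 3, while a
# correlation involving zero or two flipped lvs is unchanged
s <- c(1, -1, 1)
R <- matrix(c(1.0, 0.5, 0.2,
              0.5, 1.0, 0.3,
              0.2, 0.3, 1.0), nrow = 3)
R_flip <- diag(s) %*% R %*% diag(s)
R_flip[1, 2]  # -0.5: flipped once
R_flip[1, 3]  #  0.2: unchanged
```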

Ed

Roy Levy

Apr 17, 2025, 6:31:53 PM
to blavaan
Thanks Ed. 

Regarding relabeling: I see the complications with multiple latent variables. And the situation I'm finding is a little more complicated in that I see some sign flipping of certain parameters (latent variable correlations), but not other parameters that I would think ought to also flip (loadings). I have suspicions that it's related to sign indeterminacies, but confess I haven't put the energy into figuring it all out yet. 

Appreciate the discussion,
Roy

Marcus Waldman

Dec 19, 2025, 4:16:08 PM
to blavaan
Hi blavaan team,

Was hoping to get a few bits of clarification on these issues. I've got something akin to a parallel process latent growth curve model using a high(ish)-dimensional orthogonal polynomial basis. I'd like the "random effects" (factors) associated with the basis to be correlated. I was hoping to place an LKJ prior on the correlations among the factors, with a non-centered parameterization to avoid the funnel issue and improve sampling efficiency.

Is the LKJ prior the default? And is there an option for a non-centered parameterization?

Ed Merkle

Dec 19, 2025, 5:58:47 PM
to blavaan
Marcus,

For target="stan", blavaan will assign LKJ priors if it can find unrestricted blocks of covariance matrices. If it cannot find unrestricted blocks, then it uses beta priors (which are also influenced by positive definite constraints). Some details about how blavaan searches for unrestricted blocks are here:


And the specific priors used should be listed in the model's summary() output. If you send a model specification, I might be able to give more detail about what is happening.
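For checking, the priors attached to a fitted object can also be pulled out directly (a quick sketch, assuming a fitted object bfit):

```r
# Ways to see which priors were actually used (assumes a fitted
# blavaan object named bfit)
summary(bfit)            # priors appear alongside the estimates
blavInspect(bfit, "dp")  # the default prior settings
```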

About non-centered parameterization: there is no option for this, especially because the random effects are already marginalized out of blavaan's model likelihood. If there are problems with your model's convergence or efficiency, I might be able to say more by seeing the specific model.

Ed

Marcus Waldman

Dec 21, 2025, 11:18:40 AM
to Ed Merkle, blavaan
Thanks, Ed. I appreciate the information. The block diagonal search is really cool! 

I've pasted the model syntax below. I've simplified the model quite a bit so that all growth factors are uncorrelated. Just a little context: these are glucose and insulin measurements during an oral glucose tolerance test (a two-hour test with blood drawn every 30 minutes) at three different pregnancy periods, in an RCT for women diagnosed with gestational diabetes. These measurements are measured with error, with a coefficient of variation of CV = 3% for glucose and as high as CV = 15% for insulin. (Measurement error is heteroskedastic on the raw scale but roughly homoskedastic on the log scale, and is reported as a percentage of the "true score" in this field.) It is standard practice in the health sciences to ignore measurement error when conducting downstream analyses, and I'm trying to convince them that they shouldn't, for all the reasons we don't in SEM.
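Since the CVs are known, the log-scale error variances they imply could in principle be fixed rather than estimated (a sketch of one idea, not something from the thread; for a lognormal variable, CV^2 = exp(sigma^2) - 1, so sigma^2 = log(1 + CV^2)):

```r
# Sketch: log-scale error variance implied by a known CV, which could be
# used to fix disturbance variances (e.g., theta_glu) instead of
# estimating them. For a lognormal variable, CV^2 = exp(sigma^2) - 1.
cv_glu <- 0.03
cv_ins <- 0.15
theta_glu_fixed <- log(1 + cv_glu^2)  # ~ cv^2 for small cv
theta_ins_fixed <- log(1 + cv_ins^2)
```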

I've successfully fit an equivalent multivariate multilevel model in "long format" to these data in brms. The model below fits without convergence issues; chains mix well and show adequate sampling efficiency:
# Define formulas with AR1 autocorrelation structure
# (arma terms currently commented out)
bf_glucose0_ar1 = brms::bf(
  log(glucose0) | mi() ~ treat*poly(t,4) + HOMA2B0*poly(t,4) + HOMA2S0*poly(t,4) + A1C0*poly(t,4) + IBI0*poly(t,4) + GBI0*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

bf_insulin0_ar1 = brms::bf(
  log(insulin0) | mi() ~ treat*poly(t,4) + HOMA2B0*poly(t,4) + HOMA2S0*poly(t,4) + A1C0*poly(t,4) + IBI0*poly(t,4) + GBI0*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

bf_glucose1_ar1 = brms::bf(
  log(glucose1) | mi() ~ treat*poly(t,4) + HOMA2B1*poly(t,4) + HOMA2S1*poly(t,4) + A1C0*poly(t,4) + IBI1*poly(t,4) + GBI1*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

bf_insulin1_ar1 = brms::bf(
  log(insulin1) | mi() ~ treat*poly(t,4) + HOMA2B1*poly(t,4) + HOMA2S1*poly(t,4) + A1C0*poly(t,4) + IBI1*poly(t,4) + GBI1*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

bf_glucose2_ar1 = brms::bf(
  log(glucose2) | mi() ~ treat*poly(t,4) + HOMA2B2*poly(t,4) + HOMA2S2*poly(t,4) + A1C2*poly(t,4) + IBI2*poly(t,4) + GBI2*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

bf_insulin2_ar1 = brms::bf(
  log(insulin2) | mi() ~ treat*poly(t,4) + HOMA2B2*poly(t,4) + HOMA2S2*poly(t,4) + A1C2*poly(t,4) + IBI2*poly(t,4) + GBI2*poly(t,4) + (1 + poly(t,4)|a|gr(PTID, by = treat)), # + arma(time = time_numeric, gr = PTID, p = 1, q = 0),
  decomp = "QR"
)

# Combine into multivariate formula
mvform_ar1 = mvbrmsformula(
  bf_glucose0_ar1, bf_insulin0_ar1,
  bf_glucose1_ar1, bf_insulin1_ar1,
  bf_glucose2_ar1, bf_insulin2_ar1
) + set_rescor(TRUE)


I would like to move over to the SEM "wide" format for its greater flexibility to jointly model time-invariant measurements like free fatty acids or triglyceride levels, and to begin a study on formalizing the process of constructing "measurement models" of blood plasma samples, including evaluation using goodness-of-fit statistics like the BCFI. Below is a highly simplified growth curve model (no predictors, and the "growth factors" for the Legendre basis are all assumed independent). I'm running into issues with both the marginal likelihood and joint likelihood specifications:
- For the joint likelihood ("stanclassic"): a lot of convergence issues/divergent transitions, even if I set adapt_delta to .99
- For the marginal likelihood: exceptions like the following get thrown: Exception: mdivide_left_spd: A is not symmetric. A[1,3] = 273247, but A[3,1] = 273247

For the marginal likelihood, this exception gets thrown on about 1% of samples and seems to be confined to the disturbance variances. The problem is that it leads to NA values for those parameters at those samples, so I can't monitor things like the n_eff, etc. (I get a "no sample return".) My best guess is that mdivide_left_spd is a matrix inversion of the model-implied variance-covariance matrix used in the trace term, and that numeric overflow is occurring, but I would love your thoughts.

My question about a non-centered parameterization was geared toward the joint likelihood, where the factor variances are pretty small for the higher-order terms--hence the funneling issue.

Of course, the natural question is whether this model is overparameterized--and yes it is. But there are apparently physiological reasons for a quartic polynomial: to model biphasic trajectories, where there is more than one peak in insulin and/or glucose. (...and I do get it to fit in brms.)


# Latent growth factors with Legendre polynomial basis
f_glu0_intercept =~ 1*GLU00_0 + 1*GLU0_0 + 1*GLU30_0 + 1*GLU60_0 + 1*GLU90_0 + 1*GLU120_0
f_glu0_linear =~ -1*GLU00_0 + -1*GLU0_0 + -0.5*GLU30_0 + 0*GLU60_0 + 0.5*GLU90_0 + 1*GLU120_0
f_glu0_quadratic =~ 1*GLU00_0 + 1*GLU0_0 + -0.125*GLU30_0 + -0.5*GLU60_0 + -0.125*GLU90_0 + 1*GLU120_0
f_glu0_cubic =~ -1*GLU00_0 + -1*GLU0_0 + 0.4375*GLU30_0 + 0*GLU60_0 + -0.4375*GLU90_0 + 1*GLU120_0
f_glu0_quartic =~ 1*GLU00_0 + 1*GLU0_0 + -0.289062*GLU30_0 + 0.375*GLU60_0 + -0.289062*GLU90_0 + 1*GLU120_0

# Factor means
f_glu0_intercept ~ 1
f_glu0_linear ~ 1
f_glu0_quadratic ~ 1
f_glu0_cubic ~ 1
f_glu0_quartic ~ 1

# Factor variances
f_glu0_intercept ~~ f_glu0_intercept
f_glu0_linear ~~ f_glu0_linear
f_glu0_quadratic ~~ f_glu0_quadratic
f_glu0_cubic ~~ f_glu0_cubic
f_glu0_quartic ~~ f_glu0_quartic

# Factor covariances (set to zero for independent factors)
f_glu0_intercept ~~ 0*f_glu0_linear
f_glu0_intercept ~~ 0*f_glu0_quadratic
f_glu0_intercept ~~ 0*f_glu0_cubic
f_glu0_intercept ~~ 0*f_glu0_quartic
f_glu0_linear ~~ 0*f_glu0_quadratic
f_glu0_linear ~~ 0*f_glu0_cubic
f_glu0_linear ~~ 0*f_glu0_quartic
f_glu0_quadratic ~~ 0*f_glu0_cubic
f_glu0_quadratic ~~ 0*f_glu0_quartic
f_glu0_cubic ~~ 0*f_glu0_quartic

# Disturbances
GLU00_0 ~~ theta_glu*GLU00_0
GLU0_0 ~~ theta_glu*GLU0_0
GLU30_0 ~~ theta_glu*GLU30_0
GLU60_0 ~~ theta_glu*GLU60_0
GLU90_0 ~~ theta_glu*GLU90_0
GLU120_0 ~~ theta_glu*GLU120_0

# Indicator means (fixed to 0)
GLU00_0 ~ 0*1
GLU0_0 ~ 0*1
GLU30_0 ~ 0*1
GLU60_0 ~ 0*1
GLU90_0 ~ 0*1
GLU120_0 ~ 0*1



# Latent growth factors with Legendre polynomial basis
f_ins0_intercept =~ 1*INS00_0 + 1*INS0_0 + 1*INS30_0 + 1*INS60_0 + 1*INS90_0 + 1*INS120_0
f_ins0_linear =~ -1*INS00_0 + -1*INS0_0 + -0.5*INS30_0 + 0*INS60_0 + 0.5*INS90_0 + 1*INS120_0
f_ins0_quadratic =~ 1*INS00_0 + 1*INS0_0 + -0.125*INS30_0 + -0.5*INS60_0 + -0.125*INS90_0 + 1*INS120_0
f_ins0_cubic =~ -1*INS00_0 + -1*INS0_0 + 0.4375*INS30_0 + 0*INS60_0 + -0.4375*INS90_0 + 1*INS120_0
f_ins0_quartic =~ 1*INS00_0 + 1*INS0_0 + -0.289062*INS30_0 + 0.375*INS60_0 + -0.289062*INS90_0 + 1*INS120_0

# Factor means
f_ins0_intercept ~ 1
f_ins0_linear ~ 1
f_ins0_quadratic ~ 1
f_ins0_cubic ~ 1
f_ins0_quartic ~ 1

# Factor variances
f_ins0_intercept ~~ f_ins0_intercept
f_ins0_linear ~~ f_ins0_linear
f_ins0_quadratic ~~ f_ins0_quadratic
f_ins0_cubic ~~ f_ins0_cubic
f_ins0_quartic ~~ f_ins0_quartic

# Factor covariances (set to zero for independent factors)
f_ins0_intercept ~~ 0*f_ins0_linear
f_ins0_intercept ~~ 0*f_ins0_quadratic
f_ins0_intercept ~~ 0*f_ins0_cubic
f_ins0_intercept ~~ 0*f_ins0_quartic
f_ins0_linear ~~ 0*f_ins0_quadratic
f_ins0_linear ~~ 0*f_ins0_cubic
f_ins0_linear ~~ 0*f_ins0_quartic
f_ins0_quadratic ~~ 0*f_ins0_cubic
f_ins0_quadratic ~~ 0*f_ins0_quartic
f_ins0_cubic ~~ 0*f_ins0_quartic

# Disturbances
INS00_0 ~~ theta_ins*INS00_0
INS0_0 ~~ theta_ins*INS0_0
INS30_0 ~~ theta_ins*INS30_0
INS60_0 ~~ theta_ins*INS60_0
INS90_0 ~~ theta_ins*INS90_0
INS120_0 ~~ theta_ins*INS120_0

# Indicator means (fixed to 0)
INS00_0 ~ 0*1
INS0_0 ~ 0*1
INS30_0 ~ 0*1
INS60_0 ~ 0*1
INS90_0 ~ 0*1
INS120_0 ~ 0*1



# Latent growth factors with Legendre polynomial basis
f_glu1_intercept =~ 1*GLU00_1 + 1*GLU0_1 + 1*GLU30_1 + 1*GLU60_1 + 1*GLU90_1 + 1*GLU120_1
f_glu1_linear =~ -1*GLU00_1 + -1*GLU0_1 + -0.5*GLU30_1 + 0*GLU60_1 + 0.5*GLU90_1 + 1*GLU120_1
f_glu1_quadratic =~ 1*GLU00_1 + 1*GLU0_1 + -0.125*GLU30_1 + -0.5*GLU60_1 + -0.125*GLU90_1 + 1*GLU120_1
f_glu1_cubic =~ -1*GLU00_1 + -1*GLU0_1 + 0.4375*GLU30_1 + 0*GLU60_1 + -0.4375*GLU90_1 + 1*GLU120_1
f_glu1_quartic =~ 1*GLU00_1 + 1*GLU0_1 + -0.289062*GLU30_1 + 0.375*GLU60_1 + -0.289062*GLU90_1 + 1*GLU120_1

# Factor means
f_glu1_intercept ~ 1
f_glu1_linear ~ 1
f_glu1_quadratic ~ 1
f_glu1_cubic ~ 1
f_glu1_quartic ~ 1

# Factor variances
f_glu1_intercept ~~ f_glu1_intercept
f_glu1_linear ~~ f_glu1_linear
f_glu1_quadratic ~~ f_glu1_quadratic
f_glu1_cubic ~~ f_glu1_cubic
f_glu1_quartic ~~ f_glu1_quartic

# Factor covariances (set to zero for independent factors)
f_glu1_intercept ~~ 0*f_glu1_linear
f_glu1_intercept ~~ 0*f_glu1_quadratic
f_glu1_intercept ~~ 0*f_glu1_cubic
f_glu1_intercept ~~ 0*f_glu1_quartic
f_glu1_linear ~~ 0*f_glu1_quadratic
f_glu1_linear ~~ 0*f_glu1_cubic
f_glu1_linear ~~ 0*f_glu1_quartic
f_glu1_quadratic ~~ 0*f_glu1_cubic
f_glu1_quadratic ~~ 0*f_glu1_quartic
f_glu1_cubic ~~ 0*f_glu1_quartic

# Disturbances
GLU00_1 ~~ theta_glu*GLU00_1
GLU0_1 ~~ theta_glu*GLU0_1
GLU30_1 ~~ theta_glu*GLU30_1
GLU60_1 ~~ theta_glu*GLU60_1
GLU90_1 ~~ theta_glu*GLU90_1
GLU120_1 ~~ theta_glu*GLU120_1

# Indicator means (fixed to 0)
GLU00_1 ~ 0*1
GLU0_1 ~ 0*1
GLU30_1 ~ 0*1
GLU60_1 ~ 0*1
GLU90_1 ~ 0*1
GLU120_1 ~ 0*1



# Latent growth factors with Legendre polynomial basis
f_ins1_intercept =~ 1*INS00_1 + 1*INS0_1 + 1*INS30_1 + 1*INS60_1 + 1*INS90_1 + 1*INS120_1
f_ins1_linear =~ -1*INS00_1 + -1*INS0_1 + -0.5*INS30_1 + 0*INS60_1 + 0.5*INS90_1 + 1*INS120_1
f_ins1_quadratic =~ 1*INS00_1 + 1*INS0_1 + -0.125*INS30_1 + -0.5*INS60_1 + -0.125*INS90_1 + 1*INS120_1
f_ins1_cubic =~ -1*INS00_1 + -1*INS0_1 + 0.4375*INS30_1 + 0*INS60_1 + -0.4375*INS90_1 + 1*INS120_1
f_ins1_quartic =~ 1*INS00_1 + 1*INS0_1 + -0.289062*INS30_1 + 0.375*INS60_1 + -0.289062*INS90_1 + 1*INS120_1

# Factor means
f_ins1_intercept ~ 1
f_ins1_linear ~ 1
f_ins1_quadratic ~ 1
f_ins1_cubic ~ 1
f_ins1_quartic ~ 1

# Factor variances
f_ins1_intercept ~~ f_ins1_intercept
f_ins1_linear ~~ f_ins1_linear
f_ins1_quadratic ~~ f_ins1_quadratic
f_ins1_cubic ~~ f_ins1_cubic
f_ins1_quartic ~~ f_ins1_quartic

# Factor covariances (set to zero for independent factors)
f_ins1_intercept ~~ 0*f_ins1_linear
f_ins1_intercept ~~ 0*f_ins1_quadratic
f_ins1_intercept ~~ 0*f_ins1_cubic
f_ins1_intercept ~~ 0*f_ins1_quartic
f_ins1_linear ~~ 0*f_ins1_quadratic
f_ins1_linear ~~ 0*f_ins1_cubic
f_ins1_linear ~~ 0*f_ins1_quartic
f_ins1_quadratic ~~ 0*f_ins1_cubic
f_ins1_quadratic ~~ 0*f_ins1_quartic
f_ins1_cubic ~~ 0*f_ins1_quartic

# Disturbances
INS00_1 ~~ theta_ins*INS00_1
INS0_1 ~~ theta_ins*INS0_1
INS30_1 ~~ theta_ins*INS30_1
INS60_1 ~~ theta_ins*INS60_1
INS90_1 ~~ theta_ins*INS90_1
INS120_1 ~~ theta_ins*INS120_1

# Indicator means (fixed to 0)
INS00_1 ~ 0*1
INS0_1 ~ 0*1
INS30_1 ~ 0*1
INS60_1 ~ 0*1
INS90_1 ~ 0*1
INS120_1 ~ 0*1



# Latent growth factors with Legendre polynomial basis
f_glu2_intercept =~ 1*GLU00_2 + 1*GLU0_2 + 1*GLU30_2 + 1*GLU60_2 + 1*GLU90_2 + 1*GLU120_2
f_glu2_linear =~ -1*GLU00_2 + -1*GLU0_2 + -0.5*GLU30_2 + 0*GLU60_2 + 0.5*GLU90_2 + 1*GLU120_2
f_glu2_quadratic =~ 1*GLU00_2 + 1*GLU0_2 + -0.125*GLU30_2 + -0.5*GLU60_2 + -0.125*GLU90_2 + 1*GLU120_2
f_glu2_cubic =~ -1*GLU00_2 + -1*GLU0_2 + 0.4375*GLU30_2 + 0*GLU60_2 + -0.4375*GLU90_2 + 1*GLU120_2
f_glu2_quartic =~ 1*GLU00_2 + 1*GLU0_2 + -0.289062*GLU30_2 + 0.375*GLU60_2 + -0.289062*GLU90_2 + 1*GLU120_2

# Factor means
f_glu2_intercept ~ 1
f_glu2_linear ~ 1
f_glu2_quadratic ~ 1
f_glu2_cubic ~ 1
f_glu2_quartic ~ 1

# Factor variances
f_glu2_intercept ~~ f_glu2_intercept
f_glu2_linear ~~ f_glu2_linear
f_glu2_quadratic ~~ f_glu2_quadratic
f_glu2_cubic ~~ f_glu2_cubic
f_glu2_quartic ~~ f_glu2_quartic

# Factor covariances (set to zero for independent factors)
f_glu2_intercept ~~ 0*f_glu2_linear
f_glu2_intercept ~~ 0*f_glu2_quadratic
f_glu2_intercept ~~ 0*f_glu2_cubic
f_glu2_intercept ~~ 0*f_glu2_quartic
f_glu2_linear ~~ 0*f_glu2_quadratic
f_glu2_linear ~~ 0*f_glu2_cubic
f_glu2_linear ~~ 0*f_glu2_quartic
f_glu2_quadratic ~~ 0*f_glu2_cubic
f_glu2_quadratic ~~ 0*f_glu2_quartic
f_glu2_cubic ~~ 0*f_glu2_quartic

# Disturbances
GLU00_2 ~~ theta_glu*GLU00_2
GLU0_2 ~~ theta_glu*GLU0_2
GLU30_2 ~~ theta_glu*GLU30_2
GLU60_2 ~~ theta_glu*GLU60_2
GLU90_2 ~~ theta_glu*GLU90_2
GLU120_2 ~~ theta_glu*GLU120_2

# Indicator means (fixed to 0)
GLU00_2 ~ 0*1
GLU0_2 ~ 0*1
GLU30_2 ~ 0*1
GLU60_2 ~ 0*1
GLU90_2 ~ 0*1
GLU120_2 ~ 0*1



# Latent growth factors with Legendre polynomial basis
f_ins2_intercept =~ 1*INS00_2 + 1*INS0_2 + 1*INS30_2 + 1*INS60_2 + 1*INS90_2 + 1*INS120_2
f_ins2_linear =~ -1*INS00_2 + -1*INS0_2 + -0.5*INS30_2 + 0*INS60_2 + 0.5*INS90_2 + 1*INS120_2
f_ins2_quadratic =~ 1*INS00_2 + 1*INS0_2 + -0.125*INS30_2 + -0.5*INS60_2 + -0.125*INS90_2 + 1*INS120_2
f_ins2_cubic =~ -1*INS00_2 + -1*INS0_2 + 0.4375*INS30_2 + 0*INS60_2 + -0.4375*INS90_2 + 1*INS120_2
f_ins2_quartic =~ 1*INS00_2 + 1*INS0_2 + -0.289062*INS30_2 + 0.375*INS60_2 + -0.289062*INS90_2 + 1*INS120_2

# Factor means
f_ins2_intercept ~ 1
f_ins2_linear ~ 1
f_ins2_quadratic ~ 1
f_ins2_cubic ~ 1
f_ins2_quartic ~ 1

# Factor variances
f_ins2_intercept ~~ f_ins2_intercept
f_ins2_linear ~~ f_ins2_linear
f_ins2_quadratic ~~ f_ins2_quadratic
f_ins2_cubic ~~ f_ins2_cubic
f_ins2_quartic ~~ f_ins2_quartic

# Factor covariances (set to zero for independent factors)
f_ins2_intercept ~~ 0*f_ins2_linear
f_ins2_intercept ~~ 0*f_ins2_quadratic
f_ins2_intercept ~~ 0*f_ins2_cubic
f_ins2_intercept ~~ 0*f_ins2_quartic
f_ins2_linear ~~ 0*f_ins2_quadratic
f_ins2_linear ~~ 0*f_ins2_cubic
f_ins2_linear ~~ 0*f_ins2_quartic
f_ins2_quadratic ~~ 0*f_ins2_cubic
f_ins2_quadratic ~~ 0*f_ins2_quartic
f_ins2_cubic ~~ 0*f_ins2_quartic

# Disturbances
INS00_2 ~~ theta_ins*INS00_2
INS0_2 ~~ theta_ins*INS0_2
INS30_2 ~~ theta_ins*INS30_2
INS60_2 ~~ theta_ins*INS60_2
INS90_2 ~~ theta_ins*INS90_2
INS120_2 ~~ theta_ins*INS120_2

# Indicator means (fixed to 0)
INS00_2 ~ 0*1
INS0_2 ~ 0*1
INS30_2 ~ 0*1
INS60_2 ~ 0*1
INS90_2 ~ 0*1
INS120_2 ~ 0*1


# === Factor Covariances: Glucose with Insulin (Independence (all set to zero)) ===


# All cross-variable factor covariances constrained to zero
  f_glu0_intercept ~~ 0*f_ins0_intercept
  f_glu0_intercept ~~ 0*f_ins0_linear
  f_glu0_intercept ~~ 0*f_ins0_quadratic
  f_glu0_intercept ~~ 0*f_ins0_cubic
  f_glu0_intercept ~~ 0*f_ins0_quartic
  f_glu0_intercept ~~ 0*f_ins1_intercept
  f_glu0_intercept ~~ 0*f_ins1_linear
  f_glu0_intercept ~~ 0*f_ins1_quadratic
  f_glu0_intercept ~~ 0*f_ins1_cubic
  f_glu0_intercept ~~ 0*f_ins1_quartic
  f_glu0_intercept ~~ 0*f_ins2_intercept
  f_glu0_intercept ~~ 0*f_ins2_linear
  f_glu0_intercept ~~ 0*f_ins2_quadratic
  f_glu0_intercept ~~ 0*f_ins2_cubic
  f_glu0_intercept ~~ 0*f_ins2_quartic
  f_glu0_linear ~~ 0*f_ins0_intercept
  f_glu0_linear ~~ 0*f_ins0_linear
  f_glu0_linear ~~ 0*f_ins0_quadratic
  f_glu0_linear ~~ 0*f_ins0_cubic
  f_glu0_linear ~~ 0*f_ins0_quartic
  f_glu0_linear ~~ 0*f_ins1_intercept
  f_glu0_linear ~~ 0*f_ins1_linear
  f_glu0_linear ~~ 0*f_ins1_quadratic
  f_glu0_linear ~~ 0*f_ins1_cubic
  f_glu0_linear ~~ 0*f_ins1_quartic
  f_glu0_linear ~~ 0*f_ins2_intercept
  f_glu0_linear ~~ 0*f_ins2_linear
  f_glu0_linear ~~ 0*f_ins2_quadratic
  f_glu0_linear ~~ 0*f_ins2_cubic
  f_glu0_linear ~~ 0*f_ins2_quartic
  f_glu0_quadratic ~~ 0*f_ins0_intercept
  f_glu0_quadratic ~~ 0*f_ins0_linear
  f_glu0_quadratic ~~ 0*f_ins0_quadratic
  f_glu0_quadratic ~~ 0*f_ins0_cubic
  f_glu0_quadratic ~~ 0*f_ins0_quartic
  f_glu0_quadratic ~~ 0*f_ins1_intercept
  f_glu0_quadratic ~~ 0*f_ins1_linear
  f_glu0_quadratic ~~ 0*f_ins1_quadratic
  f_glu0_quadratic ~~ 0*f_ins1_cubic
  f_glu0_quadratic ~~ 0*f_ins1_quartic
  f_glu0_quadratic ~~ 0*f_ins2_intercept
  f_glu0_quadratic ~~ 0*f_ins2_linear
  f_glu0_quadratic ~~ 0*f_ins2_quadratic
  f_glu0_quadratic ~~ 0*f_ins2_cubic
  f_glu0_quadratic ~~ 0*f_ins2_quartic
  f_glu0_cubic ~~ 0*f_ins0_intercept
  f_glu0_cubic ~~ 0*f_ins0_linear
  f_glu0_cubic ~~ 0*f_ins0_quadratic
  f_glu0_cubic ~~ 0*f_ins0_cubic
  f_glu0_cubic ~~ 0*f_ins0_quartic
  f_glu0_cubic ~~ 0*f_ins1_intercept
  f_glu0_cubic ~~ 0*f_ins1_linear
  f_glu0_cubic ~~ 0*f_ins1_quadratic
  f_glu0_cubic ~~ 0*f_ins1_cubic
  f_glu0_cubic ~~ 0*f_ins1_quartic
  f_glu0_cubic ~~ 0*f_ins2_intercept
  f_glu0_cubic ~~ 0*f_ins2_linear
  f_glu0_cubic ~~ 0*f_ins2_quadratic
  f_glu0_cubic ~~ 0*f_ins2_cubic
  f_glu0_cubic ~~ 0*f_ins2_quartic
  f_glu0_quartic ~~ 0*f_ins0_intercept
  f_glu0_quartic ~~ 0*f_ins0_linear
  f_glu0_quartic ~~ 0*f_ins0_quadratic
  f_glu0_quartic ~~ 0*f_ins0_cubic
  f_glu0_quartic ~~ 0*f_ins0_quartic
  f_glu0_quartic ~~ 0*f_ins1_intercept
  f_glu0_quartic ~~ 0*f_ins1_linear
  f_glu0_quartic ~~ 0*f_ins1_quadratic
  f_glu0_quartic ~~ 0*f_ins1_cubic
  f_glu0_quartic ~~ 0*f_ins1_quartic
  f_glu0_quartic ~~ 0*f_ins2_intercept
  f_glu0_quartic ~~ 0*f_ins2_linear
  f_glu0_quartic ~~ 0*f_ins2_quadratic
  f_glu0_quartic ~~ 0*f_ins2_cubic
  f_glu0_quartic ~~ 0*f_ins2_quartic
  f_glu1_intercept ~~ 0*f_ins0_intercept
  f_glu1_intercept ~~ 0*f_ins0_linear
  f_glu1_intercept ~~ 0*f_ins0_quadratic
  f_glu1_intercept ~~ 0*f_ins0_cubic
  f_glu1_intercept ~~ 0*f_ins0_quartic
  f_glu1_intercept ~~ 0*f_ins1_intercept
  f_glu1_intercept ~~ 0*f_ins1_linear
  f_glu1_intercept ~~ 0*f_ins1_quadratic
  f_glu1_intercept ~~ 0*f_ins1_cubic
  f_glu1_intercept ~~ 0*f_ins1_quartic
  f_glu1_intercept ~~ 0*f_ins2_intercept
  f_glu1_intercept ~~ 0*f_ins2_linear
  f_glu1_intercept ~~ 0*f_ins2_quadratic
  f_glu1_intercept ~~ 0*f_ins2_cubic
  f_glu1_intercept ~~ 0*f_ins2_quartic
  f_glu1_linear ~~ 0*f_ins0_intercept
  f_glu1_linear ~~ 0*f_ins0_linear
  f_glu1_linear ~~ 0*f_ins0_quadratic
  f_glu1_linear ~~ 0*f_ins0_cubic
  f_glu1_linear ~~ 0*f_ins0_quartic
  f_glu1_linear ~~ 0*f_ins1_intercept
  f_glu1_linear ~~ 0*f_ins1_linear
  f_glu1_linear ~~ 0*f_ins1_quadratic
  f_glu1_linear ~~ 0*f_ins1_cubic
  f_glu1_linear ~~ 0*f_ins1_quartic
  f_glu1_linear ~~ 0*f_ins2_intercept
  f_glu1_linear ~~ 0*f_ins2_linear
  f_glu1_linear ~~ 0*f_ins2_quadratic
  f_glu1_linear ~~ 0*f_ins2_cubic
  f_glu1_linear ~~ 0*f_ins2_quartic
  f_glu1_quadratic ~~ 0*f_ins0_intercept
  f_glu1_quadratic ~~ 0*f_ins0_linear
  f_glu1_quadratic ~~ 0*f_ins0_quadratic
  f_glu1_quadratic ~~ 0*f_ins0_cubic
  f_glu1_quadratic ~~ 0*f_ins0_quartic
  f_glu1_quadratic ~~ 0*f_ins1_intercept
  f_glu1_quadratic ~~ 0*f_ins1_linear
  f_glu1_quadratic ~~ 0*f_ins1_quadratic
  f_glu1_quadratic ~~ 0*f_ins1_cubic
  f_glu1_quadratic ~~ 0*f_ins1_quartic
  f_glu1_quadratic ~~ 0*f_ins2_intercept
  f_glu1_quadratic ~~ 0*f_ins2_linear
  f_glu1_quadratic ~~ 0*f_ins2_quadratic
  f_glu1_quadratic ~~ 0*f_ins2_cubic
  f_glu1_quadratic ~~ 0*f_ins2_quartic
  f_glu1_cubic ~~ 0*f_ins0_intercept
  f_glu1_cubic ~~ 0*f_ins0_linear
  f_glu1_cubic ~~ 0*f_ins0_quadratic
  f_glu1_cubic ~~ 0*f_ins0_cubic
  f_glu1_cubic ~~ 0*f_ins0_quartic
  f_glu1_cubic ~~ 0*f_ins1_intercept
  f_glu1_cubic ~~ 0*f_ins1_linear
  f_glu1_cubic ~~ 0*f_ins1_quadratic
  f_glu1_cubic ~~ 0*f_ins1_cubic
  f_glu1_cubic ~~ 0*f_ins1_quartic
  f_glu1_cubic ~~ 0*f_ins2_intercept
  f_glu1_cubic ~~ 0*f_ins2_linear
  f_glu1_cubic ~~ 0*f_ins2_quadratic
  f_glu1_cubic ~~ 0*f_ins2_cubic
  f_glu1_cubic ~~ 0*f_ins2_quartic
  f_glu1_quartic ~~ 0*f_ins0_intercept
  f_glu1_quartic ~~ 0*f_ins0_linear
  f_glu1_quartic ~~ 0*f_ins0_quadratic
  f_glu1_quartic ~~ 0*f_ins0_cubic
  f_glu1_quartic ~~ 0*f_ins0_quartic
  f_glu1_quartic ~~ 0*f_ins1_intercept
  f_glu1_quartic ~~ 0*f_ins1_linear
  f_glu1_quartic ~~ 0*f_ins1_quadratic
  f_glu1_quartic ~~ 0*f_ins1_cubic
  f_glu1_quartic ~~ 0*f_ins1_quartic
  f_glu1_quartic ~~ 0*f_ins2_intercept
  f_glu1_quartic ~~ 0*f_ins2_linear
  f_glu1_quartic ~~ 0*f_ins2_quadratic
  f_glu1_quartic ~~ 0*f_ins2_cubic
  f_glu1_quartic ~~ 0*f_ins2_quartic
  f_glu2_intercept ~~ 0*f_ins0_intercept
  f_glu2_intercept ~~ 0*f_ins0_linear
  f_glu2_intercept ~~ 0*f_ins0_quadratic
  f_glu2_intercept ~~ 0*f_ins0_cubic
  f_glu2_intercept ~~ 0*f_ins0_quartic
  f_glu2_intercept ~~ 0*f_ins1_intercept
  f_glu2_intercept ~~ 0*f_ins1_linear
  f_glu2_intercept ~~ 0*f_ins1_quadratic
  f_glu2_intercept ~~ 0*f_ins1_cubic
  f_glu2_intercept ~~ 0*f_ins1_quartic
  f_glu2_intercept ~~ 0*f_ins2_intercept
  f_glu2_intercept ~~ 0*f_ins2_linear
  f_glu2_intercept ~~ 0*f_ins2_quadratic
  f_glu2_intercept ~~ 0*f_ins2_cubic
  f_glu2_intercept ~~ 0*f_ins2_quartic
  f_glu2_linear ~~ 0*f_ins0_intercept
  f_glu2_linear ~~ 0*f_ins0_linear
  f_glu2_linear ~~ 0*f_ins0_quadratic
  f_glu2_linear ~~ 0*f_ins0_cubic
  f_glu2_linear ~~ 0*f_ins0_quartic
  f_glu2_linear ~~ 0*f_ins1_intercept
  f_glu2_linear ~~ 0*f_ins1_linear
  f_glu2_linear ~~ 0*f_ins1_quadratic
  f_glu2_linear ~~ 0*f_ins1_cubic
  f_glu2_linear ~~ 0*f_ins1_quartic
  f_glu2_linear ~~ 0*f_ins2_intercept
  f_glu2_linear ~~ 0*f_ins2_linear
  f_glu2_linear ~~ 0*f_ins2_quadratic
  f_glu2_linear ~~ 0*f_ins2_cubic
  f_glu2_linear ~~ 0*f_ins2_quartic
  f_glu2_quadratic ~~ 0*f_ins0_intercept
  f_glu2_quadratic ~~ 0*f_ins0_linear
  f_glu2_quadratic ~~ 0*f_ins0_quadratic
  f_glu2_quadratic ~~ 0*f_ins0_cubic
  f_glu2_quadratic ~~ 0*f_ins0_quartic
  f_glu2_quadratic ~~ 0*f_ins1_intercept
  f_glu2_quadratic ~~ 0*f_ins1_linear
  f_glu2_quadratic ~~ 0*f_ins1_quadratic
  f_glu2_quadratic ~~ 0*f_ins1_cubic
  f_glu2_quadratic ~~ 0*f_ins1_quartic
  f_glu2_quadratic ~~ 0*f_ins2_intercept
  f_glu2_quadratic ~~ 0*f_ins2_linear
  f_glu2_quadratic ~~ 0*f_ins2_quadratic
  f_glu2_quadratic ~~ 0*f_ins2_cubic
  f_glu2_quadratic ~~ 0*f_ins2_quartic
  f_glu2_cubic ~~ 0*f_ins0_intercept
  f_glu2_cubic ~~ 0*f_ins0_linear
  f_glu2_cubic ~~ 0*f_ins0_quadratic
  f_glu2_cubic ~~ 0*f_ins0_cubic
  f_glu2_cubic ~~ 0*f_ins0_quartic
  f_glu2_cubic ~~ 0*f_ins1_intercept
  f_glu2_cubic ~~ 0*f_ins1_linear
  f_glu2_cubic ~~ 0*f_ins1_quadratic
  f_glu2_cubic ~~ 0*f_ins1_cubic
  f_glu2_cubic ~~ 0*f_ins1_quartic
  f_glu2_cubic ~~ 0*f_ins2_intercept
  f_glu2_cubic ~~ 0*f_ins2_linear
  f_glu2_cubic ~~ 0*f_ins2_quadratic
  f_glu2_cubic ~~ 0*f_ins2_cubic
  f_glu2_cubic ~~ 0*f_ins2_quartic
  f_glu2_quartic ~~ 0*f_ins0_intercept
  f_glu2_quartic ~~ 0*f_ins0_linear
  f_glu2_quartic ~~ 0*f_ins0_quadratic
  f_glu2_quartic ~~ 0*f_ins0_cubic
  f_glu2_quartic ~~ 0*f_ins0_quartic
  f_glu2_quartic ~~ 0*f_ins1_intercept
  f_glu2_quartic ~~ 0*f_ins1_linear
  f_glu2_quartic ~~ 0*f_ins1_quadratic
  f_glu2_quartic ~~ 0*f_ins1_cubic
  f_glu2_quartic ~~ 0*f_ins1_quartic
  f_glu2_quartic ~~ 0*f_ins2_intercept
  f_glu2_quartic ~~ 0*f_ins2_linear
  f_glu2_quartic ~~ 0*f_ins2_quadratic
  f_glu2_quartic ~~ 0*f_ins2_cubic
  f_glu2_quartic ~~ 0*f_ins2_quartic

# === Indicator Covariances: Residual correlations at same time points ===

# Baseline (31wk)
  GLU00_0 ~~ INS00_0
  GLU0_0 ~~ INS0_0
  GLU30_0 ~~ INS30_0
  GLU60_0 ~~ INS60_0
  GLU90_0 ~~ INS90_0
  GLU120_0 ~~ INS120_0

# Post-intervention (36wk)
  GLU00_1 ~~ INS00_1
  GLU0_1 ~~ INS0_1
  GLU30_1 ~~ INS30_1
  GLU60_1 ~~ INS60_1
  GLU90_1 ~~ INS90_1
  GLU120_1 ~~ INS120_1

# Postpartum (2mo)
  GLU00_2 ~~ INS00_2
  GLU0_2 ~~ INS0_2
  GLU30_2 ~~ INS30_2
  GLU60_2 ~~ INS60_2
  GLU90_2 ~~ INS90_2
  GLU120_2 ~~ INS120_2

Ed Merkle

Dec 22, 2025, 5:47:19 PM
to Marcus Waldman, blavaan
Wow, I think this is the longest (b)lavaan model specification I have ever seen! Here are some ideas:

- It is not documented well, but you can use the argument target = "cmdstan" if you have cmdstan installed on your system. cmdstan has improvements over rstan, but you will have to wait for the model to compile before it runs.

- If you haven't already, I would consider informative priors. I have seen situations where things go wrong when, say, a latent variance draw is really close to 0. Then other parameters potentially become unidentified, the draws get extreme, and the chain can't recover. I would put more information in the variance/standard deviation parameters, say gamma(5,5)[sd] or something else depending on your data, and then maybe beta(10,10) for rho. (This model produces a warning that the "theta covariance matrix is neither diagonal nor unrestricted", so I believe blavaan can't find unrestricted blocks here.) Some more info about priors is here:


- Consider using more of the lavaan arguments to fix things to 0, as opposed to putting it all in the model specification. Sometimes (though not always) blavaan can make use of those arguments to improve the model estimation. Looking at your model, you might be able to use:

int.ov.free = FALSE
int.lv.free = TRUE
auto.cov.lv.x = FALSE
auto.cov.y = FALSE

- Remove the residual correlations and see whether you still experience the exception. That would help narrow down the problem.
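Putting those suggestions together, the call might look something like this (a sketch only; growth.model and mydata are placeholders for your specification and data):

```r
library(blavaan)

# Hypothetical sketch combining the suggestions above
bfit <- bcfa(
  growth.model,              # placeholder: the long model specification string
  data = mydata,             # placeholder: your data frame
  target = "cmdstan",        # requires cmdstan installed on the system
  dp = dpriors(theta = "gamma(5,5)[sd]",
               psi   = "gamma(5,5)[sd]",
               rho   = "beta(10,10)"),
  int.ov.free = FALSE,       # observed-variable intercepts fixed to 0
  int.lv.free = TRUE,        # latent means free
  auto.cov.lv.x = FALSE,     # no automatic latent covariances
  auto.cov.y = FALSE         # no automatic residual covariances
)
```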


Ed