Specifying Priors for Latent Basis Growth Model Covariates


Kjorte Harra

Aug 7, 2023, 11:50:30 AM
to blavaan
Hello, 

I'm trying to construct a latent basis growth model with time-varying covariates, and I'd like to specify priors on these covariates. However, my current output does not reflect the prior specifications. Here is my current model syntax:

BayesNonLinGCM2ridge <- '
# intercept and slope with fixed coefficients
int =~ 1*Math03 + 1*Math05 + 1*Math07 + 1*Math09 + 1*Math11 + 1*Math13 + 1*Math15 + 1*Math17
slp =~ 0*Math03 + 2*Math05 + 4*Math07 + Math09 + Math11 + Math13 + Math15 + Math17

# time-varying covariates
Math03 ~ Reading03*prior("dnorm(0,1)")
Math05 ~ Reading05*prior("dnorm(0,1)")
Math07 ~ Reading07*prior("dnorm(0,1)")
Math09 ~ Reading09*prior("dnorm(0,1)")
Math11 ~ Reading11*prior("dnorm(0,1)")
Math13 ~ Reading13*prior("dnorm(0,1)")
Math15 ~ Reading15*prior("dnorm(0,1)")
Math17 ~ Reading17*prior("dnorm(0,1)")
   
int ~ prior("dnorm(0,.1)")*1
'

The model summary shows that the model does not converge (even after increasing iterations), and no regression estimates with the N(0,1) prior I specified appear in the output:

** WARNING ** blavaan (0.4-7) did NOT converge after 12000 adapt+burnin iterations
** WARNING ** Proceed with caution

  Number of observations                            50

  Number of missing patterns                         1

  Statistic                                 MargLogLik         PPP
  Value                                       -874.188       0.737

Latent Variables:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior    
  int =~                                                                    
    Math03            1.000                                                  
    Math05            1.000                                                  
    Math07            1.000                                                  
    Math09            1.000                                                  
    Math11            1.000                                                  
    Math13            1.000                                                  
    Math15            1.000                                                  
    Math17            1.000                                                  
  slp =~                                                                    
    Math03            0.000                                                  
    Math05            2.000                                                  
    Math07            4.000                                                  
    Math09            2.241    0.779    0.833    3.853    1.920 dnorm(0,1e-2)
    Math11            2.011    1.476   -0.045    5.052    3.601 dnorm(0,1e-2)
    Math13            1.180    2.723   -1.811    6.604    5.742 dnorm(0,1e-2)
    Math15            0.285    3.196   -3.061    6.615    6.681 dnorm(0,1e-2)
    Math17           -0.074    3.385   -3.640    6.773    6.570 dnorm(0,1e-2)

Covariances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior    
  int ~~                                                                    
    slp              -0.008    0.027   -0.063    0.046    1.157 dwish(iden,3)

Intercepts:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior    
   .Math03  (Rd03)    0.146    2.236   -3.453    5.468    2.389 dnorm(0,1e-3)
   .Math05  (Rd05)    0.579    2.006   -2.775    3.774    3.447 dnorm(0,1e-3)
   .Math07  (Rd07)    1.013    3.045   -3.684    6.555    4.928 dnorm(0,1e-3)
   .Math09  (Rd09)    0.426    1.851   -2.811    3.381    3.194 dnorm(0,1e-3)
   .Math11  (Rd11)    0.015    1.441   -2.813    2.324    2.347 dnorm(0,1e-3)
   .Math13  (Rd13)   -0.687    1.263   -2.999    1.917    1.167 dnorm(0,1e-3)
   .Math15  (Rd15)   -1.130    1.677   -4.092    2.477    1.463 dnorm(0,1e-3)
   .Math17  (Rd17)   -1.292    1.926   -4.992    2.274    1.652 dnorm(0,1e-3)
    int              -0.146    2.236   -5.494    3.447    2.380   dnorm(0,.1)
    slp              -0.217    0.883   -1.779    1.223    3.390 dnorm(0,1e-2)

Variances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior    
   .Math03            0.128    0.043    0.047    0.206    1.811  dgamma(1,.5)
   .Math05            0.071    0.020    0.037    0.111    1.265  dgamma(1,.5)
   .Math07            0.067    0.020    0.032    0.106    1.101  dgamma(1,.5)
   .Math09            0.050    0.012    0.029    0.074    1.028  dgamma(1,.5)
   .Math11            0.072    0.017    0.042    0.104    1.006  dgamma(1,.5)
   .Math13            0.067    0.018    0.035    0.102    1.244  dgamma(1,.5)
   .Math15            0.062    0.019    0.031    0.101    1.342  dgamma(1,.5)
   .Math17            0.092    0.032    0.042    0.159    1.555  dgamma(1,.5)
    int               0.971    0.209    0.599    1.384    1.025 dwish(iden,3)
    slp               0.029    0.006    0.018    0.041    1.000 dwish(iden,3)

I've constructed this same model without specifying priors on the covariates; it converges fine and displays the regression estimates correctly, like so:

Regressions:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior    
  Math03 ~                                                                  
    Reading03         0.825    0.070    0.688    0.962    1.000 dnorm(0,1e-2)
  Math05 ~                                                                  
    Reading05         0.705    0.060    0.591    0.825    1.001 dnorm(0,1e-2)
  Math07 ~                                                                  
    Reading07         0.613    0.069    0.473    0.745    1.001 dnorm(0,1e-2)
  Math09 ~                                                                  
    Reading09         0.690    0.060    0.573    0.806    1.001 dnorm(0,1e-2)
  Math11 ~                                                                  
    Reading11         0.585    0.071    0.443    0.721    1.001 dnorm(0,1e-2)
  Math13 ~                                                                  
    Reading13         0.541    0.080    0.381    0.692    1.001 dnorm(0,1e-2)
  Math15 ~                                                                  
    Reading15         0.542    0.081    0.383    0.702    1.000 dnorm(0,1e-2)
  Math17 ~                                                                  
    Reading17         0.576    0.082    0.412    0.735    1.000 dnorm(0,1e-2)

How do I specify priors on these regressions, and how do I ensure the output shows the regression estimates?

Thank you!
Kjorte

Ed Merkle

Aug 7, 2023, 3:55:35 PM
to Kjorte Harra, blavaan
Hi Kjorte,

Nice to meet you the other week. I have not tried this code out yet, but I suspect the issue is that the prior() statement comes after the variable name instead of before it. Maybe try changing the lines to look like:

Math03 ~ prior("dnorm(0,1)") * Reading03
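
Applying that same change to each covariate line from the original syntax would give something like the following (only the placement of prior() changes; everything else stays as posted):

```r
# time-varying covariates, with prior() placed before the predictor
Math03 ~ prior("dnorm(0,1)") * Reading03
Math05 ~ prior("dnorm(0,1)") * Reading05
Math07 ~ prior("dnorm(0,1)") * Reading07
Math09 ~ prior("dnorm(0,1)") * Reading09
Math11 ~ prior("dnorm(0,1)") * Reading11
Math13 ~ prior("dnorm(0,1)") * Reading13
Math15 ~ prior("dnorm(0,1)") * Reading15
Math17 ~ prior("dnorm(0,1)") * Reading17
```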


Ed

Łukasz Stasiełowicz

Aug 8, 2023, 3:21:55 AM
to blavaan

Hi Kjorte,

There are a couple of unusual things, so I’ll start with some questions:

1. According to the comment in your syntax, you’re using fixed coefficients for the latent growth variables (“intercept and slope with fixed coefficients”). However, that is not entirely true: several of the slope (slp) loadings are freely estimated (Math09 + Math11 + Math13 + Math15 + Math17). This strategy deviates from the usual growth models (http://ecmerkle.github.io/blavaan/reference/bgrowth.html?q=bgrowth#ref-examples). Is there a reason for that?

2. Another question pertains to the priors. Some priors seem to be very narrow compared to the estimates in the output, e.g., the slp intercept:

Prior: dnorm(0,1e-2), in other words (0, 0.01)
Estimate: -0.217

Or the int intercept:

Prior: dnorm(0,.1)
Estimate: -0.146

How are the Math scores scaled? What are the lowest and highest possible scores? Can you confirm that the priors are appropriate?

3. Are you using bgrowth() or other blavaan functions to fit the model? bgrowth() is usually recommended for growth models, e.g.,

fit <- bgrowth(BayesNonLinGCM2, data = yourdata)
summary(fit)

Can you show the full syntax and output from a simpler model (without the covariates), so we can rule out any problems with the initial model?

Best,

Lukasz

Ed Merkle

Aug 8, 2023, 9:52:09 AM
to Łukasz Stasiełowicz, blavaan
@Kjorte: I tried out a related model and can verify that you need the prior() statements before the variable name instead of after.

@Lukasz: I don't necessarily think we need the rest of the code, at least not to answer the original question. One thing to note is that this model uses target="jags" instead of the default target="stan", so a prior like dnorm(0,.01) specifies a precision of .01 as opposed to an SD of .01, and is therefore more diffuse.
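
To make the parameterization difference concrete: JAGS's dnorm(mean, precision) relates to a standard deviation via sd = 1/sqrt(precision). A small R sketch of the arithmetic (not from the thread, just for reference):

```r
# Convert a JAGS precision to a standard deviation
prec_to_sd <- function(prec) 1 / sqrt(prec)

prec_to_sd(0.01)  # dnorm(0, .01) in JAGS = Normal with SD 10: diffuse
prec_to_sd(1)     # dnorm(0, 1)   in JAGS = Normal with SD 1: standard normal
```

So the dnorm(0,1e-2) entries in the output are weakly informative rather than narrow.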

Ed

Kjorte Harra

Aug 8, 2023, 10:23:34 AM
to blavaan
Hi, 

It was nice to meet you the other week too, Ed! Thanks for the responses; I can confirm that placing the prior before the covariate fixed my problem.

@Lukasz, thank you for the response as well. To clarify: I am using standardized data (all variables have mean 0 and variance 1), and I am fitting the model with the bgrowth() function.

For now, I consider this issue resolved. I will write back if I encounter further problems. 

Thanks!
Kjorte

Łukasz Stasiełowicz

Aug 9, 2023, 3:09:54 AM
to blavaan
@Kjorte: Since you did not respond to my first question, I’ll assume that you have already thought about the modeling strategy and its potential limitations (e.g., modeling the first part of the Math trajectory as linear, and assuming that only the Math scores, but not the Reading scores, change over time), and that my question was simply too basic. In case I’m mistaken, the following article could be helpful:
Hori, K., & Miyazaki, Y. (2022). Latent curve detrending for disaggregating between-person effect and within-person effect. Structural Equation Modeling. https://doi.org/10.1080/10705511.2022.2069113  

@Ed: Of course, how could I forget that there are differences between jags and stan. Thanks for pointing this out!

Kjorte Harra

Aug 9, 2023, 11:07:02 AM
to blavaan
Hi Lukasz,

I appreciate the follow-up and additional resources! I shall keep these in mind as I construct my models.

Thanks,
Kjorte
