
Incomplete output


Sophie Hölscher

Jul 22, 2021, 9:08:05 AM
to blavaan
Hi,

I am currently working with bsem in blavaan, but am receiving incomplete output...

Here is my model:

FOVmodel_comp1 <- '
# Latent variables
lranx =~ m_r_anx1 + m_r_anx2 + m_r_anx3
lse_div =~ m_se_div1 + m_se_div2 + m_se_div3 + m_se_div4 + m_se_div5 + m_se_div6
lcd_stress =~ m_cdstress2 + m_cdstress3 + m_cdstress4
ljobsat =~ m_jobsat1 + m_jobsat2 + m_jobsat3
lt_rel =~ m_t_rel1 + m_t_rel2 + m_t_rel3 + m_t_rel4_r + m_t_rel5
lt_aut =~ m_t_aut1 + m_t_aut2 + m_t_aut4_r + m_t_aut5_r

# Regressions
lse_div ~ lranx + studmb_l + intfri2_l
lcd_stress ~ lranx + studmb_l + intfri2_l
ljobsat ~ lranx + lse_div + lcd_stress + studmb_l + intfri2_l
lt_rel ~ lranx + lse_div + lcd_stress + studmb_l + intfri2_l
lt_aut ~ lranx + lse_div + lcd_stress + studmb_l + intfri2_l
'
bfit_comp1 <- bsem(FOVmodel_comp1, data = FOVdata_imp,
                   dp = dpriors(nu = "normal(2.5,1)"), n.chains = 2,
                   burnin = 5000, sample = 5000)

summary(bfit_comp1)
For example, the summary() command only returns the Estimate column, without Post.SD, pi.lower, pi.upper, etc.

blavFitIndices() only returns BRMSEA, BGammaHat, adjBGammaHat, and BMC, but not the BCFI or BTLI, even when they are requested (and a null model was specified).

And summary() of the blavFitIndices object does not return any output...

Is there a mistake in my model specification that could lead to these incomplete outputs? Or is it a problem with the package?

Any help is appreciated!
Thanks a lot,
Sophie

Ed Merkle

Jul 22, 2021, 10:12:20 AM
to Sophie Hölscher, blavaan
Sophie,

I have heard some reports of summary() problems and have included some fixes in recent versions. If you have not updated blavaan recently, you might try upgrading to 0.3-17.
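
For example, the usual update steps:

install.packages("blavaan")   # update from CRAN, then restart R
packageVersion("blavaan")     # check that this reports at least 0.3-17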

If you have updated blavaan and it does not work, maybe try:

summ1 <- getMethod(summary, "blavaan")
summ1(bfit_comp1)

summ2 <- getMethod(summary, "blavFitIndices")
summ2(your_blavFitIndices_object)   # replace with your blavFitIndices object

If this code works when the usual summary() does not, I'd be interested to hear about it.

Thanks,
Ed

Sophie Hölscher

Jul 29, 2021, 2:04:07 AM
to blavaan
Hi Ed,

thanks a lot for your help. I updated blavaan, reran my models, and now get the full summary() output. However, I still have problems getting the complete blavFitIndices() output, even with the code you sent me. I am receiving a new warning message:

"Incremental fit indices were not calculated. Save equal number of draws from the posterior of both the hypothesized and null models."

I have trouble understanding what this means. For both the null and the hypothesized model I specified the same settings: n.chains = 2, burnin = 10000, and sample = 10000. Could you explain the warning message to me?
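
For reference, this is roughly how I call both models (the null-model names "FOVnull_comp1" and "bfit_null1" are just placeholders here, not my exact syntax):

bfit_comp1 <- bsem(FOVmodel_comp1, data = FOVdata_imp,
                   dp = dpriors(nu = "normal(2.5,1)"),
                   n.chains = 2, burnin = 10000, sample = 10000)

# Null model fit with the same sampling settings
bfit_null1 <- bsem(FOVnull_comp1, data = FOVdata_imp,
                   n.chains = 2, burnin = 10000, sample = 10000)

blavFitIndices(bfit_comp1, baseline.model = bfit_null1)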

Thanks a lot for your time & help,
Best regards,
Sophie

Mauricio Garnier-Villarreal

Jul 29, 2021, 10:34:22 AM
to blavaan
Sophie

This message says that, to calculate the incremental fit indices, you need the same number of saved samples from the hypothesized and null models. This is not related to the summary issues. We set this as a requirement because we estimate the distributions of the fit indices by comparing posteriors from independently run models, so it is a reasonable requirement for stability.

If, as you say, both models have the same number of saved iterations, the message shouldn't be there. Could you provide more information to help figure out where the issue might be? For example, the full R code, a summary of the data, both models, and sessionInfo().

When I run the example below, it works fine with the current CRAN version, 0.3-17:



library(blavaan)

# Null model: only observed-variable variances, no latent structure
HS.null <- '
x1 ~~ x1
x2 ~~ x2
x3 ~~ x3
x4 ~~ x4
x5 ~~ x5
x6 ~~ x6
x7 ~~ x7
x8 ~~ x8
x9 ~~ x9
'
fit_null <- bcfa(HS.null,
            data=HolzingerSwineford1939, orthogonal=T)
summary(fit_null)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit <- bcfa(HS.model, std.lv=T,
            data=HolzingerSwineford1939)
summary(fit)
fitMeasures(fit)

# BCFI and other incremental indices require the baseline (null) model
bfits <- blavFitIndices(fit, baseline.model = fit_null)
summary(bfits)


Giada Venaruzzo

Feb 24, 2024, 6:29:55 AM
to blavaan
Hi, 

I'm trying to evaluate a cfa model, but when I try to estimate the fit indices, I only get results for BRMSEA, BGammaHat, adjBGammaHat and BMc.
I tried to extract only the CFI using the following line:

 blavFitIndices(bcfa, fit.measures = "BCFI")

And this is the error message I get:

Posterior mean (EAP) of devm-based fit indices:

Error in (new("standardGeneric", .Data = function (object)  :
  No fit indices were calculated
In addition: Warning message:

39 (3.6%) p_waic estimates greater than 0.4. We recommend trying loo instead. 

What can I do?

Thanks,

Giada 

Ed Merkle

Feb 24, 2024, 4:12:30 PM
to Giada Venaruzzo, blavaan
Giada,

You need to specify a null/baseline model in order to obtain BCFI. One example of this appears here:

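The basic idea, in brief (fit and fit_null here are placeholders for your own hypothesized and baseline models, each fit with the same number of saved posterior samples):

bfi <- blavFitIndices(fit, baseline.model = fit_null)
summary(bfi)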

Ed

Giada Venaruzzo

Feb 25, 2024, 10:43:20 AM
to blavaan
Yes, I'm sorry, I posted the wrong line here, because at first I thought it worked like the lavaan package. I later noticed that I had saved the null model I specified under a different name, and that was why it wasn't working.
I'm really sorry. 

Thank you very much,

Giada

Giada Venaruzzo

Feb 26, 2024, 12:59:29 PM
to blavaan
Hello,

Thank you.
I would like to ask a question: I found that while the RMSEA improves with some changes, the BRMSEA gets worse, even though some other indices, for example the BCFI, improve. What could possibly cause this?

Thank you in advance.

Giada

Robert Wilcom

Feb 26, 2024, 1:42:53 PM
to Giada Venaruzzo, blavaan
I suggest directly comparing the formulas for the indices of interest if you wish to actually understand why one increased while another decreased. For example:  https://davidakenny.net/cm/fit.htm
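
For reference, the frequentist formula given there is roughly RMSEA = sqrt( max(chi-square - df, 0) / (df * (N - 1)) ), so the index depends on the model's degrees of freedom as well as on its discrepancy.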


Ed Merkle

Feb 26, 2024, 2:28:55 PM
to Robert Wilcom, Giada Venaruzzo, blavaan
Robert,

That link has no information about the Bayesian variants of the indices. I believe the question is specifically about the behavior of the Bayesian indices as compared to their frequentist counterparts. I am hoping that Mauricio or Terrence might respond, because they probably have the most relevant experience here.

Ed

Robert Wilcom

Feb 26, 2024, 4:30:14 PM
to Ed Merkle, Giada Venaruzzo, blavaan
In that case, I recommend this article from Mauricio and Terrence: https://epublications.marquette.edu/cgi/viewcontent.cgi?article=1638&context=nursing_fac The appendix contains a helpful cheat sheet.

Giada Venaruzzo

Feb 27, 2024, 11:33:01 AM
to blavaan
Thank you.
I still don't understand why it happens.
Fitting 3 different models, I get a lower RMSEA (from the frequentist fit) for the third model than for the first (model 1: 0.079, model 2: 0.076, and model 3: 0.075); instead, when I fit a Bayesian CFA, I get a higher BRMSEA for the third model, even if the difference is not big (model 1: 0.093, model 2: 0.093, and model 3: 0.096).
The first model contains all of the available variables, while models 2 and 3 are specified by removing some variables.

Thank you in advance,

Giada

Mauricio Garnier-Villarreal

Feb 28, 2024, 1:00:11 PM
to blavaan
Hi Giada

A few questions to see where the issue might be:
- Are these the means or medians of the indices' posterior distributions? If you can, compare the two.
- What is the model? How big is it? I have seen the BRMSEA be more different for small models; if that is the case, you can try changing the argument to pD = "dic" to use the DIC-based pD, which is closer to the frequentist measure for smaller models (see the line of code after this list).
- Can you show me the output of summary() of the fit indices, and of fitMeasures() for your model?
- In general I don't recommend the RMSEA (frequentist or Bayesian), because it spreads the misfit across the degrees of freedom, so it will benefit larger models.
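
For the pD change, something like this (fit and fit_null stand in for your own hypothesized and baseline models):

# DIC-based pD instead of the default LOO-based pD
bfits_dic <- blavFitIndices(fit, baseline.model = fit_null, pD = "dic")
summary(bfits_dic)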


-- Mauricio

Mauricio Garnier-Villarreal

Mar 5, 2024, 10:08:28 AM
to blavaan
Hi all

Giada wrote to me in private, as she didn't want to share that many details about the data and models here (totally fine). But I wanted to post an update here about what the issue was.

In her model, the pD (effective number of parameters) for LOO and DIC differed a lot: the LOO one was higher, and the DIC one was closer to the frequentist npar. When calculating BRMSEA, this difference in pD has a large effect, so it was causing the difference in results. Other indices are not as sensitive to this difference.
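
If anyone wants to check this in their own model, the different pD estimates can be compared directly (a sketch; fit is a placeholder for your fitted object, and the exact measure labels may vary across blavaan versions):

# Effective number of parameters under DIC, WAIC, and LOO, next to the actual npar
fitMeasures(fit, c("npar", "p_dic", "p_waic", "p_loo"))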

Still not clear why this large difference is happening