# of successful bootstraps = 0 when I increase the total number of bootstraps?


Adriaan Van Liempt

Sep 28, 2020, 8:32:34 AM
to lavaan
A colleague pointed me to the following thread https://groups.google.com/g/lavaan/c/JO5pr0fj5TY/m/AZw6722VKbwJ for a problem I had (it was similar to the one posted there in 2014): in order to test for mediation, I had to bootstrap the SEs. I was also using 'ordered' data and, for similar reasons, the WLSMV estimator. Using the WLSMV estimator, however, is not possible in combination with bootstrapping.

The post offered a suggestion, but I run into the problem that roughly 1 in 10 draws fails to converge. That in itself is not necessarily my problem, but lavaan seems to stop the process well before the number of draws I requested. So far I have managed to reach 291 draws.

Do I have options? I would very much like to bootstrap the SEs using the WLSMV estimator. The above suggestion of adding test = "scaled.shifted" seems a step in the right direction, but lavaan simply stops (I assume this is related to the unsuccessful draws?). Is there a way to 'force' lavaan to continue? I would prefer to have at least 1000 successful draws.

Further information:
I am using lavaan 0.6-7 and I ran the following command:
MedAppCtrlBoot2.fit <- sem(MedAppCtrl.sem,
                          se = "bootstrap",
                          test = "scaled.shifted",
                          bootstrap = 5000,
                          parallel = "multicore",
                          estimator = "DWLS",
                          verbose = TRUE)
summary(MedAppCtrlBoot2.fit, fit.measures=TRUE, standardized=TRUE)
parameterEstimates(MedAppCtrlBoot2.fit, standardized=TRUE)

PS. I have also tried the above without parallel processing; it made no difference. I had only hoped to speed things up.

Thank you very much for any help.

Alex Schoemann

Sep 28, 2020, 10:40:39 AM
to lavaan
One option might be to use a different method of testing the indirect effect. A Monte Carlo confidence interval (see http://quantpsy.org/pubs/preacher_selig_2012.pdf for details) allows you to test the indirect effect accurately and powerfully without bootstrapping. If you're interested in this approach, the monteCarloMed function in semTools works with lavaan objects to create Monte Carlo confidence intervals for any combination of parameters (I'm not 100% sure it works with WLS, but if not, please let me know and we'll fix that).

It's impossible to know exactly what the issue with bootstrapping is here, but if I had to guess, I'd say it's likely due either to missing data (so much missing data in a bootstrap sample that the model isn't estimable) or to categories with small frequencies (so that in some bootstrap samples a variable is a constant).
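A hedged sketch of what this approach could look like (the model syntax, the data frame `mydata`, the ordered variables, and the path labels `a`/`b` are all hypothetical placeholders; monteCarloMed was the semTools function name at the time, later renamed monteCarloCI):

```r
library(lavaan)
library(semTools)

# Hypothetical mediation model; a*b is the indirect effect of interest
model <- '
  M ~ a*X
  Y ~ b*M + c*X
  ab := a*b
'
fit <- sem(model, data = mydata, estimator = "DWLS",
           ordered = c("M", "Y"))

# Monte Carlo CI for the indirect effect -- no resampling of the data needed,
# only draws from the sampling distribution of the parameter estimates
monteCarloMed('a*b', object = fit, rep = 20000, CI = 95)
```

Because the draws come from the estimated parameter covariance matrix rather than from resampled data, this sidesteps the non-convergence problem entirely.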


Adriaan Van Liempt

Sep 28, 2020, 10:57:43 AM
to lav...@googlegroups.com
There are no missing values, but the latter, categories with small frequencies, is indeed possible. I will read the suggested paper and see if I can get this working, hopefully later this week. Thank you very much for your answer!

PS. I noticed that I did not update my command syntax above: I omitted the ordered= parameter.

You received this message because you are subscribed to a topic in the Google Groups "lavaan" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/lavaan/swK2RSFAGjY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to lavaan+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/lavaan/b64f97bc-447f-447c-bd86-a3074e72069fn%40googlegroups.com.

Stas Kolenikov

Sep 28, 2020, 12:10:07 PM
to lav...@googlegroups.com
I would not trust standard errors if you are getting 291 usable replicates out of 5,000. Something is very wrong with the combination of your model, data, and estimator. If you have 0/1 variables where only 3% of observations are positive/1 and 97% are negative/0, then your data are so skewed that the asymptotic assumptions underlying both the ADF estimation method and the bootstrap as a concept really require tens of thousands of cases to work well.
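The rare-category failure mode is easy to see by simulation; a minimal sketch (the sample size and 3% proportion are made up for illustration):

```r
set.seed(1)
n <- 200
x <- rbinom(n, 1, 0.03)   # binary item with roughly 3% positives

# Fraction of bootstrap resamples that contain no positives at all,
# i.e. the variable becomes a constant and the model is inestimable
mean(replicate(10000, sum(sample(x, replace = TRUE)) == 0))
```

With larger samples this probability shrinks, which is one way to see why skewed categorical data need many more cases before the bootstrap behaves.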

-- Stas Kolenikov, PhD, PStat (ASA, SSC)  @StatStas
-- Principal Scientist, Abt Associates @AbtDataScience
-- Opinions stated in this email are mine only, and do not reflect the position of my employer
-- http://stas.kolenikov.name



Adriaan Van Liempt

Sep 28, 2020, 3:47:19 PM
to lavaan
Yes, that would indeed increase my doubts as well. However, perhaps I have not explained the situation well enough: the process stopped at 291 draws. I did not mean to imply that only 291 out of 5000 draws were successful. Roughly 1 in 10 draws fails to converge; I simply never reach the 5000 draws. 291 was the highest number of draws I have seen before the bootstrap process stopped. Moreover, when I look at the summary of the fit indices, I see 0/5000 successful draws. So even the draws that were successful are simply not counted.


Sep 28, 2020, 3:54:32 PM
to lav...@googlegroups.com

Well, if it is due to the implemented procedure, which I don't know, use the boot package; there you have full control. Whether the bootstraps are useful is of course another question.
On 28.09.20 at 21:47, "adri...@gmail.com" <adri...@gmail.com> wrote:
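A minimal sketch of that boot-package route (the model string, the data frame `mydata`, the ordered variables, and the `a`/`b` labels are placeholders); non-converged draws return NA and are simply dropped when computing the percentile interval, so the process never stops early:

```r
library(boot)
library(lavaan)

model <- '
  M ~ a*X
  Y ~ b*M + c*X
'

# Statistic: refit the model on the resampled rows and return the
# indirect effect; NA signals a failed or non-converged draw
boot_fun <- function(data, idx) {
  fit <- try(sem(model, data = data[idx, ], estimator = "DWLS",
                 ordered = c("M", "Y")), silent = TRUE)
  if (inherits(fit, "try-error") || !lavInspect(fit, "converged"))
    return(NA)
  unname(coef(fit)["a"] * coef(fit)["b"])
}

b <- boot(mydata, boot_fun, R = 5000)

# Percentile interval from the converged draws only
quantile(b$t, c(0.025, 0.975), na.rm = TRUE)
```

Note that discarding failed draws is itself a judgment call: if failures are systematic (e.g. tied to rare categories), the surviving draws are not a random sample of resamples, which echoes the concern raised above.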

Yves Rosseel

Sep 29, 2020, 3:37:52 AM
to lav...@googlegroups.com
Would you be able to send me your data and a short R script? I would like to investigate this further.



Adriaan Van Liempt

Sep 29, 2020, 4:31:32 PM
to lavaan
Thank you Yves, the new beta version of lavaan solved the problem!
