Yes, you understood correctly, because I also implemented the alternatives you suggested. In case you are interested, let me share how the different methods' results compare (their similarities and differences) with respect to my research hypothesis tests, along with some questions about each.
1. I followed the method suggested by Logan et al. (2022). Accordingly, I fit a CFA including all ordinal latent variables and then extracted factor scores via the Ten Berge method. I used psych::factor.scores for this because it can work with the polychoric correlation matrix resulting from the CFA.
Logan, J. A. R., Jiang, H., Helsabeck, N., et al. (2022). Should I allow my confirmatory factors to correlate during factor score extraction? Implications for the applied researcher. Quality & Quantity, 56, 2107–2131. https://doi.org/10.1007/s11135-021-01202-x
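For concreteness, here is a minimal sketch of what I did in step 1. The data, item names, and the two-factor structure are purely illustrative (I use an exploratory fa() solution as a stand-in for my actual CFA loadings); only the polychoric + tenBerge workflow reflects my real analysis.

```r
library(psych)

# Simulate ordinal items for illustration only (not my actual data)
set.seed(1)
latent <- rnorm(300)
items <- data.frame(
  sapply(1:6, function(i) cut(latent + rnorm(300), breaks = 4, labels = FALSE))
)
names(items) <- paste0("item", 1:6)

# Polychoric correlations, then a factor solution on that matrix
pc  <- polychoric(items)
fa1 <- fa(pc$rho, nfactors = 2, n.obs = 300)

# Ten Berge factor scores, passing the polychoric matrix via rho
fs <- factor.scores(items, fa1, rho = pc$rho, method = "tenBerge")
head(fs$scores)
```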
2. I am not able to use the sam() function because one of my latent variables is a second-order factor, which the function does not yet support. Alternatively, I again fit the CFA including all ordinal latent variables and extracted latent variable scores with lavPredict(fitted.cfa.object, method = "EBM", transform = TRUE). Using those scores together with the control variables, I then ran the path analysis with the sem function.
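To make step 2 concrete, here is a sketch using lavaan's built-in HolzingerSwineford1939 data. The measurement model, the path model, and the choice of ageyr as a control are placeholders for my actual variables; the lavPredict() call with method = "EBM" and transform = TRUE is what I actually used.

```r
library(lavaan)

# Illustrative CFA (my real model has ordinal indicators and a
# second-order factor; this stand-in keeps the sketch short)
cfa.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'
fit.cfa <- cfa(cfa.model, data = HolzingerSwineford1939)

# EBM = empirical Bayes modal scores; transform = TRUE rescales the
# scores so their covariance matches the model-implied one
fs <- lavPredict(fit.cfa, method = "EBM", transform = TRUE)

# Bind scores to the data and run the path model with a control variable
dat <- cbind(HolzingerSwineford1939, fs)
path.model <- 'textual ~ visual + ageyr'
fit.path <- sem(path.model, data = dat)
summary(fit.path)
```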
3. Before using the blavaan package, as you advised, I fit the CFA with DWLS (not with ML, as my variables are ordinal) and obtained good fit indices. Then I ran the same model with bcfa (burnin = 500 and sample = 1000). Even in this case, fitting took around 8 hours and the resulting object exceeded 1 GB, so unfortunately I could not inspect or save the results. However, I did see the warning that the Rhat values are greater than 1, meaning the model needs more iterations to converge. To overcome this limitation, I have now subscribed to a higher tier of Posit Cloud and started running the model there 10 hours ago (burnin = 1000 and sample = 10000); it is still processing, and I hope to get a result. Is there any free alternative here? For example, can I run the code directly in R (outside RStudio) and save the result, to get around the 1 GB memory limitation I hit in RStudio?
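One workaround I am considering for the memory question above: run the fitting script non-interactively (e.g. with `Rscript fit_bcfa.R` from a terminal) and save only the pieces of the fit I need, rather than keeping the full fitted object in an RStudio session. The model string and file names below are placeholders; whether this actually stays under the memory limit is an assumption on my part.

```r
library(blavaan)

# Placeholder model; my real model has ordinal indicators
model <- 'f1 =~ x1 + x2 + x3'
fit <- bcfa(model, data = HolzingerSwineford1939,
            burnin = 1000, sample = 10000)

# Persist compact summaries instead of the >1 GB fit object
saveRDS(parameterEstimates(fit), "bcfa_estimates.rds")   # point estimates
saveRDS(blavInspect(fit, "psrf"), "bcfa_rhat.rds")       # Rhat values
```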
4. As the last alternative, I applied CFA (bcfa) separately for each latent variable, as you suggested, and saved their factor scores for the subsequent path analysis. Because the models are smaller, I managed to run more iterations in blavaan (e.g., up to burnin = 1000 and sample = 4500) within the 1 GB memory constraint. Almost all of my models have a PPP value of zero. Should I take this as evidence of poor model fit, given that I have no missing values? Moreover, in a few of my models I again got the warning that the Rhat values are greater than 1, so those models need more iterations to converge. How critical is this warning (does it affect the latent variable scores substantially)? If I increase the number of iterations to very high values, how likely am I to get rid of this warning, do you think? In short, is the extra effort worth it?
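For reference, a sketch of the per-factor loop I used in step 4. The model strings are placeholders for my actual factors; fitMeasures(fit, "ppp") is where I read off the PPP values, blavInspect(fit, "psrf") is where the Rhat warning shows up, and blavPredict with type = "lvmeans" gives the posterior-mean factor scores I carry into the path analysis.

```r
library(blavaan)

# Placeholder one-factor models, fit one at a time to stay under 1 GB
models <- list(
  visual  = 'visual  =~ x1 + x2 + x3',
  textual = 'textual =~ x4 + x5 + x6'
)

scores <- lapply(models, function(m) {
  fit <- bcfa(m, data = HolzingerSwineford1939,
              burnin = 1000, sample = 4500)
  cat("PPP:", fitMeasures(fit, "ppp"), "\n")  # posterior predictive p
  print(blavInspect(fit, "psrf"))             # Rhat per parameter
  blavPredict(fit, type = "lvmeans")          # posterior-mean scores
})
```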
When I applied methods 1, 2, and 4 above to test my research model, I found that 2 and 4 give very similar results. Method 1 does not show opposite results, but, unlike 2 and 4, it fails to find significant effects for many of the relationships tested in the model. Do you have any comments on which of these alternatives is methodologically more appropriate?
Sorry for the long post; your comments and answers would be very helpful.