I will try to explain what *might* have happened, based on my limited understanding. Other members please correct me if I am wrong.
1) Are the CIs bootstrap CIs?
We can rule out the possibility that the CIs are normal theory CIs based on SE by checking whether the CIs are symmetric about the point estimates.
parameterEstimates(fit, boot.ci.type = "perc")
#> lhs op rhs label est se z pvalue ci.lower ci.upper
#> 1 Y ~ X c 0.497 0.092 5.398 0 0.342 0.730
#> 2 M ~ X a 0.990 0.015 65.506 0 0.957 1.017
#> 3 Y ~ M b 0.497 0.097 5.144 0 0.259 0.672
#> 4 Y ~~ Y 0.015 0.003 4.618 0 0.008 0.020
#> 5 M ~~ M 0.020 0.004 4.904 0 0.012 0.026
#> 6 X ~~ X 0.980 0.000 NA NA 0.980 0.980
#> 7 ab := a*b ab 0.493 NA NA NA 0.257 0.655
#> 8 total := c+(a*b) total 0.990 NA NA NA 0.963 1.020
If the CIs are based on the SE, they must be symmetric; that is, the distances of ci.lower and ci.upper from the point estimates (est) are the same. If they are not symmetric, and you set se = "bootstrap" and requested bootstrap CIs (boot.ci.type), they should be bootstrap CIs.
For my example:
est <- parameterEstimates(fit, boot.ci.type = "perc")
abs(est$ci.lower - est$est)
#> [1] 0.155503344 0.033346802 0.238254959 0.006786193 0.007784402 0.000000000
#> [7] 0.235291385 0.027343440
abs(est$ci.upper - est$est)
#> [1] 0.232163895 0.027023086 0.174776320 0.005492289 0.006801941 0.000000000
#> [7] 0.162719272 0.029812472
They are not symmetric about the point estimates. They are bootstrap CIs, as expected.
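The symmetry check above can be wrapped in a small helper. This is just a base-R sketch (the function name ci_asymmetric is mine, not part of lavaan); it takes the est, ci.lower, and ci.upper columns as returned by parameterEstimates() and flags rows whose lower and upper distances from the estimate differ by more than a tolerance:

```r
# Flag parameters whose CI is asymmetric about the point estimate.
# `est` is a data frame with columns est, ci.lower, ci.upper.
# (Helper name is illustrative, not a lavaan function.)
ci_asymmetric <- function(est, tol = 1e-6) {
  lower_dist <- abs(est$est - est$ci.lower)
  upper_dist <- abs(est$ci.upper - est$est)
  abs(lower_dist - upper_dist) > tol
}

# Toy example: first row symmetric (SE-based style), second asymmetric
toy <- data.frame(est      = c(0.500, 0.493),
                  ci.lower = c(0.300, 0.257),
                  ci.upper = c(0.700, 0.655))
ci_asymmetric(toy)
#> [1] FALSE  TRUE
```

A TRUE anywhere (other than fixed parameters, whose distances are both zero) suggests the CIs are not plain SE-based CIs.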
If I did not ask for bootstrapping, this is the result:
fit <- sem(model, data = Data)
parameterEstimates(fit, boot.ci.type = "perc")
#> lhs op rhs label est se z pvalue ci.lower ci.upper
#> 1 Y ~ X c 0.497 0.123 4.055 0 0.257 0.738
#> 2 M ~ X a 0.990 0.020 49.624 0 0.951 1.029
#> 3 Y ~ M b 0.497 0.123 4.055 0 0.257 0.738
#> 4 Y ~~ Y 0.015 0.003 5.000 0 0.009 0.020
#> 5 M ~~ M 0.020 0.004 5.000 0 0.012 0.027
#> 6 X ~~ X 0.980 0.000 NA NA 0.980 0.980
#> 7 ab := a*b ab 0.493 0.122 4.042 0 0.254 0.731
#> 8 total := c+(a*b) total 0.990 0.020 49.624 0 0.951 1.029
est <- parameterEstimates(fit, boot.ci.type = "perc")
abs(est$ci.lower - est$est)
#> [1] 0.240446325 0.039101159 0.240446325 0.005752642 0.007644644 0.000000000
#> [7] 0.238835344 0.039101159
abs(est$ci.upper - est$est)
#> [1] 0.240446325 0.039101159 0.240446325 0.005752642 0.007644644 0.000000000
#> [7] 0.238835344 0.039101159
They are symmetric about the point estimates. They are not bootstrap CIs.
2) Why NAs for SE and z but bootstrap CIs available?
My guess is that, when computing bootstrap SEs for user-defined parameters, NAs are not removed. Therefore, if NAs are present in any of the bootstrap samples, the SE is NA, and hence the z and pvalue are NA as well.
However, NAs are removed when forming percentile bootstrap CIs. A percentile bootstrap CI does not need the SE, so lavaan can return percentile bootstrap CIs even when the SEs are NA.
This usually is not an issue. If we prefer percentile bootstrap CIs to z-based CIs, as for indirect effects, then we are allowing for asymmetry in the sampling distribution, and the SEs and p-values are not so useful. Actually, in the case of indirect effects, the p-values based on bootstrap SEs should not be interpreted, in my opinion, if we prefer percentile (or BCa) bootstrap CIs.
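The point that percentile CIs need no SE can be seen directly: a percentile CI is just the empirical quantiles of the bootstrap estimates, so NAs can be dropped before taking quantiles, while an SE computed without removing NAs is itself NA. A base-R sketch with fake bootstrap estimates of ab (the numbers are made up for illustration):

```r
set.seed(1)
boot_ab <- rnorm(1000, mean = 0.49, sd = 0.10)  # fake bootstrap estimates of ab
boot_ab[12] <- NA                                # one failed bootstrap sample

# SE computed without removing NAs: the NA propagates
sd(boot_ab)
#> [1] NA

# Percentile 95% CI: drop NAs, then take the 2.5% and 97.5% quantiles
quantile(boot_ab, probs = c(0.025, 0.975), na.rm = TRUE)
```

This mirrors the behavior in the output above: NA for se, z, and pvalue, but finite ci.lower and ci.upper.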
3) How to check whether NAs in SEs are really due to NAs in bootstrap samples?
We can extract the bootstrap estimates using lavInspect and ask for "boot":
fit_boot <- lavInspect(fit, "boot")
Two attributes, "error.idx" and "nonadmissible", are relevant.
"error.idx" stores the index of bootstrap samples the encountered error for whatever reasons.
In my example, estimates are NAs in one of the bootstrap samples (the 12th bootstrap sample)
(error.idx <- attr(fit_boot, "error.idx"))
#> [1] 12
(nonad <- attr(fit_boot, "nonadmissible"))
#> integer(0)
These are the estimates in the 10th to 14th bootstrap samples:
fit_boot[sort(c(error.idx - 1:2, error.idx, error.idx + 1:2)), ]
#> c a b Y~~Y M~~M
#> [1,] 0.6305563 0.9740943 0.3755701 0.01131400 0.02023379
#> [2,] 0.4621861 0.9868740 0.5601131 0.01203450 0.01634239
#> [3,] NA NA NA NA NA
#> [4,] 0.3801079 0.9639834 0.6537283 0.01060975 0.01427636
#> [5,] 0.4485607 0.9834801 0.5577716 0.00878299 0.01543722
If you see NAs in SEs, you can check whether there are NAs in some bootstrap samples. If yes, then this may explain why you see NAs in the SEs.
Note that these NAs in bootstrap samples may be "natural" and "normal." Bootstrapping involves random sampling. Therefore, even if a model poses no estimation problem when fitted to the full sample, problems can occur in some random bootstrap samples.
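Locating such samples can also be done directly on the matrix returned by lavInspect(fit, "boot"), by flagging rows that contain any NA. A sketch with a small fake matrix standing in for fit_boot:

```r
# Fake bootstrap-estimates matrix; row 3 plays the role of a failed sample
fit_boot <- rbind(c(0.63, 0.97),
                  c(0.46, 0.99),
                  c(NA,   NA),
                  c(0.38, 0.96),
                  c(0.45, 0.98))

na_rows <- which(apply(fit_boot, 1, anyNA))
na_rows        # indices of bootstrap samples with NAs
#> [1] 3
length(na_rows)  # how many samples failed
#> [1] 1
```

For a real fit, these indices should match the "error.idx" attribute.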
4) Setting iseed when asking for bootstrapping CIs.
When doing bootstrapping, regardless of the programs used (PROCESS, lavaan, etc.), I usually set the seed for the random number generator. In lavaan, this can be done using iseed (note that iseed does not work in some previous versions, so make sure you use the latest version of lavaan):
fit <- sem(model, data = Data, se = "bootstrap", bootstrap = 5000, iseed = 9)
This allows us to produce the same set of bootstrap samples across platforms, as long as the random number generator is the same across platforms.
In your case, you can try setting iseed to some number and redoing the analysis. If the NA problem occurs with iseed set to that number, the same problem should occur on other platforms as well. On the other hand, if the NA problem does not occur for that number, then you should see no NAs on other platforms either, as long as you set iseed to the same number.
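The cross-platform claim rests on the seeded RNG: the same seed produces the same resampled case indices, and hence the same bootstrap samples. A base-R illustration of the principle (this mimics resampling case indices; it is not lavaan's internal code):

```r
n <- 100  # pretend sample size

set.seed(9)
resample1 <- sample.int(n, n, replace = TRUE)  # indices of one bootstrap sample

set.seed(9)
resample2 <- sample.int(n, n, replace = TRUE)  # same seed -> same indices

identical(resample1, resample2)
#> [1] TRUE
```

Because each bootstrap sample is determined by these indices, identical seeds imply identical bootstrap estimates, so a failure (or its absence) is reproducible.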
Hope this helps.
-- Shu Fai