Monte Carlo simulation of Bayesian Approximate Measurement Invariance (BAMI)

Scott Colwell

Apr 8, 2026, 10:58:20 PM
to blavaan

I am running a Monte Carlo simulation of Bayesian Approximate Measurement Invariance (BAMI) in blavaan and want to confirm the correct implementation before committing to a large HPC run.

For a two-group single-factor CFA with p=6 items, my intended implementation is:

bcfa("eta =~ y1 + y2 + y3 + y4 + y5 + y6",
     data = data, group = "group",
     group.equal = c("loadings", "intercepts"),
     wiggle = "intercepts", wiggle.sd = 0.10,
     target = "stan")
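For context, the per-replication data generation I have in mind is sketched below. The population loadings, the 0.3 intercept shift on y3, and the group sizes are illustrative placeholders, not final choices:

```r
library(lavaan)  # for simulateData()

# Population model: equal loadings across groups; y3's intercept is
# shifted by 0.3 in group 2 so detection rates can be checked.
pop_model <- "
  eta =~ 0.7*y1 + 0.7*y2 + 0.7*y3 + 0.7*y4 + 0.7*y5 + 0.7*y6
  y1 ~ c(0, 0)*1
  y2 ~ c(0, 0)*1
  y3 ~ c(0, 0.3)*1
  y4 ~ c(0, 0)*1
  y5 ~ c(0, 0)*1
  y6 ~ c(0, 0)*1
"

# With a length-2 sample.nobs, simulateData() returns two-group data
# including a 'group' column (values 1 and 2).
data <- simulateData(pop_model, sample.nobs = c(300, 300))
```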

Three specific questions:

  1. Is this the correct syntax to implement Muthén & Asparouhov (2013) BAMI? Specifically, does wiggle = "intercepts" with wiggle.sd = 0.10 place a N(0, 0.10²) prior (i.e., variance 0.01) on the intercept differences across groups, or on the absolute Group 2 intercepts?
  2. A previous Google Groups post confirmed that blavaan's wiggle is "equivalent to the sd on the difference between parameters as Mplus applies it." Can anyone confirm this equivalence holds in the current Stan target implementation, and whether it holds when Group 1 intercept posteriors are uncertain (i.e., is the centring on the posterior mean or on a draw-by-draw basis)?
  3. For detection, I plan to extract the posterior of (ν_j(G2) − ν_j(G1)) at each MCMC draw and flag items where the 95% CI excludes zero. Is there a built-in blavaan function for extracting per-item intercept difference posteriors, or should I extract the raw MCMC draws via blavInspect(fit, "mcmc") and compute the differences manually?

Ed Merkle

Apr 9, 2026, 4:34:36 PM
to blavaan
Scott,

About your #1 and #2: I don't think that blavaan's current implementation is exactly the same as Mplus's. For blavaan with target = "stan", the basic idea is that one parameter is "fully" free, and the other parameters receive a prior whose mean is the fully free parameter and whose SD is small. For your intercepts, it would be something like

intercept1_g1 ~ normal(0, 10)
intercept1_g2 ~ normal(intercept1_g1, .01)

I think the results will often be nearly the same as putting priors on differences, but I would not say it is fully equivalent.

About your #3: there is not a built-in function for differences. I would do something similar to what you suggest, like do.call("rbind", blavInspect(fit, 'mcmc')) and then subtract the appropriate columns from each other.
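For example (the intercept column names below are guesses; check colnames() on the combined draws matrix for the labels blavaan actually uses in your fit):

```r
# Combine the per-chain mcmc objects into one draws matrix
# (rows = posterior draws, columns = parameters).
draws <- do.call("rbind", blavInspect(fit, "mcmc"))

# Hypothetical intercept labels -- inspect colnames(draws) and
# substitute the real ones for your model.
g1_cols <- paste0("nu_", 1:6, "_g1")
g2_cols <- paste0("nu_", 1:6, "_g2")

# Draw-by-draw intercept differences, Group 2 minus Group 1.
diffs <- as.matrix(draws[, g2_cols]) - as.matrix(draws[, g1_cols])

# 95% credible interval per item; flag items whose CI excludes zero.
ci <- t(apply(diffs, 2, quantile, probs = c(0.025, 0.975)))
flagged <- which(ci[, 1] > 0 | ci[, 2] < 0)
```

Because the subtraction is done within each draw, the resulting interval reflects the joint posterior uncertainty in both groups' intercepts.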

Ed