This one piqued my interest, so I can't help but post a solution I worked out, although it may seem like more work than it is worth. I hope it helps.
Interaction/moderation means the effect (slope) of A1 depends on the level of A2. You can switch the roles of A1 and A2 without loss of generality -- it just depends on how you want to interpret it. The important point is that this regression equation:
ŷ = β0 + β1A1 + β2A2 + β12(A1)(A2)
can be expressed equivalently by factoring out the common term A2:
ŷ = β0 + β1A1 + (β2 + β12A1) × A2
So the slope of A2 is a function of A1 -- specifically, the slope = β2 + β12A1. You can capitalize on this parameterization to generate subsets of data for one level of A1 iteratively from min(A1) to max(A1) in some number of increments, such that the subsamples add up to N = 500. For each iteration, you only need to include A2 as a variable in your model syntax, and after data generation you can save the variable A1 as a constant for that subset of data.
Your example specifies the latent intercept regressed on A1, A2, and their interaction (with the intercept == 0):
i_hrb = 0 + 0.3(A1) - 0.2(A2) + 0.1(A1)(A2)
which can be expressed equivalently as:
i_hrb = 0.3(A1) + [-0.2 + 0.1(A1)] × A2
So, for example, when A1 == 1, the equation simplifies to:
i_hrb = 0.3(1) + [-0.2 + 0.1(1)] × A2 = 0.3 - 0.1(A2)
And when A1 == 2, the equation simplifies to:
i_hrb = 0.3(2) + [-0.2 + 0.1(2)] × A2 = 0.6 - 0(A2) = 0.6
Using paste(), you can loop through the different levels of A1, constructing the model syntax with the appropriate values of the intercept and slope for the i_hrb equation. To keep the illustration simple, I will generate values of A1 uniformly between 1 and 10, with n = 5 at each integer value of A1, for a total N = 50. (You can generate n = 50 for each value of A1 to get N = 500, or you can make A1 look more continuous by keeping n = 5 and using 100 more fine-grained cut-points across the range of A1, but you would probably get similar results either way -- most behavioral research relies on ordinal Likert-type data anyway.)
## function to generate subset of data at fixed level of A1
library(lavaan)
getSubsample <- function(A1, N) {
  B0 <- 0.3 * A1           # conditional intercept of i_hrb at this level of A1
  B1 <- -0.2 + 0.1 * A1    # conditional A2-slope at this level of A1
  theDGM <- paste0('i_hrb =~ 1*hrb1 + 1*hrb2 + 1*hrb3 + 1*hrb4
                    i_hrb ~ ', B0, '*1 + ', B1, '*A2')
  someData <- simulateData(model = theDGM, model.type = 'growth',
                           standardized = TRUE, sample.nobs = N)
  someData$A1 <- A1  # save A1 as a constant for this subset
  someData
}
getSubsample(A1 = 2, N = 10)
## now apply it to the range of A1 == 1:10 (with small N to illustrate)
(dataList <- mapply(getSubsample, A1 = 1:10, N = 5, SIMPLIFY = FALSE))
## now combine the list of subsamples
(myData <- do.call(rbind, dataList))
The distribution of the fixed exogenous variables A1 and A2 is not assumed to be normal (or anything else) unless you choose fixed.x = FALSE, so it doesn't really matter how A1 is distributed. If you really want A1 to be standard normal, then instead of generating subsamples uniformly across the range of A1, you can generate as many values as you would expect in each of those bins. Basically, you divide the standard normal curve into k segments/bins, calculate the corresponding cumulative probabilities for the bins, and multiply the total sample size by those probabilities to get subsample sizes in each bin.
nBins <- 10  # number of segments to divide the standard normal curve into
(bins <- seq(-3, 3, length.out = nBins + 1))
(probs <- pnorm(bins))      # cumulative probabilities at the bin boundaries
(binprobs <- diff(probs))   # length(binprobs) == nBins
(binsize <- round(binprobs*500))  # sum(binsize) is approximately 500
(newbins <- seq(-3, 3, length.out = nBins)) # length(newbins) == nBins
Now, you can apply the getSubsample function again, using the newbins and binsize objects as arguments:
set.seed(123)
dataList <- mapply(getSubsample, A1 = newbins, N = binsize, SIMPLIFY = FALSE)
lapply(dataList, head) # check the top of each subsample
sapply(dataList, nrow) # check the subsample sizes
myData <- do.call(rbind, dataList) # stack the dataList
Figuring out how to standardize is pretty tricky when you have interaction terms (whose total variance is a nontrivial function of the joint distribution of A1 and A2). In my example above, the HRB variables had SDs around 1.4, similar to your example, which yields observed variances > 2:
fitted(growth(theDGM, data = myData))
and a model-implied variance > 1 for the latent intercept
lavInspect(growth(theDGM, data = myData), "cov.lv")
I would strongly suggest setting all population parameters manually in the syntax (especially because it doesn't make sense to standardize the latent growth factors -- you do not expect their means to be zero!). Try thinking of change in units of the actual variables you will eventually analyze, choosing effect sizes according to what would be meaningful (i.e., a change that would be considered noticeable in real life). A standardized effect size is just a real effect size scaled by the variability in the data, but that variability can depend on various aspects of the study design (e.g., the (sub)populations of interest, the measurement instruments, etc.). If you really are agnostic about what a truly (practically) significant effect would look like and what variability to expect in your data, you can always check your average standardized estimates across 10 trial replications, then adjust your population parameters by trial and error until you are in the ballpark you want. Just a suggestion.
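To illustrate that last suggestion, here is a minimal sketch of the trial-and-error loop. It assumes the getSubsample() function, newbins, and binsize objects from above, plus a hypothetical analysis-model syntax anMod (your model written without fixed population values) -- anMod is just an illustrative name, not something defined earlier:

```r
library(lavaan)

nReps <- 10  # number of trial replications
stdEsts <- replicate(nReps, {
  dataList <- mapply(getSubsample, A1 = newbins, N = binsize, SIMPLIFY = FALSE)
  myData <- do.call(rbind, dataList)
  fit <- growth(anMod, data = myData)  # anMod: hypothetical analysis syntax
  standardizedSolution(fit)$est.std    # standardized estimates, this replication
})
rowMeans(stdEsts)  # average standardized estimates across replications
```

If those averages are not in the ballpark you want, adjust the population parameters inside getSubsample() and rerun.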
Terry