Moderation using lavaan package in R


Shubam Sharma

Nov 30, 2016, 8:49:28 PM
to lavaan

I am writing the model syntax for my model to run in lavaan package using R. I have a factor created (which is my independent variable), 4 dependent variables (I will test 4 different models for each separate DV), and I have four moderating variables that I want to include in each model. All variables except the factor are observed variables.

I am having difficulty with how to include the moderating variable effects in the syntax. Would it look something like this below? Also, how can I have variables like age and sex as covariates in all models? Thank you for your guidance.

Y = DV; F1 = factor; X1-X4 = moderating variables 1 through 4

Y ~ F1 + F1*X1 + F1*X2 + F1*X3 + F1*X4

Terrence Jorgensen

Dec 1, 2016, 6:21:28 AM
to lavaan

Y ~ F1 + F1*X1 + F1*X2 + F1*X3 + F1*X4


No, the "*" operator is not for defining multiplicative terms.  The lavaan model syntax is not a formula object:

?formula
?lavaanify

What your "F1*X1" does is create a label "F1" for the slope of X1.  You can use the ":" operator to define multiplicative terms:

F1 + X1 + F1:X1

But this only applies to observed variables because latent variables are unobserved, so there are no values to be multiplied.  To define an interaction effect of a latent variable, you need to use products of indicators from the two interacting variables to define a latent interaction factor.  The semTools package provides a convenience function for creating product terms:

?indProd

But you should read the articles referenced on that help page to learn more about how to build a model like that.  It involves many nuanced considerations.


how can I have variables like age and sex as covariates in all models? 


Just add them as predictors on the right-hand side of the regression equation.

Y ~ F1 + X1 + age + sex


Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Shubam Sharma

Dec 1, 2016, 4:04:16 PM
to lavaan
Hello Dr. Jorgensen,

Thank you for your helpful and prompt reply to my questions. I greatly appreciate your assistance and am very grateful for this active learning community.

I have installed the semTools package and read through the articles referenced on the indProd help page. To create the latent interaction factor, I am trying the following code:

interaction <- indProd(dataset, var1=1:4, var2=5:8)

where 1:4 are the four variables that form factor1 (the unobserved latent factor) and 5:8 are the four observed moderator variables. When I try this, I receive an error message saying that 'x' must be numeric. I have passed the name of my dataset in the 'x' argument position. I know that there are some factor vectors in the dataset, and I am wondering if simply converting these to numeric vectors would solve the issue. I tried this:

data$scode[data$sex=="Male"] <- 1
data$scode[data$sex=="Female"] <- 2

And this is not working. I am a little stuck here, and any assistance would be sincerely appreciated. I apologize for my novice understanding of the lavaan and semTools packages. These are very new to me and I am trying my best to learn.

Thank you so much.

Shubam Sharma

Dec 1, 2016, 10:10:10 PM
to lavaan
You can disregard my previous question as I've looked into it further. Here are my new inquiries:

First of all, what specifically does the indProd function return as a result? Will it give me the interaction term I am trying to create between the latent factor and each moderator variable?

How could I use indProd to obtain each interaction (such as F1:X1)? My data set consists of 14 variables (4 of which make up the latent factor, 4 DVs, for each of which I am building a separate model, and 4 moderators).

interaction1 <- indProd(dataset, var1=9:12, var2=1, match = FALSE, meanC = TRUE, residualC = FALSE, doubleMC = TRUE, namesProd = NULL)

where 9:12 are the variables that form my latent factor and 1 is the first moderator variable. When I do this, instead of it showing the var1*var2 interaction (F1:X1), it gives me the interactions of each specific variable forming the latent factor with the moderator (9*var2, 10*var2, 11*var2, and 12*var2, instead of var1*var2). How can I get around this?

Thank you again for your assistance. I greatly appreciate it. 


Terrence Jorgensen

Dec 2, 2016, 9:57:50 AM
to lavaan
First of all, what specifically does the indProd function return as a result? Will it give me the interaction term I am trying to create between the latent factor and each moderator variable?

As the top of the help page indicates, indProd() returns the products of the observed indicators.  As I said in my previous post, you cannot calculate products with a latent variable because you haven't observed it.  You use the product indicators as indicators of a latent variable that represents the interaction term, as described in the articles cited on the help page.  There is no avoiding the necessity of reading about the method before you try to apply it. 
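For concreteness, here is a small sketch of what that looks like with simulated data (the column names x1-x3 and m1-m3 are made up for illustration):

```r
library(semTools)  # provides indProd()

## Simulated indicators for two latent variables (3 each)
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100),
                  m1 = rnorm(100), m2 = rnorm(100), m3 = rnorm(100))

## indProd() returns the original data frame with double-mean-centered
## product-indicator columns appended (here 3, one per matched pair)
dat2 <- indProd(dat, var1 = 1:3, var2 = 4:6,
                match = TRUE, doubleMC = TRUE)
ncol(dat2)   # 6 original columns + 3 product indicators
names(dat2)  # inspect the names indProd() gave the products
```

Those appended product columns are then used as indicators of a latent interaction factor in the model syntax, as the cited articles describe.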

Shubam Sharma

Dec 2, 2016, 12:29:48 PM
to lavaan
Thank you for this clarification. As this is my first time building a model with a latent variable interaction, the conceptual considerations are slowly making sense. When I read through the articles referenced on the help page yesterday, I still felt I needed more clarification; it is not that I did not read them, and I apologize that it came off that way. Article 1 encourages the unconstrained approach, and I see its value, as it is robust against potential violations of the multivariate normality assumption. I know that I cannot use the matched-pair strategy, which would generally be easier, because I do not have an equal number of indicators for my latent factor and observed variables. However, the unconstrained and mean-centering approaches can give me more peace of mind as I run my analyses that I am not violating major normality assumptions. I now understand what indProd does conceptually: it uses the products of the mean-centered indicators to create the indicators of the latent interaction; then I can utilize this term in each of my models, I assume. So, in my above post, what I can do is use

9*var2*10*var2*11*var2*12*var2

and in this way I will have the latent product term that has accounted for the interaction between the indicators of the original latent factor and the original observed variable.

Any affirmation or non-affirmation is greatly appreciated. Thank you again for taking the time to assist with this and remaining patient with my non-expertise in this area. 

Alex Schoemann

Dec 2, 2016, 1:36:43 PM
to lavaan
Hi Shubam,

You're most of the way there. I think the missing piece is the construction of a latent variable representing the interaction (not another product term). Each of the product terms returned from the indProd function should be used as indicators of a latent variable in your model. The path from this latent variable to your outcome variables provides a test of the interaction. With the double mean centering approach you need to be sure that your interaction latent variable is allowed to covary with the other predictor latent variables. 

Alex

Shubam Sharma

Dec 4, 2016, 1:39:33 PM
to lavaan
Hello Dr. Schoemann,

Thank you so much for your assistance with this. I can't tell you how much I appreciate it! I was wondering if you could provide me with some guidance regarding a new error I am getting. I have explained below:

I ran the following command (9*var2*10*var2*11*var2*12*var2) and stored the output as prodInd1 to account for the interaction of my latent factor and observed variables. I did this 3 more times for all 4 observed moderators (output stored in variables entitled prodInd1, prodInd2, prodInd3, and prodInd4).

I then defined my model as:

model1 <- '
## Regressions
Y ~ Factor + prodInd1 + prodInd2 + prodInd3 + prodInd4 + Age + Sex

## Latent variable definitions
Factor =~ x1 + x2 + x3 + x4

## Covariances between moderators
x5 ~~ x6
x7 ~~ x8
'

After that I ran the following sem function:

fit <- sem(model1, data=data)

I encountered the following error:

Error in complete.cases(data[all.idx]) : 
  invalid 'type' (list) of argument

Is there any way to work around this error? Thank you so much again.

Best wishes,

Shubam

Shubam Sharma

Dec 4, 2016, 2:26:16 PM
to lavaan
Forgot one additional piece of information.

Here is the indProd function I ran:

interaction1 <- indProd(mys2, var1=9:12, var2=1, match = FALSE, meanC = TRUE, residualC = FALSE, doubleMC = TRUE, namesProd = NULL)

I did this all 4 times where var2=each of my moderating variables. I then multiplied the output columns (9*var2 * 10*var2 * 11*var2 * 12*var2) and those respective outputs were stored in prodInd1, prodInd2, prodInd3, prodInd4.

Terrence Jorgensen

Dec 5, 2016, 5:40:50 AM
to lavaan
Y ~ Factor + prodInd1 + prodInd2 + prodInd3 + prodInd4 + Age + Sex

No, you need to use the product-indicators as indicators of the interaction factor.

## Latent variable definitions
Factor =~ x1 + x2 + x3 + x4
Interaction =~ prodInd1 + prodInd2 + prodInd3 + prodInd4

## residual correlations between items and their product terms
x1 ~~ prodInd1
x2 ~~ prodInd2
x3 ~~ prodInd3
x4 ~~ prodInd4

## Regressions
Y ~ Factor + Interaction + Age + Sex

I can't tell whether prodInd1-4 are interactions with Sex or with Age, but you need to do the same with both.  Likewise, you need to allow the residuals of the Sex-indicator products to correlate with the residuals of the Age-indicator products.  Try giving them more meaningful names so you can tell them apart.
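To tie this back to the indProd() step, here is a minimal end-to-end sketch with simulated data (all variable names are hypothetical, and namesProd is used so the product columns have predictable names; treat it as a sketch, not a definitive model):

```r
library(semTools)  # loads lavaan; provides indProd()

## Simulate: latent factor with 4 indicators, one observed moderator (Age)
set.seed(123)
N <- 300
f <- rnorm(N)
Age <- rnorm(N)
dat <- data.frame(x1 = f + rnorm(N), x2 = f + rnorm(N),
                  x3 = f + rnorm(N), x4 = f + rnorm(N), Age = Age)
dat$Y <- f + Age + 0.5 * f * Age + rnorm(N)

## Double-mean-centered products of each factor indicator with Age
dat2 <- indProd(dat, var1 = 1:4, var2 = 5, match = FALSE,
                doubleMC = TRUE, namesProd = paste0("p", 1:4))

model <- '
  ## Latent variable definitions
  Factor      =~ x1 + x2 + x3 + x4
  Interaction =~ p1 + p2 + p3 + p4

  ## residual correlations between items and their product terms
  x1 ~~ p1
  x2 ~~ p2
  x3 ~~ p3
  x4 ~~ p4

  ## Regressions
  Y ~ Factor + Interaction + Age
'
fit <- sem(model, data = dat2)
summary(fit, standardized = TRUE)
```

By default, sem() lets the exogenous latent variables (Factor and Interaction) covary, which is what the double-mean-centering approach requires.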

Terrence Jorgensen

Dec 5, 2016, 5:43:20 AM
to lavaan
I then multiplied the output columns (9*var2 * 10*var2 * 11*var2 * 12*var2) and those respective outputs were stored in prodInd1, prodInd2, prodInd3, prodInd4.

Why are you multiplying the indicators times 9, 10, 11, 12, and each other 4 times (i.e., raising the variable to the 4th power)?  The indProd() function already returns the product terms; you don't need to make more products from the product terms.

franz...@gmail.com

Aug 2, 2018, 4:25:11 AM
to lavaan
Hello,

I have a question related to that:

- does it make sense to orthogonalize when I have an observed predictor and an observed moderator, and both of them lack normality (as a consequence, I assume they are not bivariate normally distributed either; therefore, I guess from Little 2006 that orthogonalizing works best)?

- does it make sense to do the same for an observed predictor and a latent moderator with 3 items?

- Oh, and by the way @Shubam: how did you solve the problem "When I try this, I am receiving an error message which is saying that 'x' must be numeric. I have defined my dataset and am using the name which I assigned it in the 'x' argument position."?

thank you in advance!

Franzi

Terrence Jorgensen

Aug 2, 2018, 5:16:05 AM
to lavaan
- does it make sense to orthogonalize when I have an observed predictor and an observed moderator, and both of them lack normality (as a consequence, I assume they are not bivariate normally distributed either; therefore, I guess from Little 2006 that orthogonalizing works best)?

No, absolutely not.  It does nothing but invalidate the interpretations of lower-order effects.


Residual-centering (or, even better, double-mean-centering, especially when variables are not normal) is only an ad hoc method used for interactions involving latent variables, for which we cannot actually calculate the product terms used to represent interaction effects.  When you have observed predictors, you should use their simple product terms in the model.  In lavaan, you can simply indicate that a product term should be included using the colon operator:

syntax <- ' y ~ x + z + x:z '
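A self-contained sketch of that with simulated data (the names x, z, and y are made up):

```r
library(lavaan)

## Simulate y with a known interaction: y = 1 + 2x + 3z - 0.5xz + e
set.seed(42)
dat <- data.frame(x = rnorm(200), z = rnorm(200))
dat$y <- 1 + 2*dat$x + 3*dat$z - 0.5*dat$x*dat$z + rnorm(200)

## ":" tells lavaan to compute the observed product term x*z itself
fit <- sem('y ~ x + z + x:z', data = dat)
parameterEstimates(fit)  # the x:z slope recovers the -0.5 interaction
```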

- does it make sense to do the same for an observed predictor and a latent moderator with 3 items?

Yes, you can calculate the product between the observed predictor and each of the moderator's indicators, and I would recommend double-mean-centering.

FYI, Bayesian SEM allows these products to be calculated between latent variables, which are drawn as parameters from the posterior.  This negates the need for all of this clumsy centering to fit an ad hoc model that is not actually a data-generating model (i.e., an interaction factor is not actually a separate factor that affects indicators, which are also not random variables themselves but products of other random variables in the model).  If you use the blavaan package to specify a lavaan model without the latent interaction term, you have the option to save the JAGS or Stan script that fits the Bayesian model, which you can edit to add the latent interaction by including the product in the formula (and adding that slope to your list of parameters).

- Oh and by the way @Subam: how did you solve the problem "When I try this, I am receiving an error message which is saying that 'x' must be numeric. I have defined my dataset and am using the name which I assigned it in the 'x' argument position."?

Observed exogenous categorical variables must be represented using numeric dummy codes.
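For example, a character or factor variable like sex can be recoded to a numeric dummy before fitting (a sketch; which level gets the 1 is arbitrary):

```r
dat <- data.frame(sex = c("Male", "Female", "Female", "Male"))

## 0/1 dummy code: 1 = Female, 0 = Male
dat$sex.d <- ifelse(dat$sex == "Female", 1, 0)
dat$sex.d  # 0 1 1 0
```

The numeric column dat$sex.d (not the original factor) is what should appear in the lavaan model syntax.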

franz...@gmail.com

Aug 2, 2018, 1:10:17 PM
to lavaan

Thank You, Terrence!

- does it make sense to orthogonalize when I have an observed predictor and an observed moderator, and both of them lack normality (as a consequence, I assume they are not bivariate normally distributed either; therefore, I guess from Little 2006 that orthogonalizing works best)?

No, absolutely not.  It does nothing but invalidate the interpretations of lower-order effects.


But regarding multicollinearity - doesn't it make sense to set the covariance between X and X:M (resp., M and X:M) to 0? (I hope it is not written in the article you linked, because I do not have access to it :/)

franz...@gmail.com

Aug 3, 2018, 3:59:52 AM
to lavaan


syntax <- ' y ~ x + z + x:z '
 
In my case, there is also another problem with this method, because I want to fit the model with a lavaan.survey object with cluster-robust standard errors and bootstrapping. Unfortunately, R does not seem to like this combination. Therefore, I fitted your method and the manually calculated products to a normal data frame to compare the results. Here, of course, the fit indices change due to different dfs (I guess), and the SEs are a bit lower for your method...

Are you aware of a more elegant way of combining moderation with cluster-adjusted SEs?

Terrence Jorgensen

Aug 3, 2018, 8:10:54 AM
to lavaan
But regarding multicollinearity - doesn't it make sense to set the covariance between X and X:M, resp., M and X:M to 0?

X:M is supposed to be multicollinear -- it is a product of X and M, so of course it is related to them.  That is not a problem because linear regression does not assume the predictors are uncorrelated, so the parameter estimates are unbiased.  The "problem" of (multi)collinearity is that the SEs increase as predictors become more correlated.  Again, that is supposed to happen, because predictors that share variance can't use the shared variance independently to predict the outcome, so sample-fluctuations result in more sampling variance of predictors' effects.  When different predictors are highly correlated, it makes sense to prioritize which to include as a predictor in the model because it makes no sense to try to use redundant variables to do the same job.  But with interaction terms, the multicollinearity is, in a sense, there by design, and it is the reason why we generally have less power to detect higher-order effects like interactions than to detect lower-order simple effects.  This happens in ANOVA, too, and when groups are unbalanced, the problem is worse because the grouping variables are not independent (uncorrelated) variables.

Now, if you are working with observed predictors, then mean-centering is often recommended (e.g., by Cohen, Cohen, Aiken, & West in their regression textbook) to minimize the correlation of X:M with its parents.  Mean-centering might be useful in general, so that the intercept and the simple effects of each predictor are interpretable effects (holding the other variable constant at its mean), but it does not really solve the "problem" of multicollinearity:


Residual-centering is, in my opinion, worse because it does not enhance but rather obscures the interpretation of simple effects, and it hides rather than solves the "problem" of multicollinearity.  If you compare uncentered predictors to mean-centered predictors and to residual-centering the product term, you will see that the estimate(d size) of the interaction effect is identical across the methods, as is the SE of the effect (and thus the t test and p value).  So there is no advantage to centering with respect to power or interpretation of the interaction term.

N <- 50
set.seed(123)
foo <- data.frame(x = rnorm(N, mean = 3, sd = 1),
                  m = rnorm(N, mean = 3, sd = 1))
foo$y <- 0 + 6*foo$x + 2*foo$m - 2*foo$x*foo$m + rnorm(N)

## no centering
mod <- lm(y ~ x*m, data = foo)
summary(mod)

## mean-centering
foo$x.mc <- foo$x - mean(foo$x)
foo$m.mc <- foo$m - mean(foo$m)
mod.mc <- lm(y ~ x.mc*m.mc, data = foo)
summary(mod.mc)

## residual-centering
foo$xm <- foo$x*foo$m
mod.xm <- lm(xm ~ x + m, data = foo)
foo$xm.rc <- resid(mod.xm)
mod.rc <- lm(y ~ x + m + xm.rc, data = foo)
summary(mod.rc)

## Notice the un- and mean-centered models make identical predictions.
## Only the x-axis changes.
library(rockchalk)
plotSlopes(mod, plotx = "x", modx = "m")
plotSlopes(mod.mc, plotx = "x.mc", modx = "m.mc")

Mean-centering provides estimates of simple effects of each predictor holding the other predictor constant at its mean, and the intercept is the mean outcome when both predictors are at their means.  But residual-centering does not yield estimates of the simple effects at all; it yields the predictors' effects IF THERE WERE NO INTERACTION in the model.  Not only is that not the case, but the residual-centering model yields smaller SEs for the simple effects of X and M than the main-effects-only model does, so residual-centering can actually inflate the Type I error rates for those effects!

## main-effects only
mod.main <- lm(y ~ x + m, data = foo)
summary(mod.main)
summary(mod.rc) # smaller SEs


Again, though, if you are estimating an interaction involving a latent variable, then you have little recourse other than to use some kind of centering for the product indicators just to get the model to work (unless you use LMS in Mplus or the R package nlsem).  Consistent with the default of the indProd() function in semTools, I recommend double-mean-centering, for reasons outlined in the article referenced on the ?indProd help page.

Lin, G. C., Wen, Z., Marsh, H. W., & Lin, H. S. (2010). Structural equation models of latent interactions: Clarification of orthogonalizing and double-mean-centering strategies. Structural Equation Modeling, 17(3), 374–391. doi:10.1080/10705511.2010.488999

Terrence Jorgensen

Aug 3, 2018, 8:12:31 AM
to lavaan
want to fit the model with a lavaan-survey-object with cluster-robust standard errors and bootstrapping.

If you have robust SEs, why do you additionally need bootstrapping? 

Are you aware of a more elegant way of combining moderation with cluster-adjusted SEs?

Can't you simply fit the models and get robust SEs?  I am unaware of an issue here.

franz...@gmail.com

Aug 5, 2018, 6:00:57 AM
to lavaan
Thank you for the detailed answer.

The thing is that, besides a couple of latent as well as continuous moderators, I have multiple predictors (2 observed continuous variables), mediators (4 latent variables), and outcomes (2 latent variables) in the model. Additionally, the participants of the study are "nested" in teams by nature. As I have ruled out MSEM due to missing hypotheses on the team level, I wanted to adjust the SEs to the design using lavaan.survey. Unfortunately, my data are also not normally distributed and I have some missing values. So my ideal would be to have:
- a cluster-robust estimator which is also robust to nonnormality, like WLS or MLR
- bootstrapping to assess the effects of multiple nonnormal mediators properly, BUT, reading this now and having done something else yesterday, I can see that it is probably not necessary anymore when I use a cluster-robust estimator
- a way of coping with the missingness, by an appropriate estimator, which would be WLS because MLR needs complete data
So far my thoughts about the multiple mediation (please feel free to comment), and I guess this might work for the moderation too.
But double-mean-centering the latent moderators would be necessary, right?

The problem is that I cannot use WLS without getting warnings. I can only fit the model with MLM to a lavaan.survey object of this kind (des2.boot):

des2 <- svydesign(ids=~Vkst, probs =~ 1, data = fin.team)
des2.boot <- as.svrepdesign(des2, type = "bootstrap", replicates = 5000)

Otherwise, I receive warnings. Are the results still trustworthy in this case?

franz...@gmail.com

Aug 5, 2018, 7:56:03 AM
to lavaan
From an Mplus perspective, I also thought of MLR as the estimator, but then I cannot connect it to the design...

Sorry for this confused posting :/

Terrence Jorgensen

Aug 7, 2018, 8:00:20 AM
to lavaan
I wanted to adjust the SEs to the design using lavaan.survey. Unfortunately, my data are also not normally distributed and I have some missing values

Not necessary anymore.  lavaan 0.6-2 provides cluster-robust SEs in tandem with FIML for missing (continuous) data.
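A sketch of the corresponding call with toy data (the clustering variable "team" and the one-factor model are hypothetical; check ?lavOptions for details):

```r
library(lavaan)

## Toy data: 50 teams of 10 people, one factor with three indicators
set.seed(7)
f <- rnorm(500)
dat <- data.frame(team = rep(1:50, each = 10),
                  y1 = f + rnorm(500),
                  y2 = f + rnorm(500),
                  y3 = f + rnorm(500))

fit <- sem('fac =~ y1 + y2 + y3', data = dat,
           cluster   = "team",  # cluster-robust SEs
           estimator = "MLR",   # robust to nonnormality
           missing   = "fiml")  # FIML for missing (continuous) data
summary(fit)
```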

- bootstrapping to assess the effects of multiple nonnormal mediators properly, BUT, reading this now and having done something else yesterday, I can see that it is probably not necessary anymore when I use a cluster-robust estimator

It should not be.  The sampling distributions of products of parameters are asymptotically normal, they just require larger N than individual parameters to be "asymptotic".  So the delta method is probably sufficient if you have N > 200. 


MLR needs complete data

Wrong.  MLM (Satorra-Bentler correction) and MLR (Yuan-Bentler correction) are asymptotically equivalent, but only MLM requires complete data.

But double-mean centering the latent moderators would be necessary, right?

Are you talking about using product-indicators of latent interactions?  Yes, double-mean-centering is most advantageous.

The problem is that I cannot use WLS without getting warnings. I can only fit the model with MLM to a lavaan survey-object of this kind (des2.boot):

WLS is not available with lavaan.survey, because the survey package does not have the functionality needed to handle categorical indicators.

franz...@gmail.com

Aug 9, 2018, 2:57:36 AM
to lavaan
Thank you so much! I tried including cluster = "team" in the sem-function!


Not necessary anymore.  lavaan 0.6-2 provides cluster-robust SEs in tandem with FIML for missing (continuous) data.

Yes! I definitely like that new feature, and the fit measures as well as the estimates I receive do look great, BUT the output is also accompanied by this little friend:

Warning message:
In lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats,  :
  lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= -3.451325e-17) is smaller than zero. This may be a symptom that
    the model is not identified.
Sounds weird, but how much do I have to care about this? Is it possible that this is due to unequal cluster sizes? I get the same warning when I run a CFA on the included variables only...

- bootstrapping to assess the effects of multiple nonnormal mediators properly, BUT, reading this now and having done something else yesterday, I can see that it is probably not necessary anymore when I use a cluster-robust estimator

It should not be.  The sampling distributions of products of parameters are asymptotically normal, they just require larger N than individual parameters to be "asymptotic".  So the delta method is probably sufficient if you have N > 200. 


Thank you! My sample size is N=529 (with FIML) and only complete patterns N=314.



franz...@gmail.com

Aug 9, 2018, 3:21:53 AM
to lavaan
And when I request the eigenvalues, I get the following (which does not look negative at all):

> eigen(inspect(gran1.tot, "cov.lv"))$values
[1] 2.5662824 0.6648205 0.3996176 0.3141911 0.2514984 0.1061096

Terrence Jorgensen

Aug 9, 2018, 3:34:30 AM
to lavaan
And when I request the eigenvalues, I get the following (which does not look negative at all):

> eigen(inspect(gran1.tot, "cov.lv"))$values
[1] 2.5662824 0.6648205 0.3996176 0.3141911 0.2514984 0.1061096

That is the model-implied covariance matrix (of the variables), not the covariance matrix of the estimated parameters (vcov).

eigen(vcov(gran1.tot))

franz...@gmail.com

Aug 9, 2018, 3:46:57 AM
to lavaan
Thank you!
Ok, now I can see it, but what can I do about it? I do not have this problem when I ignore the clusters. So what does that mean (also content-wise)? Are the clusters not meaningful and should therefore be ignored?

Terrence Jorgensen

Aug 9, 2018, 3:53:32 AM
to lavaan
Ok, now I can see it, but what can I do about it? I do not have this problem when I ignore the clusters. So what does that mean (also content-wise)? Are the clusters not meaningful and should therefore be ignored?

No, it is just a warning that the model might not be identified.  Is there any other evidence that it is not identified?  It runs without taking clustering into account, so I wonder whether this is likely to occur when you have some small clusters, or not many clusters... I don't have a lot of experience with cluster-robust SEs.  Posting on CrossValidated or SEMNET might solicit a response from someone with more experience in this area (unless Stas is watching this thread already...).

franz...@gmail.com

Aug 9, 2018, 4:01:44 AM
to lavaan
Well, yes, that was what I thought too: I have 50 clusters, and their size varies between 7 and 32 people. So maybe that is why. Apart from that, the fit measures do look great (even when I include that cluster command...), and the parameter estimates also point in a similar direction as when I do not include the cluster argument...

franz...@gmail.com

Aug 10, 2018, 1:47:46 AM
to lavaan
For everyone who is following: I got the suggestion that it might be due to perfect correlations between some variables, or to no variance on some variables within some clusters.
Of course, in small clusters this is more likely to happen (especially if only one person completes the second measurement).
In my case, I found a lot of variables to be perfectly correlated with each other. Omitting them, or the groups where this happens, would mean a bias in the sample and a completely new model.
As I do have a big sample, the problem is not apparent if the cluster structure is ignored. So I will go with this, although it is less sophisticated...
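For anyone wanting to check their own data for this, a sketch of a per-cluster screen for constant (zero-variance) variables; the column names, the "team" cluster variable, and the tolerance are all made up:

```r
## Toy data: variable "b" is constant within cluster 2
set.seed(1)
dat <- data.frame(team = rep(1:5, each = 4),
                  a = rnorm(20), b = rnorm(20))
dat$b[dat$team == 2] <- 3

## For each cluster, list variables with (near-)zero within-cluster variance
flags <- lapply(split(dat[c("a", "b")], dat$team), function(g)
  names(g)[sapply(g, function(v) var(v, na.rm = TRUE) < 1e-12)])
Filter(length, flags)  # only clusters with a constant variable remain
```

A similar loop over cor() within each cluster would flag perfectly correlated pairs.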

m.mar...@campus.unimib.it

Sep 5, 2018, 9:29:53 AM
to lavaan
Hello Dr. Jorgensen, 
I have a question related to some moderated path analyses I am testing using the sem function in R. I wish to probe the moderation effect by observing the slopes of the predictor at different levels of the moderator. So far, I could not find a function that would allow me to do that. I note that I have only observed variables and that I am testing two moderators simultaneously.
This is the code:

q<-'SocialWithdrawal~ZExclusion+ZAcc_it+ZAcc_c+ZExclusion:ZAcc_it+ZExclusion:ZAcc_c'
q1<-sem(q, data = Esclusione_sociale_migranti)
summary(q1, fit.measures=TRUE, rsq=TRUE, standardized=TRUE)

The analysis showed that the only significant moderation is "ZExclusion:ZAcc_it"; how can I run the simple-slopes analysis on this fitted model?
I tried the "probe2WayRC" function, but later I realized it works only for latent interactions.

Do you have any suggestions?
Thank you very much.
Best, 
Marco