Dear Lavaan Group
I would like to test 4 different latent variables as mediators in the model structure shown in the attached figure. In total, I am therefore testing four models that all have the same structure but differ in which latent variable serves as the mediator (all other variables stay the same).
The four latent variables hypothesized to act as mediators are self-esteem, optimism, anxiety and depression (their indicator variables are suppressed in the attached figure).
When testing each model, I followed the instructions given in the lavaan mediation tutorial (http://lavaan.ugent.be/tutorial/mediation.html). For example, the code with self-esteem (SE) as the mediator looks like this:
sem_med_se <- "
# Latent constructs
SE =~ t4_belief_qualities_trait + t4_belief_useless_trait + t4_belief_useful_trait + t4_belief_positive_trait
SCIM =~ t4_scim_location
# Fix the residual variance of the single-indicator latent variable
t4_scim_location ~~ 23.1844*t4_scim_location
# Regressions
# Direct effects (c paths) and covariates
SCIM ~ c1*t4_decubitus + c2*t4_urin_infection + c3*t4_pulmonary + c4*t4_cardiac_function + c5*t4_bowel_care + c6*t4_pain + age_sci_group + sex + t4_partet + t4_compl
# Mediator regressed on the predictors (a paths)
SE ~ a1*t4_decubitus + a2*t4_urin_infection + a3*t4_pulmonary + a4*t4_cardiac_function + a5*t4_bowel_care + a6*t4_pain
# Outcome regressed on the mediator (b path)
SCIM ~ b*SE
# Indirect effects (a*b)
a1b := a1*b
a2b := a2*b
a3b := a3*b
a4b := a4*b
a5b := a5*b
a6b := a6*b
# Total effects
total1 := c1 + (a1*b)
total2 := c2 + (a2*b)
total3 := c3 + (a3*b)
total4 := c4 + (a4*b)
total5 := c5 + (a5*b)
total6 := c6 + (a6*b)
"
sem_fit_med_se <- sem(sem_med_se, estimator = "WLSMV", data = data_imputed)
summary(sem_fit_med_se, standardized = TRUE, fit.measures = TRUE, rsquare = TRUE)
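For reference, the defined (indirect and total) effects can be pulled out of the parameter table like this (shown for the SE fit only; I do the same for the other three fits):
# Extract only the user-defined parameters (indirect and total effects) for comparison
pe <- parameterEstimates(sem_fit_med_se)
subset(pe, op == ":=", select = c(label, est, se, pvalue))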
Now I noticed something very strange in the “total effects” part of the results: across all four models, the estimates for the respective total effects total1 to total6 are exactly identical. Does anyone have an idea why this is happening? Did I specify something incorrectly?
Thank you very much in advance for ideas and feedback!
Best regards, jsabel
Dear Edward
Thank you very much for your input! I am still a bit confused and worried:
When I omit the mediators entirely, the regression part gives exactly the same results as the total effects in the mediator models described above.
The model with the mediators omitted is just a plain regression, so it seems that the total effects in the mediator models simply reproduce the results of this regression.
Can this be correct? Does this relate to your comment that the correlation of A and B is independent of what is partially interposed between them?
And how should these results then be interpreted? (Does this imply that the respective mediators have no effect on the outcome, or that they all have the same effect?)
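(To be precise, the reduced model without the mediator that I fitted looks roughly like this; the object names are just placeholders, everything else is as in the mediation model above:)
sem_no_med <- "
# Outcome latent variable (single indicator, residual variance fixed as before)
SCIM =~ t4_scim_location
t4_scim_location ~~ 23.1844*t4_scim_location
# Outcome regressed directly on the predictors and covariates (no mediator)
SCIM ~ c1*t4_decubitus + c2*t4_urin_infection + c3*t4_pulmonary + c4*t4_cardiac_function + c5*t4_bowel_care + c6*t4_pain + age_sci_group + sex + t4_partet + t4_compl
"
sem_fit_no_med <- sem(sem_no_med, estimator = "WLSMV", data = data_imputed)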
I searched the web a bit this week and found the following thread in the lavaan Google group: https://groups.google.com/forum/#!topic/lavaan/EWqFQO3FZds
Does the setting conditional.x = TRUE have something to do with my “strange” results (regressing out the effects of the covariates first)?
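(For example, I thought I could check this by refitting one model with that option switched off and comparing the estimates, though I am not sure this is the right lever:)
# Refit the self-esteem model without conditioning on the exogenous covariates
sem_fit_med_se_condF <- sem(sem_med_se, estimator = "WLSMV", data = data_imputed, conditional.x = FALSE)
summary(sem_fit_med_se_condF, standardized = TRUE)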
Regarding your last comment: which direct path should I restrict to see differences?
By the way, do you know how to perform a “Sobel-Goodman mediation test” in order to get the proportions of the mediated effects relative to the direct ones?
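(What I have in mind is something like the proportion mediated, which I could presumably define directly in the model syntax, e.g. for the first two predictors:)
# Added to the model syntax: proportion of the total effect that is mediated
# (only really interpretable when direct and indirect effects have the same sign)
prop1 := (a1*b) / (c1 + a1*b)
prop2 := (a2*b) / (c2 + a2*b)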
Thanks again and best wishes,
isabel
Hi Esteban
thanks for your response! Regarding your questions:
I am using the WLSMV estimator because most of my observed variables are binary or ordered categorical (this is also indicated in the model structure picture). Even when I try to use another estimator, lavaan automatically switches to WLSMV, since that is the default estimator for ordered variables (at least that is how I understood it).
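(For completeness: if the variables were not already stored as ordered factors, I understand they could be declared explicitly, shown here for the SE indicators only; which variables are ordinal is of course specific to my data.)
# Declaring indicators as ordered makes lavaan switch to the categorical machinery (WLSMV by default)
sem_fit_med_se <- sem(sem_med_se, data = data_imputed,
                      ordered = c("t4_belief_qualities_trait", "t4_belief_useless_trait",
                                  "t4_belief_useful_trait", "t4_belief_positive_trait"))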
Yes, I imputed the missing values in my data before fitting the SEM.
Dear Terence
Please find attached the output for your suggested command (only the estimates of the total effects for each of the four models, to keep it short). It seems that the estimates start to differ at the 4th decimal place. Do you think this is reasonable and has nothing to do with my settings? (I found a post suggesting that the lavaan default is to regress out the covariates first and was not sure whether this has something to do with my results: https://groups.google.com/forum/#!topic/lavaan/EWqFQO3FZds)
Best wishes, isabel
Please find attached the output for your suggested command (only the estimates for the total effects for each of the four models to keep it short).
if all your a1-a6 paths are sufficiently similar, and your c1-c6 paths are all sufficiently small
For all four models, the estimates for the respective total effects total1 to total6 are exactly equivalent.
The problem is that the respective total effects 1-6 are the same across the four models I am testing (the model structure always stays the same, but each model uses a different mediator variable).
(People usually do not adjust any estimates in SEM)
Thank you very much for your response, Terrence!
I think I will at least adjust the p-values of the direct and indirect paths and of the total-effect estimates for my six mediations. So I would be testing these three families of tests (direct, indirect, total).
Is there a way in lavaan to directly extract the p-values of certain regression estimates, or do I need to pass the p-values to the p.adjust() function manually?
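(What I currently have in mind, though I am not sure it is the intended way, is pulling the p-values out of the parameter table and passing them on manually:)
# Collect the p-values of the defined indirect effects (labels a1b ... a6b) from one fit
pe  <- parameterEstimates(sem_fit_med_se)
ind <- subset(pe, op == ":=" & grepl("^a[1-6]b$", label))
p.adjust(setNames(ind$pvalue, ind$label), method = "fdr")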
Best regards and thanks a lot!
Isa
do I need to pass the p-values to the p.adjust() function manually?
Dear Terrence,
thanks for your answer! I would like to ask a follow-up question:
As described earlier, I am testing four models that differ only in their hypothesized mediator (self-esteem, optimism, anxiety and depression). Within each model I am testing 6 mediation paths (indirect paths through the respective mediator). I have again attached the picture of my model structure.
Your answer suggests that you would adjust the p-values across the four models for each indirect path (and similarly for the direct paths):
indirect_decubitus <- c(indirect_decubitus_model1 = 0.001, indirect_decubitus_model2 = 0.26, indirect_decubitus_model3 = 0.9, indirect_decubitus_model4 = 0.05)
p.adjust(indirect_decubitus, method = "fdr")
indirect_urin <- c(indirect_urin_model1 = 0.041, indirect_urin_model2 = 0.026, indirect_urin_model3 = 0.29, indirect_urin_model4 = 0.025)
p.adjust(indirect_urin, method = "fdr")
…
indirect_pain <- c(indirect_pain_model1 = 0.01, indirect_pain_model2 = 0.226, indirect_pain_model3 = 0.093, indirect_pain_model4 = 0.5)
p.adjust(indirect_pain, method = "fdr")
One of my ideas was also to adjust within each model for the six mediation paths (and similarly for the direct paths):
indirect_model1 <- c(indirect_decubitus = 0.011, indirect_urin = 0.265, indirect_pulmonary = 0.093, indirect_cardiac = 0.005, indirect_bowel = 0.48, indirect_pain = 0.011)
p.adjust(indirect_model1, method = "fdr")
indirect_model2 <- c(indirect_decubitus = 0.051, indirect_urin = 0.236, indirect_pulmonary = 0.79, indirect_cardiac = 0.045, indirect_bowel = 0.7, indirect_pain = 0.02)
p.adjust(indirect_model2, method = "fdr")
…
indirect_model4 <- c(indirect_decubitus = 0.01, indirect_urin = 0.026, indirect_pulmonary = 0.94, indirect_cardiac = 0.075, indirect_bowel = 0.417, indirect_pain = 0.12)
p.adjust(indirect_model4, method = "fdr")
As I already mentioned, my question is whether both adjustment approaches are valid and whether there is any guidance/reasoning that would favour one over the other.
Thanks a lot for your continuous support and have a nice evening!
Best wishes, Isa
Is such a comparison of p-values across different models valid/appropriate? One of my ideas was also to adjust within each model for the six mediation paths (and similarly for the direct paths):
my question is if both of the adjustment approaches are valid and if there is any guidance/reasoning that would favour one over the other?
Since you are controlling the false discovery rate (FDR) instead of the familywise Type I error rate, the perspective is a little different. Instead of assuming that all your null hypotheses are true and trying to protect yourself from rejecting any of them, you accept that some of your rejected hypotheses will be Type I errors and want to make sure that no more than [alpha level] percent of your rejected hypotheses are errors (there is no need to assume that any of your null hypotheses are true at all). Personally, from this perspective, I am more inclined to put all my p values into one bucket to control the FDR.
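For illustration, with the vectors from the earlier message, the "one bucket" approach would just mean concatenating everything before adjusting (the elided vectors would be added in the same way):
# Pool all indirect-effect p-values across the four models into a single FDR adjustment
all_indirect <- c(indirect_decubitus, indirect_urin, indirect_pain)  # ... plus the remaining vectors
p.adjust(all_indirect, method = "fdr")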