
Sep 16, 2018, 3:04:16 PM

to lavaan

Dear Lavaan Group

I would like to test 4 different latent variables as mediators in a model whose structure is shown in the figure in the attachment. So in total, I am testing four models which all have the same structure, but with a different latent variable as mediator (the other variables stay the same).

The 4 latent variables hypothesized to act as mediators are self-esteem, optimism, anxiety, and depression, respectively. (Their indicator variables are suppressed in the figure in the attachment.)

When testing each model, I followed the instructions given on the Lavaan webpage (http://lavaan.ugent.be/tutorial/mediation.html). For example, the code for self-esteem (SE) as mediator looks like this:

sem_med_se <- "

# Latent constructs

SE =~ t4_belief_qualities_trait + t4_belief_useless_trait + t4_belief_useful_trait + t4_belief_positive_trait

SCIM =~ t4_scim_location

# Fix Residual Variances of single Indicator Latent Variables

t4_scim_location ~~ 23.1844*t4_scim_location

# Regression

# Direct effect

SCIM ~ c1*t4_decubitus + c2*t4_urin_infection + c3*t4_pulmonary + c4*t4_cardiac_function + c5*t4_bowel_care + c6*t4_pain + age_sci_group + sex + t4_partet + t4_compl

# Mediation

SE ~ a1*t4_decubitus + a2*t4_urin_infection + a3*t4_pulmonary + a4*t4_cardiac_function + a5*t4_bowel_care + a6*t4_pain

SCIM ~ b*SE

# Indirect effects (a*b)

a1b := a1*b

a2b := a2*b

a3b := a3*b

a4b := a4*b

a5b := a5*b

a6b := a6*b

# Total effects

total1 := c1 + (a1*b)

total2 := c2 + (a2*b)

total3 := c3 + (a3*b)

total4 := c4 + (a4*b)

total5 := c5 + (a5*b)

total6 := c6 + (a6*b)

"

sem_fit_med_se <- sem(sem_med_se, estimator = "WLSMV", data = data_imputed)

summary(sem_fit_med_se, standardized = TRUE, fit.measures = TRUE, rsquare = TRUE)

Now I noticed that the **"total effects" part of the results** is very strange: for all four
models, the estimates for the respective total effects total1 to total6 are
exactly **equivalent**. Does anyone have an idea why this is
happening? Did I specify something wrong?

Thank you very much in advance for ideas and feedback!

Best regards, jsabel

Sep 16, 2018, 3:45:47 PM

to lav...@googlegroups.com

What result do you get if you omit the mediators entirely? How do the "total effects" of the 6 predictors in that situation compare with these mediation results? The correlation of A and B is the correlation of A and B, no matter what C is *partially* interposed between them. But if you constrained the direct path, then you might well see differences.
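For concreteness, the no-mediator comparison model could be sketched as follows (a hypothetical sketch, not part of the thread; it reuses the variable names, the fixed residual variance, and the data object from the original post):

```r
# Comparison model with the mediator omitted entirely: the c1-c6
# coefficients here are the "total effects" of the six predictors.
library(lavaan)

sem_nomed <- "
  SCIM =~ t4_scim_location
  # fix the residual variance of the single-indicator latent variable
  t4_scim_location ~~ 23.1844*t4_scim_location
  SCIM ~ c1*t4_decubitus + c2*t4_urin_infection + c3*t4_pulmonary +
         c4*t4_cardiac_function + c5*t4_bowel_care + c6*t4_pain +
         age_sci_group + sex + t4_partet + t4_compl
"
sem_fit_nomed <- sem(sem_nomed, estimator = "WLSMV", data = data_imputed)
summary(sem_fit_nomed)
```

These c1-c6 estimates are the values the total1-total6 defined parameters in the mediation models should reproduce.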


Sep 21, 2018, 5:59:49 AM

to lavaan

Dear Edward

Thank you very much for your input! I am still a bit confused and worried:

When I omit the mediators entirely, the regression part gives exactly the same results as the total effects in the mediator models described above.

I noticed that the model with omitted mediators is a simple regression. Therefore, it seems that the total effects in the mediation models are simply the results of this regression.

Can this be correct? Does this relate to your comment that the correlation of A and B is independent of what is partially interposed?

And what is then the interpretation of these results? (Does this imply that the respective mediators have no effect on the outcome, or that they all have the same effect?)

I searched the web a bit this week and found the following
topic on the lavaan Google group: https://groups.google.com/forum/#!topic/lavaan/EWqFQO3FZds

Does the setting conditional.x = TRUE have something to do with my "strange" results (regressing out the effects of covariates first)?

Regarding your last comment: which direct path should I restrict to see differences?

By the way: Do you know how to perform a “Sobel-Goodman Mediation test” to get the proportions of the mediated effects compared to the direct ones?
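(For reference, a first-order Sobel test can be computed by hand from the a and b path estimates and their standard errors, e.g. read off from lavaan's parameterEstimates() output. The sobel() helper below is a hypothetical sketch, not a lavaan function, and the numbers are made up:)

```r
# First-order (Sobel) test of an indirect effect a*b.
# a, se_a: predictor -> mediator path and its SE
# b, se_b: mediator -> outcome path and its SE
sobel <- function(a, se_a, b, se_b) {
  se_ab <- sqrt(b^2 * se_a^2 + a^2 * se_b^2)  # delta-method SE of a*b
  z <- (a * b) / se_ab
  c(indirect = a * b, se = se_ab, z = z, p = 2 * pnorm(-abs(z)))
}

sobel(a = 0.30, se_a = 0.10, b = 0.50, se_b = 0.12)
```

Note that the delta-method standard errors lavaan reports for `:=` defined parameters such as `a1b := a1*b` are essentially this Sobel test already, and a proportion-mediated quantity could be defined directly in the model syntax, e.g. `prop1 := a1*b / (c1 + a1*b)`.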

Thanks again and best wishes,

isabel

Oct 2, 2018, 3:22:14 AM

to lavaan

Does someone have an idea regarding my problem? I still don't see whether these results make sense or not, nor how to interpret them.

Thanks for any considerations!

Best, isabel

Oct 3, 2018, 6:04:53 PM

to lavaan

Hello Isa,

I want to try to troubleshoot your problem. I have two questions: 1) Why are you using estimator = "WLSMV" instead of ML? 2) I see your data object's name contains the word "imputed"; does that mean your data were imputed to handle missing values?

I just want to understand your data better before jumping to conclusions.

Oct 11, 2018, 3:20:41 AM

to lavaan

Hi Esteban

thanks for your response! Regarding your questions:

I am using the WLSMV estimator since most of my observed variables are binary or ordered categorical (as also indicated in the model structure picture). Even when I try to use another estimator, lavaan automatically switches to WLSMV, since it is the default estimator for ordered variables. (That's at least what I understood.)

Yes, I imputed missing values in my data before running the SEM.

Do you think there is something wrong with my settings?

Best, isabel

Oct 11, 2018, 6:23:28 AM

to lavaan

Now, I noticed that the "total effects" part in the results is very strange: For all four models, the estimates for the respective total effects total1 to total6 are exactly equivalent. Does anyone have an idea why this is happening? Did I specify something wrong?

The syntax looks fine. I suppose if all your a1-a6 paths are sufficiently similar, and your c1-c6 paths are all sufficiently small, then total1-total6 could be the same in the 3rd decimal place. Could you post the output of summary(fit, nd = 6)?

Terrence D. Jorgensen

Postdoctoral Researcher, Methods and Statistics

Research Institute for Child Development and Education, the University of Amsterdam

Oct 11, 2018, 8:27:11 AM

to lavaan

Dear Terrence

Please find attached the output for your suggested command (only the estimates for the total effects for each of the four models, to keep it short). It seems that the estimates start to differ at the 4th decimal place. Do you think this is reasonable and has nothing to do with my settings? (I found this post saying that the lavaan default setting is to regress out covariates first, and was not sure if this has something to do with my results: https://groups.google.com/forum/#!topic/lavaan/EWqFQO3FZds)

Best wishes, isabel

Oct 11, 2018, 8:53:47 AM

to lavaan

Sorry, I just posted a summary sheet containing the estimates of the total effects of a slightly adapted model (the one shown in my first post, plus two more mediation paths).

Here are the estimates of the total effects of the original model described in my first post.

It still seems that the estimates start to differ around the 4th decimal place. What is still a bit striking, though, is that the standard errors of the estimates seem to be equal for all six total effects.

Oct 18, 2018, 3:48:13 AM

to lavaan

Please find attached the output for your suggested command (only the estimates for the total effects for each of the four models to keep it short).

The reason I asked you to post the summary() was to check the plausibility of the explanation I proposed above:

if all your a1-a6 paths are sufficiently similar, and your c1-c6 paths are all sufficiently small

Oct 18, 2018, 4:32:26 AM

to lavaan

Ah sorry, now I see! You also need to see the paths to check the plausibility of your assumption.

So, please find attached the summary again (now the full one, but only for the self-esteem model; the others are very similar with regard to the magnitude of the regression estimates).

Please tell me if you would like to see the summaries of all four models.

Regarding the posted summary, it seems that your explanation is plausible. The only thing I am not sure about is whether it is a problem that all estimates of the standard errors are still the same up to the 6th decimal place (as can be seen in the short summary I posted last week).

Oct 18, 2018, 8:04:35 AM

to lavaan

For all four models, the estimates for the respective total effects total1 to total6 are exactly equivalent.

I don't see this in your output. All of your estimated total effects (in the Estimate column) are very different, ranging from -1 to -10. Even your SEs differ in the first decimal place, as do your standardized effects.

Oct 18, 2018, 8:48:11 AM

to lavaan

The problem is that the respective total effects 1-6 are the same **across** the 4 models I am testing (the model structure always stays the same, but I am testing a different mediator variable).

I am very sorry; maybe the information I posted was just not enough. Let me show you the model structure again (attachment 1). With this structure, I am testing 4 models, inserting 4 different mediator variables: self-esteem, optimism, anxiety, and depression.

Attachments 2 to 5 show the summaries of the 4 models. There, one can see that the estimates for the total effects 1-6 are equivalent up to the 3rd decimal place across the 4 models. But they start to differ around the 4th decimal place, which I didn't notice until you came up with the idea of printing more digits of the estimates. Still, the estimates for the standard errors of the total effects 1-6 stay equivalent across the 4 models.

I hope I described the problem a bit better now.

Thank you anyway for your help!

Best wishes

Oct 18, 2018, 8:51:30 AM

to lavaan

The problem is that the respective total effects 1-6 are the same across the 4 models I am testing (the model structure always stays the same, but I am testing a different mediator variable).

Oh, well then they should absolutely be identical (within reasonable rounding/estimation error), because regardless of how you decompose the total effect into different indirect effects, those should always sum to the same total effect (i.e., the direct effect in a model without any mediator(s) at all).
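That identity can be illustrated outside lavaan with ordinary regressions (a minimal sketch on simulated, hypothetical data, using plain lm() rather than SEM): whichever mediator is inserted, direct + indirect recovers exactly the same total effect.

```r
set.seed(1)
n  <- 500
x  <- rnorm(n)
m1 <- 0.5 * x + rnorm(n)                       # hypothetical mediator 1
m2 <- -0.3 * x + rnorm(n)                      # hypothetical mediator 2
y  <- 0.4 * x + 0.6 * m1 + 0.2 * m2 + rnorm(n)

total <- coef(lm(y ~ x))["x"]                  # "total effect": no mediator

# Decompose through m1: direct (c') plus indirect (a*b)
f1y  <- lm(y ~ x + m1)
f1m  <- lm(m1 ~ x)
dec1 <- coef(f1y)["x"] + coef(f1m)["x"] * coef(f1y)["m1"]

# Decompose through m2 instead
f2y  <- lm(y ~ x + m2)
f2m  <- lm(m2 ~ x)
dec2 <- coef(f2y)["x"] + coef(f2m)["x"] * coef(f2y)["m2"]

all.equal(unname(total), unname(dec1))         # TRUE: same total either way
all.equal(unname(total), unname(dec2))         # TRUE
```

For least-squares estimates this decomposition is an algebraic identity, so the agreement holds to machine precision, not just approximately.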

Oct 18, 2018, 9:44:35 AM

to lavaan

Okay, perfect :) This helps me a lot because it means that there is nothing wrong with my models! Thank you very much, Terrence! And sorry again for my imprecise communication.

By the way, I am currently reading about multiplicity adjustments (multiple testing / p-value adjustments) in structural equation modelling. Do you have any experience with this? I found it very hard to find literature on this topic.

In my case, with the four mediation models, I would argue that a multiplicity adjustment is not necessary at the level of model fit indices, since I am testing 4 different families of tests (= 4 models with different mediators, respectively).

But I am unsure whether I should adjust the p-values of the regression estimates within each model. Since I am testing mediations, would this be reasonable? (People usually do not adjust any estimates in SEM.)

Thanks again very much!

Isabel

Oct 20, 2018, 4:55:08 PM

to lavaan

(People usually do not adjust any estimates in SEM)

True, but the same issue holds: the more tests you conduct, the more opportunity there is to commit at least one Type I error. How you define a family of tests is arbitrary; it is simply a matter of being able to say, "In this set of tests, there is only an alpha% chance that I committed even one Type I error if all those null hypotheses were true." You can submit a set of *p* values to R's p.adjust() function to use whatever adjustment you like, including controlling the false discovery rate (which provides more powerful tests and doesn't make the silly assumption that all null hypotheses are true).

Oct 24, 2018, 8:27:51 AM

to lavaan

Thank you very much for your response, Terrence!

I think I will at least adjust the p-values for the direct and indirect paths and the total-effect estimates of my six mediations. So I would like to test these three families of tests (direct, indirect, total).

Is there a way in lavaan to directly extract the p-values of certain regression estimates, or do I need to pass the p-values to the p.adjust() function manually?

Best regards and thanks a lot!

Isa

Oct 26, 2018, 4:55:23 AM

to lavaan

do I need to give the p-values to the p.adjust() function manually?

I think this is the easiest approach, since your *p* values come from different models (so they won't be a subset of rows from parameterEstimates() output). To keep things clear, you can label the *p* values in your vectors.

indirect <- c(mod1 = .001, mod2 = .038, mod3 = .767)

direct <- c(mod1 = .023, mod2 = .253, mod3 = .005)

p.adjust(indirect, method = "bonferroni") # controls Type I error rate

p.adjust(direct, method = "fdr") # controls false-discovery rate, more powerful

Terrence D. Jorgensen

Assistant Professor, Methods and Statistics

Nov 23, 2018, 11:25:26 AM

to lavaan

Dear Terrence,

thanks for your answer! I would like to ask you a follow-up question:

As I already described earlier, I am testing four models which differ only in their imposed mediator (self-esteem, optimism, anxiety, and depression). Within each model I am testing 6 mediation paths (indirect paths through the respective mediator). I have attached the picture of my model structure again.

Your answer suggests that you would adjust the p-values across the four Models for each indirect path (and similarly for the direct paths):

indirect_decubitus <- c(indirect_decubitus_model1 = 0.001, indirect_decubitus_model2 = 0.26,
                        indirect_decubitus_model3 = 0.9, indirect_decubitus_model4 = 0.05)

p.adjust(indirect_decubitus, method = "fdr")

indirect_urin <- c(indirect_urin_model1 = 0.041, indirect_urin_model2 = 0.026,
                   indirect_urin_model3 = 0.29, indirect_urin_model4 = 0.025)

p.adjust(indirect_urin, method = "fdr")

...

indirect_pain <- c(indirect_pain_model1 = 0.01, indirect_pain_model2 = 0.226,
                   indirect_pain_model3 = 0.093, indirect_pain_model4 = 0.5)

p.adjust(indirect_pain, method = "fdr")

Is such a comparison of p-values **across** different models valid/appropriate? (In light of the fact that my models have different variables as mediators and are therefore not identical in terms of included variables.)

One of my ideas was also to adjust **within** each model for the six mediation paths (and similarly for the direct paths):

indirect_model1 <- c(indirect_decubitus = 0.011, indirect_urin = 0.265, indirect_pulmonary = 0.093,
                     indirect_cardiac = 0.005, indirect_bowel = 0.48, indirect_pain = 0.011)

p.adjust(indirect_model1, method = "fdr")

indirect_model2 <- c(indirect_decubitus = 0.051, indirect_urin = 0.236, indirect_pulmonary = 0.79,
                     indirect_cardiac = 0.045, indirect_bowel = 0.7, indirect_pain = 0.02)

p.adjust(indirect_model2, method = "fdr")

...

indirect_model4 <- c(indirect_decubitus = 0.01, indirect_urin = 0.026, indirect_pulmonary = 0.94,
                     indirect_cardiac = 0.075, indirect_bowel = 0.417, indirect_pain = 0.12)

p.adjust(indirect_model4, method = "fdr")

As I already mentioned, my question is whether both of these adjustment approaches are valid, and whether there is any guidance/reasoning that would favour one over the other.

Thanks a lot for your continuous support and have a nice evening!

Best wishes, Isa

Nov 29, 2018, 5:03:40 AM

to lavaan

Is such a comparison of p-values across different models valid/appropriate? One of my ideas was also to adjust within each model for the six mediation paths (and similarly for the direct paths):

Within/between models has nothing to do with "appropriate". The more tests you conduct, the more opportunities there are to make at least one Type I error. That applies to all the tests you will conduct over your whole career. You obviously don't try to limit the chance of a single Type I error to 5% across your whole career, nor would anyone reasonably expect you to do so for all the tests on a single sample of data (i.e., the experimentwise alpha level). The familywise alpha level is just a flexible way for you to say, "Out of this particular set of tests, the chance I made at least one Type I error is [alpha level]." You don't have to waste a lot of space or energy defending your choice; just clearly state which set of hypotheses you consider a "family."

my question is if both of the adjustment approaches are valid and if there is any guidance/reasoning that would favour one over the other?

Yes, it would be difficult for anyone to argue that someone else's definition of a "family" of hypotheses is invalid, as long as there is any kind of connection between them.

Since you are controlling the false discovery rate (FDR) instead of the Type I error rate, the perspective is a little different. Instead of assuming all your null hypotheses are true and trying to protect yourself from rejecting them, you just accept that some of your rejected hypotheses will be Type I errors, and you want to make sure that no more than [alpha level] percent of your rejected hypotheses are errors (no need to assume any of your null hypotheses were true at all). Personally, from this perspective, I am more inclined to put all my *p* values into one bucket to control the FDR.
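A minimal sketch of the "one bucket" approach (the p values and labels below are hypothetical placeholders): pool every indirect-effect p value from all four models into a single named vector and adjust once.

```r
# Pool all indirect-effect p values (4 models x several paths) and
# control the FDR over the whole set in one call.
p_all <- c(
  m1_decubitus = 0.001, m1_urin = 0.041, m1_pain = 0.010,
  m2_decubitus = 0.260, m2_urin = 0.026, m2_pain = 0.226,
  m3_decubitus = 0.900, m3_urin = 0.290, m3_pain = 0.093,
  m4_decubitus = 0.050, m4_urin = 0.025, m4_pain = 0.500
)

p_fdr <- p.adjust(p_all, method = "fdr")  # Benjamini-Hochberg adjustment
sum(p_fdr < 0.05)                         # number of discoveries at FDR 5%
```

The names make it easy to see afterwards which model/path combinations survive the adjustment.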

Nov 29, 2018, 6:28:02 AM

to lavaan

Thanks a lot for your response, Terrence!

Since you are controlling the false discovery rate (FDR) instead of the Type I error rate, the perspective is a little different. Instead of assuming all your null hypotheses are true and trying to protect yourself from rejecting them, you just accept that some of your rejected hypotheses will be Type I errors, and you want to make sure that no more than [alpha level] percent of your rejected hypotheses are errors (no need to assume any of your null hypotheses were true at all). Personally, from this perspective, I am more inclined to put all my p values into one bucket to control FDR.

What do you mean by putting all p values into one bucket? Do you mean controlling all p-values from a whole model for FDR? Or do you mean either of my "across" or "within" approaches stated above?

And by the way: since I am fitting 4 models, should I also consider controlling the p-values of their chi-square tests of model fit (or any other fit index)? I am not sure if I should, since for the chi-square p-value this adjustment would even help me to "improve" the model fit.
