Question about multilevel ordinal model using brms

divas...@gmail.com

Sep 20, 2017, 11:54:24 PM
to brms-users
Hello, everyone!

Recently, I tried to use this package to estimate the parameters of a 3-level ordinal model. The first level is the individual level (interviewers), including personal info (gender, age, etc.); the second is the census level, including sub-district info (sub_sexratio, sub_elder%, etc.); and the third is the district level, including characteristics of districts (sexratio, elder%, etc.). The dependent variable is ordinal (from 1 to 5). The model input and output are as follows:

fit1 <- brm(as.ordered(score) ~ as.ordered(health) + as.factor(gender) +
              (1 | subdistrict) + (1 | subdistrict:district),
            data = comb.data1, family = cumulative("logit"),
            threshold = "flexible")

Family: cumulative(logit) 
Formula: as.ordered(score)~as.ordered(health)+as.factor(gender)+(1|subdistrict)+(1|subdistrict:district)
         disc = 1
   Data: comb.data1 (Number of observations: 1207) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1; 
         total post-warmup samples = 4000
    ICs: LOO = NA; WAIC = NA; R2 = NA
 
Group-Level Effects: 
~subdistrict (Number of levels: 134) 
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     1.10      0.54     0.08     1.91         76 1.05

~subdistrict:district (Number of levels: 134) 
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     1.17      0.54     0.09     1.92         86 1.04

Population-Level Effects: 
                    Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept[1]           -1.34      0.20    -1.72    -0.96       4000 1.00
Intercept[2]            0.32      0.19    -0.06     0.70       4000 1.00
Intercept[3]            1.27      0.20     0.88     1.64       4000 1.00
Intercept[4]            2.80      0.21     2.40     3.22       4000 1.00
as.orderedhealth.L     0.15      0.10    -0.05     0.36       4000 1.00
as.factorgender1        0.17      0.12    -0.08     0.40       4000 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
is a crude measure of effective sample size, and Rhat is the potential 
scale reduction factor on split chains (at convergence, Rhat = 1).

But I have run into a set of questions, listed below.

1) I have tried the clmm package before for a 2-level ordinal model; its results clearly tell me, via p-values, which variables are significant for the dependent variable. However, the brms results don't give me that info. So could you please tell me how I can see the significance of each variable? Or how else can I evaluate it?

2) All Eff.Sample values are 4000; does that mean I need to increase the number of samples when fitting the model? (I have noted that the Rhat values of the two random-intercept terms are larger than 1 and may require more samples. I just want to know: if all Rhat = 1.00 but all Eff.Sample values are 4000, do I need to increase the number of samples?)

3) As I have a set of subdistrict-level variables and district-level variables, is it right, for a random-intercept model, to construct the formula as below:
fit1 <- brm(as.ordered(score) ~ as.ordered(health) + as.factor(gender) +
              sub_sexratio + sub_elder% + sexratio + elder% +
              (1 | subdistrict) + (1 | subdistrict:district),
            data = comb.data1, family = cumulative("logit"),
            threshold = "flexible")

Looking forward to your answers.

Thanks and regards.

Diva

Paul Buerkner

Sep 21, 2017, 4:07:10 AM
to divas...@gmail.com, brms-users
Hi Diva,

Shouldn't it rather be (1|district)+(1|subdistrict:district)?

Also, I would recommend not using as.ordered or as.factor inside the formula, but applying them earlier, when preparing your data. It works inside the formula, but it is not ideal.


1) You can check whether a regression parameter is significantly different from zero by looking at the lower and upper bounds of the CI in the summary. If 0 is NOT included in the 95%-CI, this is equivalent to significance at the 5% level.
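As a rough illustration, here is a minimal sketch assuming your fitted object is called fit1 (the column order of the fixef() output is an assumption and may differ between brms versions, so check colnames(fixef(fit1)) first):

```r
# Flag population-level effects whose 95%-CI excludes zero.
fe <- fixef(fit1)                  # Estimate, Est.Error, lower CI, upper CI
excludes_zero <- fe[, 3] > 0 | fe[, 4] < 0
fe[excludes_zero, , drop = FALSE]  # "significant" at the 5% level
```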

2) Your model converged perfectly. 4k effective samples means that all of your 4k (by default) post-warmup MCMC samples are basically independent (which is perfect), and an Rhat of 1 indicates convergence. Please look at vignette("brms_overview") for more details.

3) This looks reasonable at first glance. Do you expect the effects of some variables to vary by district or subdistrict? Reading vignette("brms_multilevel") might also help.

Best,
Paul

--
You received this message because you are subscribed to the Google Groups "brms-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to brms-users+unsubscribe@googlegroups.com.
To post to this group, send email to brms-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/brms-users/8e5ba5c3-8c51-4c05-b63b-19ce2192856e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

divas...@gmail.com

Sep 21, 2017, 10:06:33 AM
to brms-users
Hi Paul,

Thank you for your reply.

Actually, I have been fuzzy about the random part. Before, I constructed two models: one with only (1|district) plus individual-level and district-level variables, and the other with only (1|subdistrict) plus individual-level and subdistrict-level variables. The district-level and subdistrict-level variables cover the same items, e.g. district-level sex ratio and subdistrict-level sex ratio. I just wanted to see at which scale these group-level variables would be more significant for the response score. But I have also reviewed some papers that applied a single three-level model to explore such effects. So I don't know which is better for me: two two-level models or one three-level model. Do you have any suggestions?

I have reviewed your paper, and it says: "for instance, if you write (1 | g1/g2), it will be expanded to (1 | g1) + (1 | g1:g2)". Since the scale of the units goes from individual to subdistrict and then to district, if I use a three-level model, is it more reasonable to construct the formula as (1|subdistrict)+(1|subdistrict:district)? What is the difference from (1|district)+(1|district:subdistrict)? And what does it mean if I use (1|subdistrict)+(1|district)?

For 1), can I know more details about the significance level? Is it 5%, 1%, or 0.1%?

For 3), yes, I will consider some variables that might vary by district or subdistrict, which means a random-intercept and random-slope model. Is it right to construct the formula like this:
fit <- brm(score ~ health + gender + (1 | subdistrict) + (health | subdistrict) + (1 | subdistrict:district),
           data = comb.data1, family = cumulative("logit"), threshold = "flexible")


Thanks and regards.

Diva

On Thursday, September 21, 2017 at 4:07:10 PM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Sep 21, 2017, 1:17:38 PM
to divas...@gmail.com, brms-users
Use one three-level model.

(1|subdistrict:district) and (1|district:subdistrict) actually imply the same thing in brms, but I would recommend the second one.

Whether (1|district:subdistrict) and (1|subdistrict) make any difference depends on whether subdistrict is unique across districts or only within districts.
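A sketch of the nesting shorthand from the brms paper, using the variable names from this thread (comments only; nothing here is tested against your data):

```r
# The nesting shorthand
#   score ~ health + gender + (1 | district/subdistrict)
# expands to
#   score ~ health + gender + (1 | district) + (1 | district:subdistrict)
# If subdistrict IDs repeat across districts, (1 | subdistrict) would pool
# units from different districts that happen to share an ID, while
# (1 | district:subdistrict) always keeps them separate.
```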

1) By default, the CIs are 95% CIs, but you can change this using the argument "prob" in summary.
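For example (a sketch assuming a fitted model called fit1):

```r
# 99% credible intervals instead of the default 95%
summary(fit1, prob = 0.99)
```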

2) Include the random slopes in the same term as the intercept. For instance (1 + health | subdistrict)

To unsubscribe from this group and stop receiving emails from it, send an email to brms-users+unsubscribe@googlegroups.com.

To post to this group, send email to brms-...@googlegroups.com.

divas...@gmail.com

Sep 22, 2017, 1:53:31 AM
to brms-users
Hi Paul,

Many thanks for your reply.

For the three-level model, as suggested, I constructed a formula including two random intercepts and one random coefficient, like this:
fit <- brm(score ~ health + gender + (1 + health | subdistrict) + (1 + 1 | district:subdistrict),
           data = comb.data1, family = cumulative("logit"), threshold = "flexible")

Is it right? And if I want to add a subdistrict-level variable that is expected to vary across districts, could I build the formula as below:
fit2 <- brm(score ~ health + gender + sub_sexratio + (1 + health | subdistrict) + (1 + sub_sexratio | district:subdistrict),
            data = comb.data1, family = cumulative("logit"), threshold = "flexible")

Thanks and regards.

Diva

On Friday, September 22, 2017 at 1:17:38 AM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Sep 22, 2017, 5:51:11 AM
to divas...@gmail.com, brms-users
(1+1|district:subdistrict) is equivalent to (1|district:subdistrict), so I suggest writing the latter.

Also, please don't use

(<something>|subdistrict) and (<something>|district:subdistrict)

but instead

(<something>|district) and (<something>|district:subdistrict)

The first term is for stuff varying across districts, the second term is for stuff varying across subdistricts.

If you want, say, sex_ratio to vary by district, go for

(1+sub_sexratio|district)

Does that make sense?


divas...@gmail.com

Sep 23, 2017, 12:19:37 AM
to brms-users
Hi Paul,

I got it now. Another question: how many units, at a minimum, are needed at the higher levels? I have read in some materials that there should be at least 20 units at the higher levels, but in my study I only have 14 districts at the highest level. Can I still build a 3-level model? Thanks.

Diva

On Friday, September 22, 2017 at 5:51:11 PM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Sep 23, 2017, 2:50:43 AM
to divas...@gmail.com, brms-users
I think you can safely use it with only 14 levels.


divas...@gmail.com

Sep 24, 2017, 5:30:01 AM
to brms-users
Thanks Paul. I will use it carefully. I found that if I use summary() to output the brms results, it shows "LOO = NA, WAIC = NA", and I need to use add_ic() to add the two ICs to the fitted results, which takes a while to run. So can I add some arguments to brm() or summary() to output the two ICs along with the other results? Many thanks.

Regards.

Diva


divas...@gmail.com

Sep 24, 2017, 5:42:50 AM
to brms-users
Hi Paul,

I have found a solution for this: I can use summary(fit, loo = TRUE, waic = TRUE). I tried this before, but the arguments are case sensitive; when I used lower case instead, it worked.

Regards.

Diva

在 2017年9月24日星期日 UTC+8下午5:30:01,divas...@gmail.com写道:

Paul Buerkner

Sep 24, 2017, 6:31:12 AM
to divas...@gmail.com, brms-users
I suggest using add_ic() rather than summary(..., loo = TRUE, ...), since in the former case loo or waic is only computed once and then stored in the model object, rather than being recomputed each time you call summary() or loo() or waic().


divas...@gmail.com

Oct 1, 2017, 9:47:55 AM
to brms-users
Hi Paul,

I have a question about the proportional odds (PO) assumption. How can I do a parallel-lines test in brms? And if I find that some categorical predictors violate the PO assumption, how can I modify the model in brms?

For example, Y is an ordinal response variable with range [1:5], and X is a categorical predictor with range [1:3]; with a common coefficient, the model is Y ~ X + (1 | District). If the parallel test shows that X violates the PO assumption and needs separate coefficients, how can I modify the formula in brms? And can I test its random effect in brms (e.g. Y ~ X + (X | District))?

P.S.: I have tried the R2MLwiN package for the parallel test. With a common coefficient, the formula is logit(Y, cons, 5) ~ 1 + X[1:4] + (1[1:4] | District); with separate coefficients, the formula is logit(Y, cons, 5) ~ 1 + X + (1[1:4] | District).

Thanks and regards.

Diva



On Sunday, September 24, 2017 at 6:31:12 PM UTC+8, Paul Buerkner wrote:

divas...@gmail.com

Oct 1, 2017, 12:57:16 PM
to brms-users
Hi Paul,

I have another question, about the coefficients of the group-level effects. I have applied both the brms and clmm packages to model a 2-level ordinal regression.

For brms, the result shows as:
Group-Level Effects: 
~DISTRICT:SUBDISTRICT (Number of levels: 115) 
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     2.22      0.19     1.88     2.63       2432 1.00

For clmm, the result shows as:
Random effects:
 Groups      Name        Variance Std.Dev.
 SUBDISTRICT (Intercept)    4.595    2.144
Number of groups: SUBDISTRICT 115

Therefore, I just wonder whether sd(Intercept) in brms corresponds to the Std.Dev of the intercept in clmm. And if so, how can I extract the variance of the intercept in brms?

Thanks and regards.

Diva  

On Sunday, October 1, 2017 at 9:47:55 PM UTC+8, divas...@gmail.com wrote:

Paul Buerkner

Oct 2, 2017, 6:52:04 AM
to divas...@gmail.com, brms-users
Regarding the PO question: I don't think you can do this test in brms, and you can't fit separate coefficients (I call them category-specific effects) with the cumulative() family, since that may lead to negative probabilities.

If you want to fit category-specific effects, use family sratio(), cratio(), or acat(), and write cs(X).
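A minimal sketch using the variable names from the example above (Y, X, District, and the data name mydata are taken from the thread; this is untested):

```r
# cs() gives X a separate coefficient for each response-category threshold,
# here under an adjacent-category model.
fit_cs <- brm(Y ~ cs(X) + (1 | District),
              data = mydata, family = acat("logit"))
summary(fit_cs)
```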

Regarding your second question: to get the means of all the random-effects variances, you can use, for instance, colMeans(posterior_samples(<your model object>, pars = "^sd_"))



divas...@gmail.com

Oct 3, 2017, 1:34:12 AM
to brms-users
Hi Paul,

Thank you for your reply. I have reviewed the documentation of some other R packages, and I think a possible way to explore category-specific coefficients is to build several dummy variables (one for each category). That is to say, since X has four categories, I could create x1, x2, x3, x4 as dummy variables and explore the coefficient of each of them through a cumulative link logit model. Do you think this is a feasible approach? Many thanks.

Diva

On Monday, October 2, 2017 at 6:52:04 PM UTC+8, Paul Buerkner wrote:

divas...@gmail.com

Oct 3, 2017, 8:50:27 AM
to brms-users
Hi Paul,

I think the solution I proposed can only be used when adding a category-specific random slope.

And I have used LOO to compare a random-intercept (RI) model and a random-slope, random-intercept (RSRI) model on the same dataset. The results below show that the SE is very large; I don't know whether this comparison is valid or something is wrong with my models. P.S.: Each RSRI model only considers the random effect of one variable from Var1-5.

(RI - RSRI)     LOOIC        SE
Var1         -1.07772   8.220301
Var2          0.540573  6.018782
Var3         -4.536884  1.80831
Var4          0.689996  4.367917
Var5         41.3646   18.74543

Regards.

Diva

On Tuesday, October 3, 2017 at 1:34:12 PM UTC+8, divas...@gmail.com wrote:

Paul Buerkner

Oct 3, 2017, 9:36:18 AM
to divas...@gmail.com, brms-users
Can you show me the complete code you used? Otherwise, it is hard to tell exactly what you did.


divas...@gmail.com

Oct 3, 2017, 10:56:01 AM
to brms-users
Hi Paul, 

The code for the RI model is:

brmf0 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (1 | DISTRICT:SUBDISTRICT)")
fit0 <- brm(brmf0, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)

The code for the RSRI model is (taking Var1 and Var2 as examples; I actually run a loop to explore the random effect of each Var):

brmf1 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (Var1 | DISTRICT:SUBDISTRICT)")
fit1 <- brm(brmf1, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)
LOO(fit0, fit1)

fit0 - fit1     LOOIC        SE
Var1         -1.07772  8.220301

brmf2 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (Var2 | DISTRICT:SUBDISTRICT)")
fit2 <- brm(brmf2, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)
LOO(fit0, fit2)

fit0 - fit2     LOOIC        SE
Var2          0.540573  6.018782

....

P.S.: Var1-Var5 are all categorical variables: Var1 (four categories), Var2 (two categories), Var3 (two categories), Var4 (two categories), Var5 (six categories).

Thanks and regards.

Diva


Paul Buerkner

Oct 3, 2017, 4:11:02 PM
to divas...@gmail.com, brms-users
OK, thanks. So could you please tell me once more what kind of model you want to fit and what you mean by category-specific effects in your case? Currently I am having trouble giving advice.


divas...@gmail.com

Oct 4, 2017, 10:08:48 AM
to brms-users
Hi Paul,

Thanks for your reply. The model I used in brms is a cumulative link logit model, and as you mentioned before, I think it's impossible to model category-specific effects with it, because that may cause negative probabilities. Now my question is why the SE is so large when I use LOO() to compare the random-slope, random-intercept (RSRI) model with the random-intercept-only (RI) model. Thanks. The two models I compared are as below:

The code for the RI model is:

brmf0 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (1 | DISTRICT:SUBDISTRICT)")
fit0 <- brm(brmf0, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)

The code for the RSRI model is (taking Var1 and Var2 as examples; I actually run a loop to explore the random effect of each Var):

brmf1 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (Var1 | DISTRICT:SUBDISTRICT)")
fit1 <- brm(brmf1, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)
LOO(fit0, fit1)

fit0 - fit1     LOOIC        SE
Var1         -1.07772  8.220301

brmf2 <- bf("Response ~ Var1 + Var2 + Var3 + Var4 + Var5 + (Var2 | DISTRICT:SUBDISTRICT)")
fit2 <- brm(brmf2, data = mydata, family = cumulative("logit"), threshold = "flexible",
            control = list(adapt_delta = 0.9), iter = 5000, warmup = 1000, thin = 3)
LOO(fit0, fit2)

fit0 - fit2     LOOIC        SE
Var2          0.540573  6.018782

....

P.S.: Var1-Var5 are all categorical variables: Var1 (four categories), Var2 (two categories), Var3 (two categories), Var4 (two categories), Var5 (six categories).

Regards.

Diva

On Wednesday, October 4, 2017 at 4:11:02 AM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Oct 4, 2017, 10:41:07 AM
to divas...@gmail.com, brms-users
Whether an SE of 8 (or 6) can be considered large depends on the number of observations in your model.

What you see from the output is that there is basically no difference in model fit (as measured by LOO) when comparing those two models, so you might as well go with the model without the random slope if you like. If you have enough data, including the slope won't hurt you, so you might as well include it.


divas...@gmail.com

Oct 5, 2017, 8:03:31 AM
to brms-users
Hi Paul,

Thank you for your answer. In my view, the smaller the LOOIC, the better the model fits. I don't really understand why you said that whether the SE is large depends on my number of observations; in my study, the number of observations is around 1000. And I just wonder whether there is some index (like the p-value when comparing the -2LL values of two models) that could tell me whether the random coefficient is significant or not.

And one LOOIC difference is 41.3646 with SE = 18.74543; I don't know whether I can ignore its random effect.

fit0 - fit5     LOOIC        SE
Var5         41.3646  18.74543

I also wonder about another question I raised before. You suggested using colMeans(posterior_samples(<your model object>, pars = "^sd_")) to get the means of all the random-effects variances. However, I tried it, and the results seem to be the same as those from summary(). As you can see from the brms and clmm results below, based on the same dataset and the same model, the estimates of the random effects are very different. I have also applied another piece of software, MLwiN (PQL2 & MCMC), to fit the model; its result is similar to clmm's but different from brms'. So I wonder whether sd(Intercept) (2.22) means the same thing as the Variance (4.595) in clmm. If they are different, how can I calculate the random effects in brms? If they are the same, why do they differ so much? Actually, I need to compare the performance of different packages and software to make my results more convincing.

For brms, the result shows as:
Group-Level Effects: 
~DISTRICT:SUBDISTRICT (Number of levels: 115) 
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     2.22      0.19     1.88     2.63       2432 1.00

For clmm, the result shows as:
Random effects:
 Groups      Name        Variance Std.Dev.
 SUBDISTRICT (Intercept)    4.595    2.144
Number of groups: SUBDISTRICT 115

Thanks and regards.

Diva



On Wednesday, October 4, 2017 at 10:41:07 PM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Oct 5, 2017, 10:50:43 AM
to divas...@gmail.com, brms-users
There is no p-value for LOO comparisons, and no "significance" either. If you desperately want some threshold, you could use 2*SE as a rule of thumb: whenever a difference is more than 2*SE (of the difference) away from zero, it is something substantial. But please don't overstate this rule.

In your case you have a difference of ~41 with an SE of ~19, so according to this rule, the model including the random effects (fit5) clearly fits better.

With regard to the SD / Var question, I made a mistake in my code. It should be:

colMeans(posterior_samples(<your model object>, pars = "^sd_")^2)
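The squaring has to happen inside colMeans, because the mean of the squared samples is not the square of the mean. A toy illustration with made-up sd draws:

```r
# Made-up posterior draws of an sd parameter, just to show the gap:
sd_samples <- c(2.0, 2.2, 2.4)
mean(sd_samples)^2   # square of the mean: 4.84
mean(sd_samples^2)   # mean of the squares: ~4.867 (the correct variance estimate)
```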

divas...@gmail.com

Oct 6, 2017, 1:30:35 AM
to brms-users
Hi Paul,

Many thanks for your reply. Is there any related literature on the rule of thumb you mentioned? I just want to use it as a reference. Many thanks.

Regards,

Jinglu

On Thursday, October 5, 2017 at 10:50:43 PM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Oct 6, 2017, 4:17:09 AM
to divas...@gmail.com, brms-users
There is one sentence about the asymptotic normality of LOO in the loo paper by Aki Vehtari, which motivated my statement, but there is no definitive statement that I am aware of (and there should be none) ;-)


divas...@gmail.com

Dec 16, 2017, 12:29:56 PM
to brms-users
Dear Paul,

As before, you told me that to calculate the variance of the intercept, the code should be: colMeans(posterior_samples(<model object>, pars = "^sd_")^2). I just wonder how to calculate the "Est.Error", "l-95% CI", and "u-95% CI" of the intercept variance. Can I use the code below to calculate the "l-95% CI" and "u-95% CI":

posterior_interval(<model object>, pars = "^sd_")^2

But I still have no idea about the "Est.Error" of the intercept variance. Could you give me some suggestions?

Thanks and regards.

Diva

On Thursday, October 5, 2017 at 10:50:43 PM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Dec 16, 2017, 6:56:27 PM
to divas...@gmail.com, brms-users
vars <- posterior_samples(<model object>, pars = "^sd_")^2

provides you with the posterior samples of the variances as a data.frame. For the Estimate, use colMeans or

apply(vars, 2, mean)

For the Est.Error, use

apply(vars, 2, sd)

and for the CIs use

apply(vars, 2, quantile, probs = <your probs>)

Alternatively, you might just use the internal function 'get_summary':

brms:::get_summary(vars)



divas...@gmail.com

Dec 17, 2017, 4:41:29 AM
to brms-users
Dear Paul,

Thanks for your prompt reply. I reviewed some literature and found that they applied median values rather than mean values to estimate the effects of variables. So, when interpreting brms results, which one, median or mean, do you think is more appropriate for multilevel binary models? The reference can be seen below:


Thanks and regards.

Diva

On Sunday, December 17, 2017 at 7:56:27 AM UTC+8, Paul Buerkner wrote:

Paul Buerkner

Dec 17, 2017, 5:16:33 AM
to divas...@gmail.com, brms-users
Both are fine. The mean is often considered **the** Bayesian estimate, but the median has the nice property of being equivariant under monotone transformations.

You can compute the median instead of the mean in get_summary via

brms:::get_summary(vars, robust = TRUE)


divas...@gmail.com

Dec 19, 2017, 6:39:01 AM
to brms-users
Okay, I got it. Thanks for your reply. 

Diva

On Sunday, December 17, 2017 at 6:16:33 PM UTC+8, Paul Buerkner wrote: