model <- map2stan(
  alist(
    bernoulli_outcome ~ dbinom(1, p),  # trial size is set to 1, with probability of success p
    logit(p) <- a + a_olre[olre],      # a_olre is a random intercept by observation, i.e. one per row of the data (observation-level random effect, OLRE)
    a ~ dnorm(0, 10),                  # prior for the fixed intercept a
    a_olre[olre] ~ dnorm(0, 1)),       # prior for the OLRE ***
  data = list(your data), start = list(starting values), iter = some number, warmup = some number)
*** In the a_olre prior marked above: because the OLRE variance cannot be estimated for binomial data in which the trial size is 1, we set the standard deviation of the OLRE prior to 1, rather than writing "a_olre[olre] ~ dnorm(0, sigma_olre)" with a further prior on sigma_olre. Is there a way to set this OLRE standard deviation within brms? Or does brms handle this internally and set it to zero (as lme4 does) or to some other value?
Likewise, for a Bernoulli model in MCMCglmm (family = "categorical"), in which there are no random effects other than the observation-level intercept, one can encode a prior as follows (showing only the OLRE part of the prior):
prior = list(R = list(V = 1, fix = 1))
This prior fixes the OLRE (residual) variance at the value specified by V, in this case 1.
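For context, here is a minimal sketch of where that prior sits in a full MCMCglmm call; the formula and the data name are placeholders, not my actual model:

```r
library(MCMCglmm)

# Fix the residual (OLRE) variance at V = 1; 'fix = 1' tells MCMCglmm to
# hold the residual variance component constant rather than estimate it.
prior <- list(R = list(V = 1, fix = 1))

model <- MCMCglmm(outcome ~ 1,
                  family = "categorical",
                  prior  = prior,
                  data   = your_data)  # your_data is a placeholder
```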
Within brms, I imagined setting a fixed OLRE prior somewhat like this. Assuming the formula looks something like:
formula = outcome ~ 1 + (1 | olre), data = data, family = bernoulli("logit")
the prior for the OLRE could be:
prior = c(set_prior("normal(1, 0)", class = "sd", group = "olre"))
But under this method, a test model I ran still estimates the OLRE variance rather than fixing it at the chosen value.
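One other avenue that might be worth trying: the ?set_prior documentation describes a constant() prior for fixing parameters at a given value. If that mechanism applies to sd terms (I have not verified that it does), a sketch would look like the one below, and make_stancode() can then show whether the sd is still being sampled. The data name is again a placeholder.

```r
library(brms)

# Hypothetical: fix the OLRE standard deviation at 1 via a constant() prior
fixed_sd <- set_prior("constant(1)", class = "sd", group = "olre")

# Inspect the generated Stan code to see whether the group-level sd remains
# a sampled parameter or has been replaced by the constant:
make_stancode(outcome ~ 1 + (1 | olre),
              data   = your_data,  # placeholder
              family = bernoulli("logit"),
              prior  = fixed_sd)
```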
I'd greatly appreciate any help or thoughts you may have!
Thanks,
Eric
--
You received this message because you are subscribed to the Google Groups "brms-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to brms-users+unsubscribe@googlegroups.com.
To post to this group, send email to brms-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/brms-users/53984090-e4ce-437f-b0e0-7c0aaa5346b3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
To view this discussion on the web visit https://groups.google.com/d/msgid/brms-users/d18804a2-38e8-46f8-a897-975d5cb71e03%40googlegroups.com.
Right, I see in the 2010 paper that N&S discuss beta-binomial outcomes and repeatability. I had forgotten. Thanks for pointing that out.
I understand your point that setting omega = 1 is the same as sigma_e = 0 in an additive model, and that we could do so for binary data since omega and sigma_e are unidentifiable in such data. For me, the question of whether we should do so hinges on the word unidentifiable. If we set omega = 1 (or sigma_e = 0), aren't we assuming that there is no overdispersion? But in fact there may be; we just cannot estimate it for binary data. By setting omega > 1 or sigma_e > 0, we instead assume that there is overdispersion, acknowledge that we cannot actually estimate it, and set it to a user-chosen value.
I sound as if I’m taking a strong stand in favor of setting sigma_e > 0. But this is an ongoing question for me. I don’t have a great reason for choosing sigma_e > 0 as opposed to sigma_e = 0, other than the subjective argument above.
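To make the trade-off concrete, here is a quick base-R calculation of latent-scale repeatability under the logit link, following the Nakagawa & Schielzeth formulation; the among-group variance of 1 is just an illustrative value, not from a fitted model:

```r
sigma_a2 <- 1            # among-group variance (illustrative)
link_var <- pi^2 / 3     # distribution-specific variance for the logit link

# Latent-scale repeatability as a function of the fixed overdispersion variance
rpt <- function(sigma_e2) sigma_a2 / (sigma_a2 + sigma_e2 + link_var)

rpt(0)   # sigma_e = 0 (no overdispersion assumed): about 0.23
rpt(1)   # sigma_e fixed at 1: about 0.19
```

So the choice between sigma_e = 0 and sigma_e > 0 is not cosmetic; it shifts the repeatability estimate directly.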
-Eric