Empirical Bayes?

Markian Pahuta
Jun 22, 2014, 6:22:49 PM
to stan-...@googlegroups.com
Hi,

I've posted a question about implementing an empirical Bayesian analysis on the Stats Exchange website:

If someone knows how to do this, I'd really appreciate the help.

thanks

mark

Andrew Gelman
Jun 23, 2014, 3:38:13 PM
to stan-...@googlegroups.com
Hi all.  I followed the link below and found this comment from a user named “guy”:

"I haven't see much evidence that using STAN results in much better inference than JAGS in many situations. And for these simply conjugate models where JAGS can figure out the blocking structure and do moderately intelligent updates there doesn't seem much point to me. Last I checked STAN required all sorts of weird tricks to get working on mildly complicated models, and even then it doesn’t necessarily work (e.g. the rejection rate for a Bayesian neural net was absurd when I tried it)”

I don’t know how to send email to users on that site, but could someone contact this person and ask him to supply the examples where Stan doesn’t work but Jags does?  That would be helpful to us.

P.S. to Mark who asked the question below:  You’ll want a hierarchical model with weakly informative priors for mu and sigma.
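
A minimal Stan sketch of such a hierarchical model, with weakly informative priors on mu and the group-level scale tau (all variable names and prior scales here are illustrative placeholders, not from this thread):

    data {
      int<lower=1> J;              // number of studies
      vector[J] y;                 // estimated effect in each study
      vector<lower=0>[J] sigma_y;  // standard error of each estimate
    }
    parameters {
      real mu;                     // population mean effect
      real<lower=0> tau;           // between-study standard deviation
      vector[J] theta;             // study-level effects
    }
    model {
      mu ~ normal(0, 10);          // weakly informative prior on the mean
      tau ~ cauchy(0, 5);          // half-Cauchy: weakly informative on the scale
      theta ~ normal(mu, tau);
      y ~ normal(theta, sigma_y);
    }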

Markian Pahuta
Jun 23, 2014, 4:53:44 PM
to stan-...@googlegroups.com
Hi,

What I am trying to do is explore the influence of priors on the analysis of study results. Specifically, this is what I wanted to examine with respect to medical studies:
1) RCTs: less biased for treatment effects, but possibly underestimate adverse events
2) Observational studies: biased for treatment effects, but give a better estimate of adverse events

So I wanted to use different priors in situations 1 and 2:
1) RCT:
- empirical Bayesian prior for the mean (with the goal of being mildly informative, to give more weight to the study data)
- default uninformative prior for the variance
2) Observational:
- default uninformative prior for the mean
- empirical Bayesian mildly informative prior for the variance

I was hoping for some guidelines on how to achieve these different priors.



Andrew Gelman
Jun 23, 2014, 4:58:26 PM
to stan-...@googlegroups.com
Hi, typically a uniform prior is a reasonable noninformative choice as long as there are enough groups; we discuss this a bit in BDA.  I don't really know what you mean by an empirical Bayesian mildly informative prior.  Empirical Bayes typically implies a weak prior on the hyperparameter so that the hyperparameter is estimated from the data.
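
In Stan, a uniform prior over a parameter's declared support is what you get by default when no sampling statement is given, so a uniform hyperprior is just a bounded declaration. A sketch of what replacing the half-Cauchy prior on tau in the earlier model would look like (the upper bound of 100 is an illustrative assumption):

    parameters {
      // A parameter declared with bounds and given no sampling statement
      // in the model block gets an implicit uniform prior over its support.
      real<lower=0, upper=100> tau;  // implicit uniform(0, 100) on the group-level sd
    }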

Markian Pahuta
Jun 23, 2014, 5:06:11 PM
to stan-...@googlegroups.com
OK, I probably don't understand the concept of empirical Bayes priors well.

What I want is to have more informative priors that give more weight to the data in certain circumstances, for both the mean and the variance. Is there any "default" way to do this?

Andrew Gelman
Jun 23, 2014, 5:19:54 PM
to stan-...@googlegroups.com
It’s not a matter of giving more weight to the data, I think.  If you have a weak prior on the hyperparameters, then the posterior will be determined by the data.

Michael Betancourt
Jun 23, 2014, 5:19:57 PM
to stan-...@googlegroups.com

> OK, I probably don't understand the concept of empirical Bayes priors well.
>
> What I want is to have more informative priors that give more weight to the data in certain circumstances, for both the mean and the variance. Is there any "default" way to do this?

Not in the way you're describing.

Empirical Bayes tries to use the data twice: once to define the prior and once to define the likelihood. This leads to overfitting and poor estimation in complicated models.

But Empirical Bayes is just an estimate of the full hierarchical model to which Andrew was referring. In fact, you can think of Empirical Bayes as building a full hierarchical model and then making point estimates of the hyperparameters. The point estimates lead to underestimated uncertainty, just as we would expect from overfitting.

To summarize, Empirical Bayes is an approximation of a full hierarchical model. Any kind of "automated" or "default" approach would require building the full model, with all those hyperpriors, and then making point estimates of the hyperparameters with something like maximum marginal likelihood, which has not yet been implemented in Stan.
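
To make that concrete, the plug-in version can be written in Stan by passing hyperparameter point estimates in as data instead of declaring them as parameters. A sketch with hypothetical names; mu_hat and tau_hat would be computed from the same data beforehand, e.g. by maximizing the marginal likelihood:

    data {
      int<lower=1> J;
      vector[J] y;
      vector<lower=0>[J] sigma_y;
      real mu_hat;             // hyperparameter point estimates computed
      real<lower=0> tau_hat;   // from the same data before fitting
    }
    parameters {
      vector[J] theta;
    }
    model {
      theta ~ normal(mu_hat, tau_hat);  // plug-in prior: the data enter twice,
      y ~ normal(theta, sigma_y);       // so posterior uncertainty is understated
    }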

Andrew Gelman
Jun 23, 2014, 5:23:54 PM
to stan-...@googlegroups.com
Actually, “empirical Bayes” isn’t really clearly defined. But otherwise I agree with everything you write. In BDA we have an index entry that’s something like, “Empirical Bayes, why we avoid the term”


Michael Betancourt
Jun 23, 2014, 5:26:35 PM
to stan-...@googlegroups.com
Agreed — it’s a terrible name that implies that Bayes isn’t empirical.

Markian Pahuta
Jun 23, 2014, 5:36:23 PM
to stan-...@googlegroups.com
My understanding is that, given a uniform prior, the shape of the posterior will be identical to the shape of the likelihood. I think this is in agreement with the previous comments.

So perhaps a better way to approach my problem would be to use a "skeptical prior." I could use the approach in Spiegelhalter's book (http://ca.wiley.com/WileyCDA/WileyTitle/productCd-0471499757.html) [page 158].

What would be a good way to develop a skeptical prior for the variance (i.e., one expressing the belief that the variance is greater than what the data suggest)?
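
For what it's worth, a hypothetical sketch (not taken from Spiegelhalter's book) of how such skepticism might be encoded in Stan: center the prior for the effect at zero, and center the prior for the scale above a naive estimate. Every numeric choice below is a placeholder that would need calibrating:

    data {
      int<lower=1> N;
      vector[N] y;             // e.g. outcome differences under treatment
    }
    parameters {
      real delta;              // treatment effect
      real<lower=0> sigma;     // residual standard deviation
    }
    model {
      delta ~ normal(0, 0.5);          // skeptical: centered at no effect
      sigma ~ lognormal(log(2), 0.5);  // mass on sds larger than a naive fit might give
      y ~ normal(delta, sigma);
    }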

