Prior on transformed parameters?


Sebastian Weber

Apr 25, 2013, 5:47:58 AM
to stan-...@googlegroups.com
Hi!

I just found out by accident that I had put a prior on a transformed parameter, and Stan did not complain about it. Is this really desired behavior? I would have expected to be allowed to put an explicit prior only on the parameters (but not on the transformed parameters), or am I mistaken here?

Best,
Sebastian

Bob Carpenter

Apr 25, 2013, 12:50:13 PM
to stan-...@googlegroups.com
Stan lets you write whatever you want. The statement

a ~ normal(mu,sigma);

just translates to:

lp__ <- lp__ + normal_log(a,mu,sigma);

If you use a sampling statement with ~ and the expression
on the left-hand side is a transformed parameter, then you have
to add the log Jacobian adjustment for the transform to lp__ yourself.

I'm going to go in right now and add a warning to that
effect in the parser --- we've been meaning to do that
for ages.

There's an example in the manual chapter on defining
new distributions where we show how to do this with
lognormal.
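For concreteness, here is a sketch in the same spirit as that manual example (mu and sigma are assumed to be declared in the data block; the syntax follows the Stan 1.x conventions used above):

```stan
// Sketch: prior on a transformed quantity with a Jacobian adjustment.
parameters {
  real<lower=0> y;
}
model {
  // Putting a normal prior on log(y) makes y lognormal...
  log(y) ~ normal(mu, sigma);
  // ...but only after adding the log Jacobian of the transform:
  // d/dy log(y) = 1/y, so log|1/y| = -log(y).
  lp__ <- lp__ - log(y);
}
```

Without the last line, the implied distribution on y is not lognormal, and Stan samples from the wrong posterior with no error message.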

- Bob

Sebastian Weber

Apr 26, 2013, 2:56:14 AM
to stan-...@googlegroups.com
Thanks Bob!

A warning here is very helpful for catching mistakes. I understand that putting a prior on a parameter or a transformed parameter makes no conceptual difference, and both approaches can result in a properly set-up model. However, it looks to me like the policy should be that priors go on parameters only. So many thanks for the warning - it will at least keep me from making my own mistakes...

Best,
Sebastian

Ben Goodrich

Apr 26, 2013, 10:25:44 AM
to stan-...@googlegroups.com
Just as a point of emphasis for anyone who might stumble upon this thread: I like to say that (possibly uniform) priors are always put on the parameters, and the distinction is whether they are put on the parameters directly or indirectly, via a prior on a function of the parameters together with an adjustment for the change of variables. Although it is easy to make mistakes here, it is a useful feature of Stan.

Ben

Bob Carpenter

Apr 26, 2013, 1:28:38 PM
to stan-...@googlegroups.com
On 4/26/13 2:56 AM, Sebastian Weber wrote:
> Thanks Bob!
>
> A warning here is very helpful to catch unwanted mistakes.

Right -- we'd been meaning to include one for ages but
I only recently added the necessary information to the parser
to do the check.

> I understand that putting a prior on a parameter or
> transformed parameter makes conceptually no difference and both approaches can result in a properly setup model.
> However, to me it looks like the policy should be that priors are on parameters only. So many thanks for a warning here
> - it will keep at least me from making my own mistakes...

Under the hood, a Stan model defines an unnormalized
conditional probability of parameters given data.

Sometimes it's convenient to define a transformation
of a parameter and put the prior on that or vice-versa.

For example, suppose you have a positive K-vector alpha,
with alpha[k] > 0. You can reparameterize it in terms of a
K-simplex theta and a total alpha_sum > 0:

alpha = alpha_sum * theta

where the inverse transform is

alpha_sum = sum(alpha)

theta = alpha / sum(alpha)

Now suppose we want to express the prior on the transformed
parameters:

alpha_sum ~ exponential(a);
theta ~ dirichlet(beta);

We can do this in Stan with either (alpha) or (alpha_sum,theta)
as the declared parameters. If we take alpha to be the declared
parameters, we need to apply a Jacobian adjustment. If (alpha_sum,theta)
are the declared parameters, we define alpha in terms of them and don't
need an adjustment.
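A minimal sketch of the second option, with (alpha_sum, theta) as the declared parameters and alpha defined from them (a and beta are assumed to be data; Stan 1.x syntax as above):

```stan
data {
  int<lower=1> K;
  real<lower=0> a;           // rate of the exponential prior
  vector<lower=0>[K] beta;   // Dirichlet prior parameters
}
parameters {
  real<lower=0> alpha_sum;
  simplex[K] theta;
}
transformed parameters {
  vector[K] alpha;
  alpha <- alpha_sum * theta;  // alpha defined from the parameters
}
model {
  // Priors go directly on the declared parameters,
  // so no Jacobian adjustment is needed here.
  alpha_sum ~ exponential(a);
  theta ~ dirichlet(beta);
  // ... likelihood in terms of alpha ...
}
```

With alpha as the declared parameter instead, the same two ~ statements would require adding the log Jacobian of the (alpha_sum, theta) transform to lp__.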

- Bob