There's a discussion of this in Gelman and Hill's regression book;
the conclusion is that it's primarily a matter of convenience.
The primary difference is in the scale.
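To see the scale difference concretely, here's a quick numerical check (my own sketch, not from this thread) of the classic result that the standard normal CDF is well approximated by a rescaled inverse logit, Phi(x) ~ inv_logit(1.702 * x), so logit-scale coefficients come out roughly 1.7 times larger than probit-scale ones:

```python
import math

def inv_logit(x):
    # logistic CDF (inverse of the logit function)
    return 1.0 / (1.0 + math.exp(-x))

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# maximum absolute difference over a grid from -5 to 5
max_err = max(abs(phi(x) - inv_logit(1.702 * x))
              for x in [i / 100.0 for i in range(-500, 501)])
print(max_err)  # stays below about 0.01 everywhere
```

The 1.702 constant is the standard choice from the IRT literature; it minimizes the maximum absolute difference between the two CDFs.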
With Stan, the inv_logit() function is much more efficient than
Phi() and also more robust in the tails. The Phi_approx() function
is close to Phi() and more efficient.
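Stan's Phi_approx() is itself a logistic approximation: it computes inv_logit(0.07056 * x^3 + 1.5976 * x). Here's a sketch (mine, not from the thread) checking how close that cubic-in-the-logit form is to the exact normal CDF:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def phi(x):
    # exact standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_approx(x):
    # the polynomial-logistic approximation used by Stan's Phi_approx()
    return inv_logit(0.07056 * x ** 3 + 1.5976 * x)

max_err = max(abs(phi(x) - phi_approx(x))
              for x in [i / 100.0 for i in range(-500, 501)])
print(max_err)  # on the order of 1e-4
```

So for most regression purposes Phi_approx() is indistinguishable from Phi() while only costing an exp and a couple of multiplies.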
The logistic distribution is easy to understand as the log-odds (logit)
transform of a uniform(0, 1) variable.
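That characterization is easy to verify by simulation (again my own sketch): since inv_logit is the logistic CDF, the empirical CDF of logit-transformed uniform draws should match inv_logit pointwise.

```python
import math
import random

def logit(u):
    # log-odds transform of a probability
    return math.log(u / (1.0 - u))

def inv_logit(x):
    # logistic CDF
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
draws = [logit(random.random()) for _ in range(100_000)]

# empirical CDF of the draws should agree with the logistic CDF
for pt in (-2.0, 0.0, 2.0):
    emp = sum(d <= pt for d in draws) / len(draws)
    assert abs(emp - inv_logit(pt)) < 0.01
```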
In some Gibbs applications, it was easier to use probit because
the conjugacy of the latent normal variables could be
exploited to simplify the computations (at least I'm pretty sure
that's why probit is so popular).
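The scheme being referred to is the Albert & Chib (1993) data-augmentation sampler: introduce a latent z_i ~ normal(x_i * beta, 1) with y_i = 1 iff z_i > 0, so each full conditional is either a truncated normal or a plain normal. Here's a minimal single-predictor sketch with a flat prior on the coefficient (all names and the simulated data are my own, purely illustrative):

```python
import math
import random
from statistics import NormalDist

std_normal = NormalDist()

def trunc_normal(mean, positive):
    """Draw from normal(mean, 1) truncated to (0, inf) if positive, else (-inf, 0)."""
    u = min(max(random.random(), 1e-12), 1 - 1e-12)
    lo = std_normal.cdf(-mean)  # P(z <= 0) under normal(mean, 1)
    p = lo + u * (1.0 - lo) if positive else u * lo
    return mean + std_normal.inv_cdf(p)  # inverse-CDF sampling

# simulate probit data with a known coefficient
random.seed(1)
true_beta, n = 1.0, 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1 if true_beta * xi + random.gauss(0, 1) > 0 else 0 for xi in x]

# Gibbs sampler: alternate z | beta, y and beta | z
beta, sxx, draws = 0.0, sum(xi * xi for xi in x), []
for it in range(600):
    z = [trunc_normal(beta * xi, yi == 1) for xi, yi in zip(x, y)]
    # flat prior => beta | z ~ normal(sum(x*z)/sum(x^2), 1/sum(x^2))
    mean_beta = sum(xi * zi for xi, zi in zip(x, z)) / sxx
    beta = random.gauss(mean_beta, 1.0 / math.sqrt(sxx))
    if it >= 100:  # discard burn-in
        draws.append(beta)

post_mean = sum(draws) / len(draws)
print(post_mean)  # should land reasonably near the true value of 1.0
```

The point is that both conditionals are available in closed form under the probit link, which is exactly what a logit link would break.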
- Bob
> On Nov 14, 2016, at 12:08 PM, Stephen Martin <hwki...@gmail.com> wrote:
>
> Hey all,
>
> Yet another post by me. I thought I posted this last night but I don't think it actually sent.
>
> I have a model that makes pretty extensive use of logistic (and multinomial/softmax logistic) regression, or parameterization in terms of log-odds that is transformed to probabilities. I've tried my best to use Stan functions that are more efficient (e.g., log_inv_logit(pi_logit[n]) for cluster membership probabilities), but I'm curious whether the Stan developers would recommend using probit as opposed to logit. Is one more efficient than the other? I know Phi() was created as an efficient probit function for this sort of thing. Reparameterizing in terms of probits would be a hefty task, so before I tried it out, I was just curious what the Stan devs (or other users) think about the efficiency of Phi() vs. inv_logit link functions.
>
> Thanks,
> --Stephen
>