There's a note in the install instructions on how to
set the compiler flag to -O3 for more optimization.
It makes a big difference, even compared to the default
-O2 in R's build.
On 9/6/12 3:14 PM, Sean J Taylor wrote:
> Hey Bob,
>
> Thanks for the quick reply. I'm running all of these using rstan, so I'm letting R take care of compiling and tuning
> parameters.
>
> 1. n_eff reported by summary(stan.fit) is ~1000 for every variable, except the constant, where it's ~40.
Right, so this means the constant is harder to sample.
You have more effective samples than you probably need
for the other variables.
Any idea of the effective sample size under JAGS?
You can get it using R2jags (which was itself based
on R2WinBUGS and in turn formed the basis for RStan;
that said, we compute ESS and R-hat a bit differently
in RStan).
What matters for speed is n_eff per unit of wall time.
If your chains converge very quickly, you should only need
that much warmup (or a bit more to be safe) in subsequent runs.
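In R that metric is easy to compute from a fitted model. A sketch, assuming a stanfit object named `fit` (the names `ess` and `time` are just illustrative):

```
# effective samples per second of sampling, per parameter
ess  <- summary(fit)$summary[, "n_eff"]
time <- sum(get_elapsed_time(fit)[, "sample"])  # post-warmup time over chains
ess / time
```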
> 2. Unsure what rstan is doing here.
Where? The manual explains how we calculate ESS and
R-hat in Stan.
> 3. Again, I'm just using defaults. Not sure if there's something smarter I could be doing.
You can get faster convergence with reasonable initialization.
But it still makes sense to have dispersed starting points
to make sure the MCMC is working.
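One way to get both in rstan is to pass `init` a function, so each chain draws its own starting point from a distribution you choose. A hedged sketch; `model.stan`, `data_list`, `beta`, and `K` are stand-ins for your own file, data, parameter, and dimension:

```
# each chain gets an independent draw: reasonable location, some dispersion
fit <- stan(file = "model.stan", data = data_list,
            init = function() list(beta = rnorm(K, 0, 1)))
```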
> 4. Looking at the traceplots, the chains seems to converge quite quickly (within the first hundred iterations).
That's good news.
> I just swapped in the bernoulli_logit function, tightened up the priors on my betas, and replaced the tau prior with
> uniform(0, 10), but there was no significant speedup.
The bernoulli_logit form is faster, but even the basic
bernoulli is pretty fast. The big advantage is numerical
stability: it avoids the potential overflow and underflow
from applying exp() to the linear predictor.
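Concretely, in the model block the two forms look like this (assuming outcome `y`, predictor matrix `X`, and coefficients `beta` as in a standard logistic regression):

```
y ~ bernoulli(inv_logit(X * beta));  // can overflow/underflow for extreme X * beta
y ~ bernoulli_logit(X * beta);       // same model, stable on the log-odds scale
```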
>
> I'm trying out Matt's reparameterization right now, will report the results here in a bit.
Thanks! We need to put more of these tips in the doc.
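For anyone following along, here is a sketch of that reparameterization (the "Matt trick", i.e., a non-centered parameterization) for a single coefficient with a normal prior; `mu`, `tau`, and `beta_raw` are illustrative names:

```
parameters {
  real beta_raw;                       // standardized scale
}
transformed parameters {
  real beta = mu + tau * beta_raw;     // shift and scale back
}
model {
  beta_raw ~ normal(0, 1);             // replaces beta ~ normal(mu, tau)
}
```

Sampling `beta_raw` on the unit scale decouples the coefficient from `tau`, which often removes the funnel geometry that slows the sampler down.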
- Bob