On 14-07-07 04:11 PM, Bob Carpenter wrote:
> Given that you have Stan, you can try all of these
> models and look at the answers. Be sure to run enough
> samples to get good estimates of the tail intervals and
> variances.
>
> - Bob
Yes, this should have been the first thing I did!
I've confirmed what you all said and have come to understand things better:
parameterizing a Gaussian with the precision and a gamma prior expresses
the same uncertainty as parameterizing it with the variance and an
inverse-gamma prior (if tau ~ Gamma(a, b), then 1/tau ~ Inv-Gamma(a, b)).
Sorry to state the obvious; I guess that's what the inverse gamma is for
in the first place.
Thanks very much.
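As a quick sanity check outside Stan, the same change of variables can be
verified by simulation in plain R (just a sketch: it compares empirical
quantiles of 1/tau against Inv-Gamma(1, 1) quantiles, computed here as
1/qgamma since the reciprocal of a Gamma(1, 1) variate is Inv-Gamma(1, 1)):

# draw precisions from Gamma(1, 1) and invert to get variances
set.seed(1)
tau <- rgamma(1e6, shape = 1, rate = 1)
sigma2 <- 1 / tau

# empirical quantiles of 1/tau vs analytic Inv-Gamma(1, 1) quantiles
qs <- c(0.025, 0.25, 0.5, 0.75, 0.975)
rbind(empirical = quantile(sigma2, qs),
      analytic  = 1 / qgamma(1 - qs, shape = 1, rate = 1))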
And here are the code and results, in case anyone is interested:
require(rstan)
model_code1 <- "parameters {
real <lower=0> tau; //precision
real x;
}
transformed parameters {
real <lower=0> sigma2; // variance
real <lower=0> sigma; //standard deviation
sigma2 <- 1/tau;
sigma <- sqrt(sigma2);
}
model {
x ~ normal(0, sigma); //likelihood
tau ~ gamma(1, 1); //prior
}"
model_code2 <- "parameters {
real <lower=0> tau; //precision
real x;
}
transformed parameters {
real <lower=0> sigma2; // variance
real <lower=0> sigma; //standard deviation
sigma2 <- 1/tau;
sigma <- sqrt(sigma2);
}
model {
x ~ normal(0, sigma);
sigma2 ~ inv_gamma(1, 1); // prior on the transformed parameter (no Jacobian adjustment)
}"
model_code3 <- "parameters {
real <lower=0> sigma2; // variance
real x;
}
transformed parameters {
real <lower=0> sigma; //standard deviation
sigma <- sqrt(sigma2);
}
model {
x ~ normal(0, sigma);
sigma2 ~ inv_gamma(1, 1); // prior directly on the variance
}"
stan(model_code=model_code1, iter=100000) -> fit1
stan(model_code=model_code2, iter=100000) -> fit2
stan(model_code=model_code3, iter=100000) -> fit3
> fit1
Inference for Stan model: model_code1.
4 chains, each with iter=1e+05; warmup=50000; thin=1;
post-warmup draws per chain=50000, total post-warmup draws=2e+05.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
tau 1.01 0.01 1.01 0.03 0.29 0.70 1.41 3.71 28693 1
x -0.01 0.03 3.01 -4.31 -0.82 -0.01 0.80 4.17 10804 1
sigma2 11.01 2.72 608.53 0.27 0.71 1.42 3.41 39.36 50107 1
sigma 1.75 0.02 2.82 0.52 0.84 1.19 1.85 6.27 12727 1
lp__ -2.36 0.01 1.48 -6.36 -2.92 -1.90 -1.31 -0.93 15222 1
> fit2
Inference for Stan model: model_code2.
4 chains, each with iter=1e+05; warmup=50000; thin=1;
post-warmup draws per chain=50000, total post-warmup draws=2e+05.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
tau 3.00 0.01 1.72 0.63 1.74 2.68 3.92 7.19 70603 1
x 0.00 0.00 0.71 -1.41 -0.41 0.00 0.41 1.40 54350 1
sigma2 0.50 0.00 0.50 0.14 0.26 0.37 0.57 1.60 43959 1
sigma 0.66 0.00 0.24 0.37 0.51 0.61 0.76 1.26 51083 1
lp__ -0.26 0.01 1.16 -3.34 -0.69 0.10 0.56 0.86 41528 1
> fit3
Inference for Stan model: model_code3.
4 chains, each with iter=1e+05; warmup=50000; thin=1;
post-warmup draws per chain=50000, total post-warmup draws=2e+05.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
sigma2 17.51 8.42 872.01 0.27 0.71 1.41 3.38 36.93 10724 1
x 0.09 0.06 4.33 -4.16 -0.80 0.00 0.80 4.28 5314 1
sigma 1.76 0.05 3.79 0.52 0.84 1.19 1.84 6.08 6399 1
lp__ -2.34 0.01 1.47 -6.27 -2.90 -1.89 -1.30 -0.93 14549 1
(The tails of x under fit1 and fit3 came out pretty similar.)
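For reference, the tail quantiles can be compared directly from the
posterior draws (a sketch using rstan's extract()):

# 2.5% / 97.5% quantiles of x under each model
sapply(list(fit1 = fit1, fit2 = fit2, fit3 = fit3),
       function(fit) quantile(extract(fit)$x, c(0.025, 0.975)))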