The original JAGS code from Kruschke is in
archive: DBDA2Eprograms.zip
file: Jags-Ydich-Xnom1subj-MbernBetaModelComp.R
URL:
https://sites.google.com/site/doingbayesiandataanalysis/software-installation
Here's the JAGS model:
model {
  for ( i in 1:N ) {
    y[i] ~ dbern( theta )
  }
  theta ~ dbeta( omega[m]*(kappa-2)+1 , (1-omega[m])*(kappa-2)+1 )
  omega[1] <- .25
  omega[2] <- .75
  kappa <- 12
  m ~ dcat( mPriorProb[] )
  mPriorProb[1] <- .5
  mPriorProb[2] <- .5
}
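Evaluating the constant expressions in that prior by hand shows which Beta shape parameters each model index m selects (a quick Python check of my own, not part of Kruschke's files):

```python
# Evaluate the constants in Kruschke's hierarchical prior:
# shape1 = omega[m]*(kappa-2)+1, shape2 = (1-omega[m])*(kappa-2)+1
kappa = 12
omega = {1: 0.25, 2: 0.75}

shapes = {m: (w * (kappa - 2) + 1, (1 - w) * (kappa - 2) + 1)
          for m, w in omega.items()}

print(shapes)  # {1: (3.5, 8.5), 2: (8.5, 3.5)}
```

So model 1 puts a Beta(3.5, 8.5) prior on theta (mode .25) and model 2 a Beta(8.5, 3.5) prior (mode .75).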
And here's the Stan version (which just evaluates all those
constant expressions and plugs them in):
data {
  int<lower=0> N;
  int<lower=0, upper=1> y[N];
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  y ~ bernoulli(theta);
  target += log_mix(0.5, beta_lpdf(theta | 3.5, 8.5),
                         beta_lpdf(theta | 8.5, 3.5));
}
As seen from Gelman's and Kruschke's comments, "contrivance" is often
in the eye of the beholder. I'm with Kruschke on this one (though I
wouldn't have written the JAGS code with all those constants) but
that may be because I learned Bayesian stats through BUGS.
Here's the fit from RStan 2.11.1 (remember, don't use 2.10!)
using all default settings (the first three lines are from Kruschke's
JAGS example in the file linked above):
> N=9
> z=6
> y = c(rep(0, N - z), rep(1, z));
> fit <- stan("kruschke-db.stan", data=c("y", "N"));
> print(fit)
Inference for Stan model: kruschke-db.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
theta 0.67 0.00 0.14 0.32 0.6 0.70 0.77 0.87 1409 1
lp__ -7.69 0.02 0.77 -9.69 -8.0 -7.38 -7.14 -7.09 2089 1
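Because both mixture components are conjugate Beta priors, the model posterior that JAGS samples via m ~ dcat can also be checked in closed form: each model's marginal likelihood is a ratio of Beta functions, B(a + z, b + N - z) / B(a, b). Here's my own closed-form check in Python (not from Kruschke's files):

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

N, z = 9, 6  # same data as the R transcript above

def log_marginal(a, b):
    # marginal likelihood of z successes in N Bernoulli trials
    # under a Beta(a, b) prior: B(a + z, b + N - z) / B(a, b)
    return log_beta(a + z, b + (N - z)) - log_beta(a, b)

lm1 = log_marginal(3.5, 8.5)  # omega = .25 component
lm2 = log_marginal(8.5, 3.5)  # omega = .75 component

# equal prior model probabilities (.5 each) cancel in the ratio
p2 = 1.0 / (1.0 + exp(lm1 - lm2))
p1 = 1.0 - p2

# posterior mean of theta under the mixture: each component's
# posterior is Beta(a + z, b + N - z), with mean (a + z) / (a + b + N)
post_mean = p1 * (3.5 + z) / (12 + N) + p2 * (8.5 + z) / (12 + N)
```

With z/N = 2/3, the omega = .75 model gets most of the posterior probability, and the mixture's posterior mean for theta lines up with the sampled estimate above.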
For comparison, here's the fit you get with a uniform prior on
theta, also using all default settings:
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
theta 0.64 0.00 0.14 0.34 0.55 0.64 0.74 0.88 2639 1
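That 0.64 agrees with the conjugate closed form, which is a one-liner to check (a sketch, using the same N and z as above):

```python
# Conjugacy check: a uniform Beta(1, 1) prior on theta gives a
# Beta(1 + z, 1 + N - z) posterior in closed form.
N, z = 9, 6
a_post, b_post = 1 + z, 1 + (N - z)     # Beta(7, 4)
post_mean = a_post / (a_post + b_post)  # 7 / 11 ≈ 0.636
```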
- Bob
P.S. Here are the Stan files if you want to run them yourself: