We have been working on a model where the response variable can't be negative. We were using a normal distribution during model development, but a truncated normal is a better match for the support of our data.
When we fit the non-truncated normal, the estimated mean makes sense given the data. When we fit the truncated normal, however, the posterior samples for the mean are all pushed up against zero, the mean no longer represents the mean of the data, and in more complex cases this produces some very odd posterior distributions.
Maybe this is a feature of the truncated normal, but I wouldn't have expected this behavior, since the posterior under the non-truncated normal never dipped close to zero (or even close to the mean estimated by the truncated normal).
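For what it's worth, part of this may be a property of the truncated distribution itself rather than a bug: when the location parameter sits at the lower truncation bound, the mean of the truncated normal is still well above it, driven almost entirely by the scale. A quick sanity check (in Python with scipy's truncnorm purely as an illustration, not the actual NIMBLE model; the values for mu and sd are made up but in the right ballpark):

```python
from scipy.stats import truncnorm

mu, sd = 0.0001, 0.1   # location pinned at the prior's lower bound, plausible scale
low, high = 0.0, 1.0   # same truncation bounds as in the model

# scipy parameterizes the truncation bounds on the standardized scale
a, b = (low - mu) / sd, (high - mu) / sd
m = truncnorm.mean(a, b, loc=mu, scale=sd)

# Even with mu effectively at zero, the mean of the truncated
# distribution is about sd * sqrt(2/pi), i.e. roughly 0.08 here.
print(m)
```

So a truncated-normal fit can match the average of data concentrated near zero by pushing mu to the boundary and letting sd carry the mean, which looks consistent with the behavior described above.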
library(nimble)

modfile <- nimbleCode({
  mu.n ~ dunif(0.0001, 8)
  sd ~ dinvgamma(0.001, 0.001)
  for (i in 1:T) {
    data[i] ~ T(dnorm(mu.n, sd = sd), 0, 1)
    # data[i] ~ dnorm(mu.n, sd = sd)
  }
})
mod.inits <- list(mu.n = c(1), sd = c(0.5))
data<-c(0.013,0.077,0.069,0.055,0.176,0.03,0.097,0.097,0.108,0.006,0.016,0.016,0.125,0.015,0.004,0.013,0.022,0.042,0.032,0.035,0.017,0.031,0.032,0.564,0.01,0.023,0.017,0.123,0.387,0.097,0.147,0.11,0.265,0.114,0.003,0.053,0.062,0.012,0.087,0.034,0.066,0.075,0.162,0.017,0.121,0.276,0.01,0.575,0.205,0.008)
mod.data <- list(data = data)
mod.consts <- list(T = length(data))
results <- nimbleMCMC(code = modfile, constants = mod.consts, data = mod.data, inits = mod.inits, niter = 50000, nburnin = 25000)
plot(results[,1])
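For reference, the sample mean that the non-truncated fit should be recovering can be checked directly (Python here only so the snippet is self-contained; the data list is copied verbatim from above):

```python
# Sample mean of the dataset from the question.
data = [0.013, 0.077, 0.069, 0.055, 0.176, 0.03, 0.097, 0.097, 0.108, 0.006,
        0.016, 0.016, 0.125, 0.015, 0.004, 0.013, 0.022, 0.042, 0.032, 0.035,
        0.017, 0.031, 0.032, 0.564, 0.01, 0.023, 0.017, 0.123, 0.387, 0.097,
        0.147, 0.11, 0.265, 0.114, 0.003, 0.053, 0.062, 0.012, 0.087, 0.034,
        0.066, 0.075, 0.162, 0.017, 0.121, 0.276, 0.01, 0.575, 0.205, 0.008]

mean = sum(data) / len(data)
print(mean)  # about 0.095
```

This is the value I'd expect the posterior mean of mu.n to sit near under the plain normal, and it is nowhere near the prior's lower bound of 0.0001.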