I couldn't understand that paper's prior structure on m[t] (I do
get that gamma-Poisson is negative binomial).
Do you just want
for (t in 2:T)
  y[t] ~ poisson(exp(rho * y[t - 1] + (1 - rho) * (a + b * t)));
What is
y[1] ~ ?
Or do you just not want to model it?
With some reasonable priors on rho, a, and b?
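For concreteness, here's a minimal sketch of the full program I have in
mind; the priors are just placeholders I made up, and y[1] is conditioned
on rather than modeled:

    data {
      int<lower=2> T;
      int<lower=0> y[T];
    }
    parameters {
      real<lower=0, upper=1> rho;   // autoregression weight
      real a;                       // trend intercept
      real b;                       // trend slope
    }
    model {
      // placeholder priors --- substitute whatever is reasonable
      rho ~ beta(2, 2);
      a ~ normal(0, 5);
      b ~ normal(0, 1);
      // condition on y[1] rather than modeling it
      for (t in 2:T)
        y[t] ~ poisson(exp(rho * y[t - 1] + (1 - rho) * (a + b * t)));
    }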
- Bob
P.S. I dug through that paper you sent and found my way to Chib and Winkelmann,
whose language I understood and whose example I managed to recreate, but
it's multivariate rather than a time series. I'll write up a bunch of these
examples as I understand them. Here's that example, followed by R code (yes, it
really needs tree depth 12).
/**
 * Likelihood derived from the following paper:
 *
 * Markov Chain Monte Carlo Analysis of Correlated Count Data
 * Author(s): Siddhartha Chib and Rainer Winkelmann
 * Source: Journal of Business & Economic Statistics,
 *   Vol. 19, No. 4 (Oct., 2001), pp. 428-435
 * Stable URL: http://www.jstor.org/stable/1392277
 *
 * Notes:
 * -- replaced Wishart and normal priors
 * -- assume each outcome J has the same covariates (if not,
 *    can inefficiently pad with zeros)
 */
data {
  int<lower=0> I;                    // subjects
  int<lower=0> J;                    // outcomes
  int<lower=0> K;                    // covariates
  int y[I, J];                       // observed counts
  matrix[I, K] x;                    // covariates
}
transformed data {
  vector[J] zeros;                   // zero mean for noise terms
  zeros <- rep_vector(0, J);
}
parameters {
  cholesky_factor_corr[J] L_Omega;   // outcome correlations
  vector<lower=0>[J] sigma;          // outcome scales
  matrix[K, J] beta;                 // regression coefficients
  row_vector[J] b[I];                // noise terms
}
model {
  // priors
  L_Omega ~ lkj_corr_cholesky(4);
  sigma ~ normal(0, 2);
  to_vector(beta) ~ normal(0, 2);
  b ~ multi_normal_cholesky(zeros, diag_pre_multiply(sigma, L_Omega));
  // likelihood
  for (i in 1:I)
    y[i] ~ poisson_log(x[i] * beta + b[i]);
}
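If you want the correlation matrix itself rather than its Cholesky
factor, a generated quantities block like this recovers it (my addition,
not part of the paper's model):

    generated quantities {
      corr_matrix[J] Omega;          // recovered outcome correlations
      Omega <- multiply_lower_tri_self_transpose(L_Omega);
    }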
==========================
library(MASS);    # for mvrnorm()
library(rstan);

# assumes I, J, K, the covariate matrix x, and the noise covariance
# matrix Sigma are already defined
beta <- matrix(rnorm(K * J, 0.5, 2), K, J);
b <- matrix(NA, I, J);
for (i in 1:I)
  b[i, ] <- mvrnorm(1, rep(0, J), Sigma);
y <- matrix(0, I, J);
for (i in 1:I) {
  for (j in 1:J) {
    log_lambda <- sum(x[i, ] * beta[, j]) + b[i, j];
    y[i, j] <- rpois(1, exp(log_lambda));
  }
}

fit <- stan("chib-winkelmann.stan", data = c("I", "J", "K", "y", "x"),
            iter = 1000, refresh = 1, chains = 1, init = 0.5,
            fit = fit,   # reuses the compiled model from a previous run
            control = list(stepsize = 0.02, adapt_delta = 0.90,
                           max_treedepth = 12));
[Attachments: AR1.stan, AR1.png, AR2.stan, AR2.png]