SARMA(0,1)(1,1)[12]


Alice Milivinti

Jul 12, 2016, 10:20:11 AM
to Stan users mailing list


Hi,


I am not an experienced rstan user and I am trying to fit the following seasonal ARMA model, SARMA(0,1)(1,1)[12]:

y[t] = mu + Phi * (y[t-12] - mu) + eps[t] + theta * eps[t-1] + Theta * eps[t-12] + theta * Theta * eps[t-13]

I read the manual's section "Time-Series Models", but I was not able to work out how to code the seasonal "jump" in the moving-average part (the last two terms of the equation).
Thanks!
Regards,

Alice 

Daniel Lee

Jul 12, 2016, 10:43:10 AM
to stan-users mailing list
Hi Alice, 

Do you have an actual Stan program that you're trying to fit? It's hard to help when there isn't at least a Stan program and some simulated data.


Daniel



Alice Milivinti

Jul 13, 2016, 6:45:35 AM
to stan-...@googlegroups.com

Hi,
You are right. My first attempt looks like this, but I am sure the syntax is very "dirty":


data {
  int<lower=1> T;        // num observations
  real y[T];             // observed outputs
}
parameters {
  real mu;               // mean coeff
  real phi1;             // autoregression coeff
  real phi2;             // autoregression coeff
  vector[2] theta;       // moving avg coeff
  real<lower=0> sigma;   // noise scale
}
model {
  vector[T] nu;          // prediction for time t
  vector[T] err;         // error for time t

  nu[1] <- mu + phi1 * mu + phi2 * mu;   // assume err[0] == 0
  err[1] <- y[1] - nu[1];
  nu[2] <- mu + phi1 * mu + phi2 * mu;
  err[2] <- y[2] - nu[2] - theta[1] * err[1];
  nu[3] <- mu + phi1 * mu + phi2 * mu;
  err[3] <- y[3] - nu[3] - theta[1] * err[2];
  nu[4] <- mu + phi1 * mu + phi2 * mu;
  err[4] <- y[4] - nu[4] - theta[1] * err[3];
  nu[5] <- mu + phi1 * mu + phi2 * mu;
  err[5] <- y[5] - nu[5] - theta[1] * err[4];
  nu[6] <- mu + phi1 * mu + phi2 * mu;
  err[6] <- y[6] - nu[6] - theta[1] * err[5];
  nu[7] <- mu + phi1 * mu + phi2 * mu;
  err[7] <- y[7] - nu[7] - theta[1] * err[6];
  nu[8] <- mu + phi1 * mu + phi2 * mu;
  err[8] <- y[8] - nu[8] - theta[1] * err[7];
  nu[9] <- mu + phi1 * mu + phi2 * mu;
  err[9] <- y[9] - nu[9] - theta[1] * err[8];
  nu[10] <- mu + phi1 * mu + phi2 * mu;
  err[10] <- y[10] - nu[10] - theta[1] * err[9];
  nu[11] <- mu + phi1 * mu + phi2 * mu;
  err[11] <- y[11] - nu[11] - theta[1] * err[10];
  nu[12] <- mu + phi1 * mu + phi2 * mu;
  err[12] <- y[12] - nu[12] - theta[1] * err[11];
  nu[13] <- mu + phi1 * y[1] + phi2 * mu;
  err[13] <- y[13] - nu[13] - theta[1] * err[12] - theta[2] * err[1];
  nu[14] <- mu + phi1 * y[2] + phi2 * y[1];
  err[14] <- y[14] - nu[14] - theta[1] * err[13] - theta[2] * err[2]
             - theta[1] * theta[2] * err[1];

  for (t in 15:T) {
    nu[t] <- mu + phi1 * y[t-12] + phi2 * y[t-13]
             + theta[1] * err[t-1] + theta[2] * err[t-12]
             + theta[1] * theta[2] * err[t-13];
    err[t] <- y[t] - nu[t];
  }

  mu ~ normal(0, 10);       // priors
  phi1 ~ normal(0, 2);
  phi2 ~ normal(0, 2);
  theta ~ normal(0, 2);
  sigma ~ cauchy(0, 5);
  err ~ normal(0, sigma);   // likelihood
}
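As Daniel suggests above, simulated data makes a program like this testable. A minimal Python sketch of the textbook SARMA(0,1)(1,1)[12] recursion, y[t] = mu + Phi*(y[t-12] - mu) + eps[t] + theta*eps[t-1] + Theta*eps[t-12] + theta*Theta*eps[t-13]; all parameter values here are made up, and pre-sample y falls back to mu with pre-sample errors set to zero:

```python
import random

random.seed(42)

mu, Phi = 5.0, 0.5        # mean and seasonal AR(1) coefficient (made up)
theta, Theta = 0.3, 0.4   # nonseasonal and seasonal MA(1) coefficients (made up)
sigma, T = 1.0, 200

eps = [random.gauss(0.0, sigma) for _ in range(T)]
y = []
for t in range(T):
    y12 = y[t - 12] if t >= 12 else mu        # seasonal AR term
    ma = eps[t]
    if t >= 1:
        ma += theta * eps[t - 1]              # nonseasonal MA term
    if t >= 12:
        ma += Theta * eps[t - 12]             # seasonal MA term
    if t >= 13:
        ma += theta * Theta * eps[t - 13]     # cross term from the product of MA polynomials
    y.append(mu + Phi * (y12 - mu) + ma)

print(len(y))  # T = 200 simulated observations to pass to the Stan program
```

The resulting list y, together with T, matches the data block the program above expects, and fitting against it shows whether the known parameter values are recovered.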




andre.p...@googlemail.com

Jul 13, 2016, 7:07:40 AM
to Stan users mailing list
Alice,

you may ensure the stationarity of your model by constraining your parameters.
There is some math software around that can derive the constraints for you.

Also, your priors are very unspecific, e.g. mu has a large variance.

Andre

Alice Milivinti

Jul 13, 2016, 7:09:53 AM
to Stan users mailing list

Yes, in another version I added the following constraints for the autoregressive terms:

  real<lower=-1,upper=1> phi1;
  real<lower=-1,upper=1> phi2;

Bob Carpenter

Jul 13, 2016, 9:22:54 AM
to stan-...@googlegroups.com
Other than putting what you can into a loop,
there's not a good alternative. You're doing the right
thing in constructing the errors ahead of time.

The alternative is to leave the final error implicit, writing

  y ~ normal(y_hat, sigma);

rather than

  y - y_hat ~ normal(0, sigma);

But you need to do the subtraction thing for all those
other errors unless there's some exponential family magic
that lets you simplify.
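Those two statements contribute the same term to the log density, which is easy to confirm numerically; a quick Python sketch (values made up):

```python
import math

def normal_lpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) evaluated at x
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

y, y_hat, sigma = 2.3, 1.7, 0.8
lp1 = normal_lpdf(y, y_hat, sigma)          # y ~ normal(y_hat, sigma)
lp2 = normal_lpdf(y - y_hat, 0.0, sigma)    # y - y_hat ~ normal(0, sigma)
assert abs(lp1 - lp2) < 1e-12
```

The shift y - y_hat has unit derivative, so no Jacobian adjustment separates the two forms.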

- Bob

andre.p...@googlemail.com

Jul 13, 2016, 9:05:48 PM
to Stan users mailing list
The weak stationarity conditions for an ARMA(2,1) are:

  -1 < phi2 < 1
  -sqrt(1 - 2*phi2 + phi2^2) < phi1 < sqrt(1 - 2*phi2 + phi2^2)

I'm not sure this applies to your model.
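For an AR(2) those bounds are equivalent to requiring both roots of 1 - phi1*z - phi2*z^2 to lie outside the unit circle, which can be checked numerically; a small Python sketch with made-up values:

```python
import cmath

def ar2_stationary(phi1, phi2):
    """Check weak stationarity of an AR(2) via the roots of 1 - phi1*z - phi2*z^2."""
    if phi2 == 0.0:
        return abs(phi1) < 1.0  # degenerates to AR(1)
    # quadratic formula for -phi2*z^2 - phi1*z + 1 = 0
    disc = cmath.sqrt(phi1 ** 2 + 4.0 * phi2)
    r1 = (phi1 + disc) / (-2.0 * phi2)
    r2 = (phi1 - disc) / (-2.0 * phi2)
    return abs(r1) > 1.0 and abs(r2) > 1.0

print(ar2_stationary(0.5, 0.3))  # True: phi1 + phi2 < 1, phi2 - phi1 < 1, |phi2| < 1
print(ar2_stationary(0.8, 0.3))  # False: phi1 + phi2 >= 1
```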

Andre

Alice Milivinti

Jul 15, 2016, 5:35:45 AM
to Stan users mailing list

Thanks everybody for your help.
However, I still have a problem...
For each chain I get the following error; any suggestions?


The following numerical problems occured the indicated number of times after warmup on chain 1

                                                                                                                                 count
Exception thrown at line 67: normal_log: Location parameter is -inf, but must be finite!    18
Exception thrown at line 67: normal_log: Location parameter is inf, but must be finite!     17
When a numerical problem occurs, the Metropolis proposal gets rejected.
However, by design Metropolis proposals sometimes get rejected even when there are no numerical problems.
Thus, if the number in the 'count' column is small, do not ask about this message on stan-users.
Warning messages:
1: There were 500 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help.
2: Examine the pairs() plot to diagnose sampling problems



Bob Carpenter

Jul 16, 2016, 12:49:11 AM
to stan-...@googlegroups.com
Well, you need to look at what's going on at line 67. The
means are running away to +infinity or -infinity. And you're
having general problems following the Hamiltonian (those are
the divergences). You can try increasing adapt_delta, but it
usually won't fix things this extreme. You often need either
stronger priors to keep the values at sane levels, or you need
to reparameterize.

- Bob