On Thu, Jul 24, 2014 at 5:13 AM, Finn Lindgren <
f.lin...@bath.ac.uk>
wrote:
>>> obtain the
>>> posterior predictive distribution of the observation model,
>> exactly, I need the PPD, more specifically the p(y_{iT} |
>> y_{i1},...,y_{i,T-1}), marginalizing out the parameter and
>> hyperparameter.
>
> OK, then the sampling method should work; just add the generalised
> linear model part for y_{iT} to the model, with NA "data", to get the
> posterior for the (transformed) linear predictor, \eta_{iT} in
> "inla-speak", so that inla.posterior.sample gives you samples of
> that, and then sample Poisson variables based on that. You may need
> to specify link=1 to get the transformed predictor (search the FAQ
> for an example), but I'm not sure whether that affects the sampler.
I've read some code and obtained the PPD like this:
- Extract a subset of the data up to time point T and set Y_T <- NA
- Fit a Poisson model with `control.predictor = list(compute = TRUE)`
- Transform the fitted marginal for time point *T*:
    Mtrans <- inla.tmarginal(function(x) exp(x),
                             fit$marginals.fitted.values[[T]])
- Sample from the transformed marginal:
    Msamp <- inla.rmarginal(n = 100, Mtrans)
- Sample again from the Poisson to obtain the PPD for time point *T*:
    newY <- rpois(n = 100, lambda = E * Msamp)  # as I have the population at risk E
The procedure described above runs within a loop over the whole range of
time indices. Is this somehow equivalent to what you described?
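[The steps above, wrapped into a single loop, might look roughly like the following; `dat`, its columns `y`, `E`, `time`, and the intercept-plus-trend formula are illustrative assumptions, not the poster's actual model:]

```r
library(INLA)

## One-step-ahead PPD samples for each time point T, following the
## steps listed above (hold out y_T as NA, refit, transform, sample).
ppd_samples <- lapply(seq_along(unique(dat$time))[-1], function(T) {
  d <- dat[dat$time <= T, ]              # data up to time point T
  idx <- which(d$time == T)
  d$y[idx] <- NA                         # set Y_T <- NA
  fit <- inla(y ~ 1 + time, family = "poisson", data = d, E = d$E,
              control.predictor = list(compute = TRUE))
  ## transform the fitted marginal for T onto the rate scale
  Mtrans <- inla.tmarginal(function(x) exp(x),
                           fit$marginals.fitted.values[[idx]])
  Msamp <- inla.rmarginal(n = 100, Mtrans)
  ## Poisson draws give PPD samples for y_T, with population at risk E
  rpois(n = 100, lambda = d$E[idx] * Msamp)
})
```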
> If the expected Poisson counts are large enough, it's possible to
> derive approximate posterior distributions without sampling (using
> various Normal approximation properties), but to my knowledge no one
> has done that with inla yet.
>
> Finn