Hi,
I have a similar question about predictive distributions/sampling.
First off, a more general question: in a setting where I have a Gaussian/T likelihood,
my understanding is that I can access the estimated precision parameter using:
samples = inla.posterior.sample(n, model, num.threads= <>)
Then this gives me the estimated precision:
samples[[i]]$hyperpar[1]
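For reference, here is a minimal sketch of what I am doing (assuming a Gaussian likelihood and that `model` was fitted with `control.compute = list(config = TRUE)`, which inla.posterior.sample() requires; `model` and `n` are placeholders):

```r
library(INLA)

# Hypothetical sketch: draw posterior samples and collect the
# likelihood precision from each one. Assumes the model was fit
# with control.compute = list(config = TRUE).
n <- 1000
samples <- inla.posterior.sample(n, model)

# This gives one precision value per posterior sample --
# the same value applies to every observation within a sample.
precisions <- sapply(samples, function(s) s$hyperpar[1])
```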
I was hoping the posterior predictive distributions would have different variances for different observations, but it seems that I get the same precision for every observation.
It would be desirable to have different distributions/variances; is there a way to implement this? Or am I misunderstanding, and the precisions are not actually the same across observations?
Finn, you mention that you use posterior sampling. I am still not quite sure how to go about getting the posterior predictive distribution.
Is it correct to say that functions like smarginal(), dmarginal(), etc. do not reflect the true predictive distribution, but rather the distribution of the linear predictor?
I have read on this site and in Julian Faraway's book about using an extra random effect to estimate these distributions; is that the preferred method?
Or it seems there is an alternative method - what do you mean by using the average of samples of dnorm()?
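To make sure I'm interpreting that correctly, is the idea something like the following sketch - averaging the Gaussian density over posterior samples of the linear predictor and the precision? (This is just my guess at what you mean; `model`, the observation index `i`, and the grid `y.grid` are placeholders, and it again assumes `control.compute = list(config = TRUE)`.)

```r
library(INLA)

nsamp  <- 1000
samples <- inla.posterior.sample(nsamp, model)

y.grid <- seq(-5, 5, length.out = 200)  # placeholder grid of y values
i      <- 1                             # placeholder observation index

# For each posterior sample, evaluate the Gaussian likelihood at y.grid
# using that sample's linear predictor and precision, then average
# across samples to approximate the posterior predictive density.
pred.dens <- rowMeans(sapply(samples, function(s) {
  eta <- s$latent[paste0("Predictor:", i), 1]  # sampled linear predictor
  tau <- s$hyperpar[1]                         # sampled precision
  dnorm(y.grid, mean = eta, sd = 1 / sqrt(tau))
}))
```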
That was a lot of questions - let me know your thoughts!
Thanks,
Kyle