change of support / moving average / kernel smoothing


John Haslett

Apr 22, 2014, 8:43:34 AM
to r-inla-disc...@googlegroups.com
Hello everybody

Is the following a straightforward extension?

Underlying Gaussian process Y(s);
Observations z(s_i) being, in the simplest case, an average of Y(s) over a region centred on s_i; that is, the 'support' of each observation is a region.

More generally the observations might be 'kernel-smoothed' versions of Y(s)

John Haslett


Elias T Krainski

Apr 22, 2014, 8:50:24 AM
to r-inla-disc...@googlegroups.com

Finn Lindgren

Apr 22, 2014, 8:53:39 AM
to John haslett, r-inla-disc...@googlegroups.com
On 22 Apr 2014, at 10:43, John haslett <haslet...@gmail.com> wrote:
Is the following a straightforward extension?

Underlying Gaussian process Y(s);
Observations z(s_i) being, in the simplest case, an average of Y(s) over a region centred on s_i; that is, the 'support' of each observation is a region.

Yes, for Gaussian observation likelihoods this is straightforward when using the SPDE models. There is support in inla.spde.make.A() for numerical integration schemes for cases where you don't already have the integral mapping between your observation regions and the triangle mesh (see the "block" parameter and others).
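[A minimal untested sketch of what Finn describes, assuming the R-INLA package is installed. The mesh, the two toy regions, and the crude Monte Carlo integration points are all placeholders; the "block" argument groups integration points into observation regions, and the block.rescale option name is taken from ?inla.spde.make.A.]

```r
library(INLA)

## toy mesh over the unit square
mesh <- inla.mesh.2d(loc.domain = cbind(c(0, 1, 1, 0), c(0, 0, 1, 1)),
                     max.edge = 0.1)

## crude numerical integration: scatter points inside each observation
## region (two toy regions, 25 points each; a real application would use
## a proper quadrature scheme over the region)
pts1  <- cbind(runif(25, 0.1, 0.3), runif(25, 0.1, 0.3))
pts2  <- cbind(runif(25, 0.6, 0.9), runif(25, 0.6, 0.9))
loc   <- rbind(pts1, pts2)
block <- rep(1:2, each = 25)   # which region each integration point belongs to

## one row of A per region; block.rescale = "count" turns the sum over
## integration points into an average of Y(s) over the region
A <- inla.spde.make.A(mesh = mesh, loc = loc,
                      block = block, block.rescale = "count")
dim(A)   # 2 x mesh$n: each row maps the latent field to one regional average
```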

For non-Gaussian observations it is not straightforward, unless the regional aggregation happens on the linear-predictor scale.

More generally the observations might be 'kernel-smoothed' versions of Y(s)

For compactly supported kernels that is also fine. For non-compactly supported kernels you will break the sparsity of the posterior precision, so in that case you'd be limited to small models.
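[A small self-contained illustration of the sparsity point, using only the Matrix package; the 1-D locations, kernels, and bandwidth are made up for the example.]

```r
library(Matrix)

x  <- seq(0, 1, length.out = 200)      # latent field locations (1-D toy)
si <- seq(0.1, 0.9, length.out = 20)   # observation centres
d  <- abs(outer(si, x, "-"))           # distance from each centre to each node
h  <- 0.05                             # kernel bandwidth (arbitrary)

## compactly supported (Epanechnikov-type) kernel: most entries are
## exactly zero, so the observation matrix A stays sparse
A_compact <- Matrix(pmax(1 - (d / h)^2, 0), sparse = TRUE)

## Gaussian kernel: entries decay but never vanish, so A is effectively
## dense and the posterior precision loses its sparsity
A_gauss <- Matrix(exp(-0.5 * (d / h)^2), sparse = TRUE)

c(nnzero(A_compact), nnzero(A_gauss)) / length(d)   # fill-in fractions
```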

Finn

INLA help

Apr 22, 2014, 8:55:43 AM
to John haslett, r-inla-disc...@googlegroups.com
Yes, that is OK; how to actually implement this depends on the model.

There is a simple example in the lecture notes

www.math.ntnu.no/~hrue/INLA/lectures-part2.pdf

at the end; look for
'A-matrix in the linear predictor'

This is what you're looking for, right?
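[An untested sketch of the 'A-matrix in the linear predictor' mechanism the lecture notes describe, i.e. observing A %*% eta rather than eta itself; assumes R-INLA is installed, and the rw1 model, the adjacent-pair averaging matrix, and the simulated y are toy placeholders.]

```r
library(INLA)

n   <- 50
idx <- 1:n

## A averages each pair of adjacent latent values: a crude moving average,
## so each observation has a two-point 'support' on the latent scale
A <- sparseMatrix(i = rep(1:(n - 1), 2),
                  j = c(1:(n - 1), 2:n),
                  x = 0.5)

y <- rnorm(n - 1)   # toy observations on the averaged scale

## the A matrix enters via control.predictor: eta* = A %*% eta
r <- inla(y ~ -1 + f(idx, model = "rw1"),
          data = list(y = y, idx = idx),
          control.predictor = list(A = A))
```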

H

--
Håvard Rue
he...@r-inla.org

Finn Lindgren

Apr 22, 2014, 8:56:15 AM
to Elias T Krainski, r-inla-disc...@googlegroups.com
On 22 Apr 2014, at 10:50, Elias T Krainski <eliask...@gmail.com> wrote:

Yes, that is a non-Gaussian observation likelihood.

Finn