Interpretation of fitted values LGCP


CB

Apr 24, 2026, 12:03:34 AM
to R-inla discussion group
Hello,

I just wanted to clarify my understanding of the interpretation of fitted values for a spatial LGCP model using the SPDE approach. This is a pretty basic question, but I want to make sure I have the details straight so that I can give clear interpretations of mapped model output to non-technical audiences.

- Should the fitted values (after exponentiating) be interpreted as the spatial intensity in terms of the coordinate units used? (For example, if the coordinates for my data are in kilometers, would the fitted values by default represent 'number of points per square kilometer' at each location?)

- If including a prediction stack and predicting onto a grid with a coarser resolution (for example, grid cells of 5 sq km), is the interpretation still in terms of sq km (i.e. number of points per sq km, but shown over a 5 sq km area)? Would you need to multiply by the cell area to get the number of points per 5 sq km cell? (Similar to what you'd do with a gridded LGCP approach?)

Thank you.

Finn Lindgren

Apr 24, 2026, 4:45:35 AM
to CB, R-inla discussion group
Hi,

yes, if the model is set up correctly, the intensity estimates are in points per squared length unit, for whatever length units the mesh is defined in. So if you have a projection CRS with units=km, the result is in "per sq km".

The tricky part can be line transect sampling, where one needs to ensure that the transect width is handled correctly. In our examples, we use an explicit cross-transect distance dimension, which takes care of that in the likelihood construction.
For ordinary "fully observed over a polygon" observations, no special handling is needed; a minimal sketch of that case is below.
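For the fully observed case, a minimal inlabru sketch could look like the following (object names are assumed for illustration: 'pts' for the observed sf points and 'bnd' for the study region polygon, both in a projected CRS with units = km):

library(INLA)
library(inlabru)
library(fmesher)

# Mesh and PC-Matern prior, with all distances in km
mesh <- fm_mesh_2d_inla(boundary = bnd, max.edge = c(2, 10))
matern <- inla.spde2.pcmatern(
  mesh,
  prior.range = c(10, 0.5),  # P(range < 10 km) = 0.5
  prior.sigma = c(1, 0.01)   # P(sigma > 1) = 0.01
)

# LGCP with the points fully observed over the polygon 'bnd'
fit <- lgcp(
  components = geometry ~ field(geometry, model = matern) + Intercept(1),
  data = pts,
  samplers = bnd,
  domain = list(geometry = mesh)
)

# exp(field + Intercept) is then the intensity, in points per sq km.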

For your prediction question, it depends on how you set it up: if you just evaluate the intensity at grid points, the units are unchanged. There is no automatic aggregation conversion; the plain code treats grid cells as points rather than as cells. Multiplying by the cell area is a basic approximation of actual aggregation, and as long as the intensity is smooth within each cell it's an acceptable approximation.
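As a sketch (reusing the assumed 'fit' from above, with 'grid' holding the sf cell-centre points in the same km CRS):

# Pointwise posterior intensity at the cell centres, in points per sq km
pred <- predict(fit, grid, ~ exp(field + Intercept))

# Basic approximation of the expected count in each 5 sq km cell:
# intensity at the centre times the cell area
pred$count_approx <- pred$mean * 5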
With inlabru predict() calls, it is more feasible to do accurate "integrate over cells" prediction, using finer integration schemes via fm_int(fm_subdivide(mesh, ...), ...). This is needed in cases where covariates have a finer resolution than the mesh.
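For example (again a sketch with assumed names; 'cells' is an sf polygon grid, and the weight and .block columns follow the fm_int() output conventions):

# Integration points on a subdivided mesh, grouped by cell
ips <- fm_int(fm_subdivide(mesh, n = 2), samplers = cells)

# Posterior expected count per cell: sum of weight * intensity
# within each cell (cells indexed by the .block column)
cell_counts <- predict(
  fit, ips,
  ~ tapply(weight * exp(field + Intercept), .block, sum)
)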

I would strongly encourage you to use the inlabru interface instead of plain inla and prediction stacks. See the several point process examples at https://inlabru-org.github.io/inlabru/, which include e.g. how to do posterior evaluation of total region counts.
Your code will not only become much shorter and easier to maintain, but will also benefit from the improvements we've made over the years in how point process data is handled.
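For instance, the posterior for the expected total count over the whole region follows the pattern from those examples (with the same assumed names as above):

# Expected total count: integrate the intensity over the region
Lambda <- predict(
  fit,
  fm_int(mesh, bnd),
  ~ sum(weight * exp(field + Intercept))
)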

Finn



CB

Apr 26, 2026, 8:30:31 PM
to R-inla discussion group
Thank you very much for your helpful clarification and advice; it is much appreciated!