I’m trying to fit spatial models to several 3-year repeated-measures data sets of sapling survival (Bernoulli GLMMs) and growth (Gamma GLMMs). There are c. 250 observations per year. Year is included as a fixed effect (along with other covariates), and a random intercept (1|Individual) addresses the repeated measures.
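For concreteness, the non-spatial baseline for the survival data looks roughly like this in R-INLA (data and covariate names are placeholders, not my actual variables):

```r
library(INLA)

## Non-spatial baseline: Bernoulli survival GLMM with Year (a factor) and
## other covariates as fixed effects, plus an individual-level random
## intercept for the repeated measures. 'dat', 'cov1', 'cov2' are placeholders.
base_fit <- inla(
  survived ~ year + cov1 + cov2 + f(individual, model = "iid"),
  family = "binomial",  # Ntrials defaults to 1, i.e. Bernoulli
  data = dat,
  control.compute = list(waic = TRUE, cpo = TRUE)
)
```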
The study areas are c. 60 x 270 m. Variograms were uninformative for diagnosing the spatial range, but values of ~10-50 m seem plausible.
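(The variograms were computed roughly along these lines; a minimal sketch assuming gstat and a hypothetical per-year data frame `resid_df` with columns x, y and resid holding coordinates and baseline-model residuals:)

```r
library(sp)
library(gstat)

## Empirical variogram of residuals for one year; cutoff set to roughly
## half the long (270 m) axis of the study area.
coordinates(resid_df) <- ~ x + y
vg <- variogram(resid ~ 1, resid_df, cutoff = 135)
plot(vg)
```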
Given the uncertainty, I built three meshes tuned to ranges of 10-50 m (fine), 50-100 m (medium), and 100-150 m (coarse), and ran a sensitivity analysis over the meshes crossed with grids of range priors (fine mesh: 10, 30, 50 m; medium mesh: 50, 75, 100 m; coarse mesh: 100, 125, 150 m) and sigma priors (log(2), log(3)), along with different spatial random field (SRF) structures (a single field vs. replicated or exchangeable fields grouped by Year); one cell of this grid is sketched below. I compared fixed-effect stability, WAIC, LCPO and peff across the mesh/prior combinations and against a baseline GLMM without an SRF.
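To make the setup concrete, here is a minimal sketch of one cell of the grid (fine mesh, median prior range 30 m, exchangeable field grouped by Year); the mesh parameters and object names are illustrative assumptions, not my exact values:

```r
## Fine mesh; edge/offset/cutoff values are illustrative only.
mesh <- inla.mesh.2d(
  loc = cbind(dat$x, dat$y),
  max.edge = c(5, 20),   # inner/outer triangle edge lengths (m)
  offset = c(10, 40),
  cutoff = 2
)

## PC priors: P(range < 30 m) = 0.5; P(sigma > log(2)) = 0.05.
spde <- inla.spde2.pcmatern(
  mesh,
  prior.range = c(30, 0.5),
  prior.sigma = c(log(2), 0.05)
)

## Year-grouped field: one index per mesh node per year (3 years).
iset <- inla.spde.make.index("s", n.spde = spde$n.spde, n.group = 3)
A <- inla.spde.make.A(mesh, loc = cbind(dat$x, dat$y),
                      group = as.integer(dat$year))  # year as factor 1..3

stk <- inla.stack(
  data = list(y = dat$survived),
  A = list(A, 1),
  effects = list(iset,
                 data.frame(intercept = 1, year = dat$year,
                            cov1 = dat$cov1, individual = dat$individual)),
  tag = "est"
)

## Exchangeable fields across years; for independent replicates instead,
## build the index with n.repl and use 'replicate = s.repl' in f().
fit <- inla(
  y ~ 0 + intercept + year + cov1 + f(individual, model = "iid") +
    f(s, model = spde, group = s.group,
      control.group = list(model = "exchangeable")),
  family = "binomial",
  data = inla.stack.data(stk),
  control.predictor = list(A = inla.stack.A(stk)),
  control.compute = list(waic = TRUE, cpo = TRUE)
)
```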
In several cases, models with replicated or exchangeable SRFs improved WAIC by ~15-30 relative to both the non-spatial baseline and the single-SRF models; however, the posterior range was strongly prior-driven and poorly identified (i.e., it increased with the prior and spanned a substantial portion of, or exceeded, one or both dimensions of the study area).
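The prior sensitivity of the range was read off the hyperparameter summaries, and the fit metrics were tabulated across models, along these lines (field named "s" as above; `fits` is a hypothetical list of fitted models):

```r
## Posterior range and SD of the field; with inla.spde2.pcmatern these
## appear as "Range for s" / "Stdev for s" in the hyperparameter summary.
fit$summary.hyperpar[c("Range for s", "Stdev for s"),
                     c("0.025quant", "0.5quant", "0.975quant")]

## WAIC, effective number of parameters (peff), and mean negative
## log-CPO (LCPO) for each candidate model.
t(sapply(fits, function(m)
  c(WAIC = m$waic$waic,
    peff = m$waic$p.eff,
    LCPO = -mean(log(m$cpo$cpo), na.rm = TRUE))))
```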
In such cases, is the correct interpretation that the replicated/exchangeable SRF isn’t modelling true spatial autocorrelation, but is instead capturing broad-scale, temporally structured residual variation, and that this is what improves prediction?
If so, is a spatial GLMM still an appropriate way to model these data, given the very short time series?
Thanks,
Berin