Hi all,
I've run into a recurring issue that has left me stumped - I need your help!
I've been fitting occupancy models and then using the predict() function with 600 randomly generated points and the corresponding occurrence covariate data used in my models.
I have done this for multiple seasons with single-season models, and am now trying a multi-season model.
I have around 35 camera trap sites as input data, with the occurrence covariate info extracted from my environmental rasters. I have then been using the predict() function because I want to generate a map of what the modelled occupancy for a given season looks like across our camera trap grids. To avoid patchiness in the raster from using only the 25 camera trap locations, I randomly generated 600 points within the grid and extracted the occurrence covariate info for each of these points; this is what I supply as the x.0 input to predict().

The prediction occurrence covariates fed into predict() are exactly the same for each season; the only difference is the occupancy model itself, which has somewhat different camera trap sites (not all cameras are present in every dataset - some didn't work, etc.), different detection histories, and so on. A simplified sketch of the workflow is below.
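For reference, my workflow is roughly along these lines. This is a stripped-down sketch with simulated data standing in for my real inputs, and I'm writing it with spOccupancy-style calls (PGOcc() and its predict() method, which takes an X.0 design matrix) - treat the package choice and the covariate names as placeholders:

library(spOccupancy)

set.seed(42)
n.sites <- 35   # camera trap sites
n.occ   <- 8    # survey occasions in the season

# Simulated stand-ins for the real site covariates (extracted from rasters)
occ.covs <- data.frame(elev = rnorm(n.sites), ndvi = rnorm(n.sites))
psi.true <- plogis(0.2 + 0.8 * occ.covs$elev - 0.5 * occ.covs$ndvi)
z <- rbinom(n.sites, 1, psi.true)
y <- matrix(rbinom(n.sites * n.occ, 1, z * 0.4), n.sites, n.occ)

fit <- PGOcc(occ.formula = ~ elev + ndvi,
             det.formula = ~ 1,
             data = list(y = y, occ.covs = occ.covs),
             n.samples = 2000, n.burn = 500, n.thin = 2,
             n.chains = 1, verbose = FALSE)

# 600 random points within the grid, covariates extracted from the same
# rasters; X.0 needs an intercept column and the same column order as
# the occupancy formula
pred.covs <- data.frame(elev = rnorm(600), ndvi = rnorm(600))
X.0 <- cbind(1, pred.covs$elev, pred.covs$ndvi)

pred <- predict(fit, X.0)
psi.hat <- apply(pred$psi.0.samples, 2, mean)  # predicted occupancy per point

The 600 psi.hat values are what I then rasterise to make the season's occupancy map.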
My issue is that when a season's modelled occupancy is the lowest (e.g. 0.47 for Spring 2024, compared with 0.65 for Summer 2024), the prediction raster I get from predict() for that season indicates the highest occupancy (e.g. 0.8), and the seasons that were modelled as having higher average occupancy come out with lower predicted occupancy. This has now happened with 2 separate camera trap grids, and I am totally stumped.
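To be concrete about the numbers I'm comparing (continuing the hedged sketch above - fit and pred are the fitted model and prediction objects, and the element names are spOccupancy's):

# Season-level modelled occupancy: posterior mean of occupancy
# probability across the fitted camera trap sites
mean(fit$psi.samples)       # e.g. ~0.47 for Spring 2024

# Mean of the prediction raster: posterior mean across the 600 points
mean(pred$psi.0.samples)    # comes out high, e.g. ~0.8, for the same season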
Why might this be? Has anyone else had this issue?
I am happy to provide my full code if needed - just let me know specifically what you would like to see.
All the best,
Jamie