Has anyone created fixed kernel home ranges using Kie's (2013) ad hoc method in R??


Anna

Nov 5, 2015, 10:07:53 AM
to GIS In Ecology Forum
Hi all

I do most of my analyses in R, though I am far from a wizard at it. I wanted to use the ad hoc method of calculating h for fixed kernel home ranges, but as far as I can tell the only program that supports this is Home Range Tools for ArcGIS. I don't have access to ArcGIS anymore, so I was wondering if anyone has automated this, at least somewhat, in R?

The adehabitatHR package has some good tools, but it doesn't have Kie's ad hoc option for estimating h. I have quite a few individuals, and while I could script it out manually (and very clumsily), I thought I would ask whether anyone else has attempted it first.

The text on the ad hoc method from Kie 2013:

" Therefore, the reference bandwidth

(href ) was sequentially reduced in 0.10 increments (0.9 href,

0.8 href, 0.7 href, …0.1 href ). This rule-based had hoc was the

smallest increment of href that: 1) resulted in a contiguous

rather than disjoint 95% kernel home-range polygon, and

2) contained no lacuna within the home range."
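
To be concrete, here is roughly what I had in mind scripting (untested and probably clumsy; it assumes relocs is a SpatialPoints object for a single animal, so kernelUD returns a single estUD):

library(adehabitatHR)   # kernelUD, getverticeshr
library(sp)

## relocs: SpatialPoints with one animal's fixes (assumed name)
ud_ref <- kernelUD(relocs, h = "href")
href   <- ud_ref@h$h                     # reference bandwidth

## step h down from href in 0.1*href increments, as in Kie (2013)
for (f in seq(1.0, 0.1, by = -0.1)) {
  ud    <- kernelUD(relocs, h = f * href)
  ver   <- getverticeshr(ud, percent = 95)   # 95% home-range polygon(s)
  rings <- ver@polygons[[1]]@Polygons        # individual rings of the feature
  holes <- sapply(rings, function(p) p@hole)
  cat(sprintf("%.1f * href: %d outer polygon(s), %d hole(s)\n",
              f, sum(!holes), sum(holes)))
}
## h ad hoc would then be the smallest multiple of href that still gives
## exactly one outer polygon and no holes (kernelUD's grid/extent arguments
## may need enlarging for the larger bandwidths)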


Thanks

Anna

GIS in Ecology

Nov 6, 2015, 6:49:21 AM
to GIS In Ecology Forum
Hi Anna,
 
I've not tried this in R, but the main issue I see with writing an R script for it is that it will be difficult to get R to identify when it has created a single polygon for the 95% PVC with no islands in it. The best way might be to use something like adehabitat to generate your 95% PVC from your kernel density estimate for any given h value, and then visualise the resulting polygon in a free GIS package like QGIS to assess whether it constitutes a single polygon or not (you could even visualise it in something like Google Earth if you wanted). If you use a standard file name and overwrite the old polygon file each time you run the R code, you could produce a semi-automated way of generating your estimated home range and assessing the effect of varying the value of h in line with Kie (2013), because the polygon being displayed will automatically update when the file is overwritten.
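
As a rough (untested) sketch of that write-out step, assuming your relocations are in a SpatialPoints object called relocs and you have rgdal installed for the shapefile export, something like this could regenerate and overwrite a fixed-name polygon file each time you try a new h:

library(adehabitatHR)   # kernelUD, getverticeshr
library(rgdal)          # writeOGR for the shapefile export

## hypothetical helper: fit the kernel with a given h, cut the 95% PVC and
## overwrite a standard file name so the GIS always shows the latest version
export_pvc <- function(relocs, h, out_dir = ".", layer = "pvc95") {
  ud  <- kernelUD(relocs, h = h)
  ver <- getverticeshr(ud, percent = 95)
  writeOGR(ver, dsn = out_dir, layer = layer,
           driver = "ESRI Shapefile", overwrite_layer = TRUE)
  invisible(ver)
}

## e.g. export_pvc(relocs, h = 0.8 * href), then refresh pvc95.shp in QGIS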
 
I hope this is of some help, and I'm sure there are others out there who can provide more advice specific to doing this in R. If so, hopefully they will contribute to this thread.
 
All the best,
 
Colin

Jeffrey Evans

Nov 9, 2015, 12:54:05 PM
to GIS In Ecology Forum
You may also want to take a look at the OpenJUMP HoRAE toolbox. It looks like it has the Kie (2013) ad hoc method available.

I would also point out that the kernelUD function in adehabitat defaults to an ad hoc method, though likely not the Kie (2013) method. Have you looked at the output from this function? From the kernelUD help: "The default method for the estimation of the smoothing parameter is the ad hoc method, i.e. for a bivariate normal kernel h = Sigma*n^(-1/6) which supposes that the UD is bivariate normal."
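
For example (untested, and assuming xy is a SpatialPoints object for a single animal), you can inspect what kernelUD actually used:

library(adehabitatHR)

## xy: SpatialPoints of one animal's relocations (assumed name)
ud <- kernelUD(xy)   # default: the ad hoc ("href") smoothing parameter
ud@h                 # the value of h and the method used to obtain it
image(ud)            # quick look at the estimated utilization distribution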

Statistically, my preference is the LoCoH method, because it is based on a kNN approach which, with telemetry data, seems most sensible for producing consistent results across inconsistent data. Without scaling, traditional Gaussian KDE approaches can vary significantly across individual spatial intensities, whereas kNN weights provide a measure of consistency in the spatial process given utilization by individual animals.
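
In adehabitatHR this is only a couple of lines (again an untested sketch; xy is a SpatialPoints object for one animal, and k needs tuning for your data):

library(adehabitatHR)

## xy: SpatialPoints of one animal's relocations (assumed name)
hr <- LoCoH.k(xy, k = 10)        # k nearest-neighbour local convex hulls
plot(hr)                         # home-range isopleths

## home-range area as a function of k, to help choose a sensible k
LoCoH.k.area(xy, krange = 5:15)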

-Jeff

Jeffrey Evans

Nov 9, 2015, 12:54:05 PM
to GIS In Ecology Forum
I just read the Kie (2013) paper and there are many assertions in it that are simply not supported in the spatial statistics or resource selection literature. A more tractable approach would be to choose the order of variation you would like to represent in the spatial process and apply one of the published bandwidth selection models. Here is a worked example in R along with a few first- and second-order bandwidth methods to test. Since spatial smoothing does not necessarily represent anything ecologically meaningful, any optimization aimed at spatial continuity should be based on the Nth-order variation observed in the spatial process. This is just an alternative approach to think about, but it is rooted in the spatial statistics literature. Please note that the density estimate used in the example is the isotropic density, which represents the expected intensity function, not the traditional Gaussian kernel density estimate.

########## START CODE ##########
#  Bandwidth (sigma) selection methods
#   scott (Scott 1992), Scott's Rule for Bandwidth Selection (1st order)
#   diggle (Berman & Diggle 1989), minimize the mean-square error via cross-validation (2nd order)
#   likelihood (Loader 1999), maximum likelihood cross-validation (2nd order)
#   geometry, bandwidth based on simple window geometry (1st order)
#   stoyan (Stoyan & Stoyan 1995), based on the pair-correlation function (strong 2nd order)

# Add required libraries
library(spatstat)
library(raster)
data(bei)

#### bandwidth (sigma) functions

# Scott's rule: per-axis standard deviation scaled by n^(-1/6)
bw.scott <- function(X) {
  stopifnot(spatstat::is.ppp(X))
  n <- spatstat::npoints(X)
  sdx <- sqrt(stats::var(X$x))
  sdy <- sqrt(stats::var(X$y))
  return(c(sdx, sdy) * n^(-1/6))
}

# Stoyan & Stoyan's rule of thumb based on the pair-correlation function
bw.stoyan <- function(X, co = 0.15) {
  stopifnot(spatstat::is.ppp(X))
  n <- spatstat::npoints(X)
  W <- spatstat::as.owin(X)
  a <- spatstat::area.owin(W)
  stoyan <- co / sqrt(5 * n / a)
  return(stoyan)
}

# Window-geometry bandwidth: the distance r at which the CDF of inter-point
# distance within the window reaches the fraction f
bw.geometry <- function(X, f = 1/4) {
  X <- spatstat::as.owin(X)
  g <- spatstat::distcdf(X)
  r <- with(g, .x)
  Fr <- with(g, .y)
  iopt <- min(which(Fr >= f))
  return(r[iopt])
}

# Likelihood (leave-one-out) cross-validation over a log-spaced grid of sigma
bw.likelihood <- function(X, srange = NULL, ns = 16) {
  stopifnot(spatstat::is.ppp(X))
  if (!is.null(srange)) {
    spatstat::check.range(srange)
  } else {
    nnd <- spatstat::nndist(X)
    srange <- c(min(nnd[nnd > 0]), spatstat::diameter(spatstat::as.owin(X)) / 2)
  }
  sigma <- exp(seq(log(srange[1]), log(srange[2]), length = ns))
  cv <- numeric(ns)
  for (i in 1:ns) {
    si <- sigma[i]
    lamx <- spatstat::density.ppp(X, sigma = si, at = "points", leaveoneout = TRUE)
    lam <- spatstat::density.ppp(X, sigma = si)
    cv[i] <- sum(log(lamx)) - spatstat::integral.im(lam)
  }
  result <- spatstat::bw.optim(cv, sigma, iopt = which.max(cv),
                               criterion = "Likelihood Cross-Validation")
  return(result)
}

# Create 1st and 2nd order isotropic density estimates and plot
d <- stack(raster(spatstat::density.ppp(bei, sigma = bw.diggle(bei))))
d <- addLayer(d, raster(spatstat::density.ppp(bei, sigma = bw.stoyan(bei))))
d <- addLayer(d, raster(spatstat::density.ppp(bei, sigma = bw.scott(bei))))
d <- addLayer(d, raster(spatstat::density.ppp(bei, sigma = bw.likelihood(bei))))
d <- addLayer(d, raster(spatstat::density.ppp(bei, sigma = bw.geometry(bei))))
names(d) <- c("Diggle", "Stoyan", "Scott", "Likelihood", "Geometry")
plot(d)
########## END CODE ##########

References

Berman, M. and Diggle, P. (1989) Estimating weighted integrals of the second-order intensity of a spatial point process. Journal of the Royal Statistical Society, series B 51, 81-92.

Loader, C. (1999) Local Regression and Likelihood. Springer, New York.

Scott, D.W. (1992) Multivariate Density Estimation. Theory, Practice and Visualization. New York, Wiley.

Stoyan, D. and Stoyan, H. (1995) Fractals, random shapes and point fields: methods of geometrical statistics. John Wiley and Sons.


Best,

Jeff

