Hi Chris,
I’m a master’s student hoping to use ctmm for home-range analysis in my thesis project. I am interested in comparing the foraging home-range sizes of several (14) birds in different locations. The data were collected using an “encounternet” system, in which an array of radio receivers tracks the tag signal on a bird and each location fix is then calculated by an algorithm afterwards. My data are finer-scale (5-second interval) than the examples I’ve seen, so I’m wondering whether the package is still appropriate for my data. Any guidance you could provide would be greatly appreciated. I have one additional question from what I’ve attempted so far:
Thank you for your time!
Cheers,
Jamie
Hi Chris,
Thank you for taking the time to respond to my questions. I tried to implement some of your suggestions and have a couple follow-up questions:
> summary(fitted.mods)
                         dAICc  DOF[mean]
OU anisotropic error   0.00000   37.88536
OU isotropic error    95.55237   37.93349
OUF isotropic error   97.56740   37.74227
> summary(OU)
$DOF
mean area
37.885361 2.540275
$CI
low ML high
area (hectares) 0.3198444 1.885281 4.811067
tau position (minutes) 0.0000000 3.190898 35.127284
error (meters) 11.2630561 15.289443 19.308351
I then took the ML error value from the summary and re-ran ctmm.select with GUESS$error = 15. When I try to plot the resulting OU model, I get some errors and an odd-looking plot. However, my akde model runs with no problem and seems to give a reasonable result:
Warning messages:
1: In stats::qchisq(Alpha/2, k, lower.tail = TRUE) : NaNs produced
2: In stats::qchisq(Alpha/2, k, lower.tail = TRUE) : NaNs produced
3: In stats::qchisq(Alpha/2, k, lower.tail = TRUE) : NaNs produced
I’m wondering whether my approach here is correct, or whether I’ve done something wrong that is causing the plotting errors. I suspect the problem is a relatively high telemetry error for such a small area, with no calibrated per-fix error estimates to correct with.
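For reference, here is roughly what I ran (a sketch; object names are from my earlier code, and the 15 m value is the ML error from summary(OU)):

```r
# Sketch of the refit: plug the ML error estimate back in as the
# initial guess and refit (telemetry1 and vg.1 as defined earlier).
library(ctmm)

GUESS <- variogram.fit(vg.1, interactive = FALSE)  # initial parameter guess
GUESS$error <- 15                                  # ML error (meters) from summary(OU)

fitted.mods <- ctmm.select(telemetry1, CTMM = GUESS, verbose = TRUE)
OU <- fitted.mods[[1]]                             # top-ranked model

UD <- akde(telemetry1, CTMM = OU)                  # home-range estimate
plot(telemetry1, UD = UD)
```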
Cheers,
Jamie
Hey Chris,
I think the akde model I ran before must not have taken the properly fitted OU model into account, because the CIs on the range estimate were so small. It also returns an error when I try summary(OU). This is the same bird on a different day, with res=5 and error=18.6, which did not return an error message and seems to have more appropriate CIs:
So it appears I’m getting errors on only some subsets of the data, even with the same res and error values. Here are my code and the error message, in case I’m doing something wrong there. I will also try reinstalling the package, in case it’s something you’ve fixed that isn’t in my version yet. For the time being, I think fitting the OU model without errors is at least better than not correcting for autocorrelation at all. Hopefully my explanation makes sense; thanks again for taking the time to look at this.
> telemetry1 <- as.telemetry(data1, timeformat = "%Y-%m-%d %H:%M:%S",
+ timezone = "CEST", projection = CRS("+proj=utm +zone=31 +datum=WGS84"))
Maximum speed of 15.2 m/s observed in 011017707C
Minimum sampling interval of 5 seconds in 011017707C
> vg.1<-variogram(telemetry1, res=5)
> GUESS <- variogram.fit(vg.1, interactive = FALSE)
> GUESS$error=18.6
> fitted.mods<-ctmm.select(telemetry1,CTMM=GUESS, verbose=TRUE,level=1)
Error in if (Q <= 0) { : missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In sqrt(COV) : NaNs produced
2: In sqrt(COV/tau^4) : NaNs produced
3: In min(x) : no non-missing arguments to min; returning Inf
4: In max(x) : no non-missing arguments to max; returning -Inf
5: In sqrt(CTMM$COV[Q, Q]) : NaNs produced
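In case it matters, the fallback fit without the error term that I mentioned would look something like this (a sketch, with the same objects as above):

```r
# Fallback sketch: fit with the error term disabled, so at least the
# autocorrelation is modeled (telemetry1 and vg.1 as defined above).
library(ctmm)

GUESS <- variogram.fit(vg.1, interactive = FALSE)
GUESS$error <- FALSE   # disable the telemetry-error term entirely

fitted.noerr <- ctmm.select(telemetry1, CTMM = GUESS, verbose = TRUE)
summary(fitted.noerr)
```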
Cheers,
Jamie
Hey Chris,
Thanks again! I was experimenting with fitting different models yesterday and was able to set error=TRUE, but today I get the following warning:
Warning message:
In cov.loglike(hess, grad) : MLE is near a boundary or optim failed.
It seems to still work when I assign a fixed error value of 9.68 (which I got from my calibration data and the uere function). The fit still seems good, but I had originally wanted to set error=TRUE because the error variation between my datasets seems high (different “nugget effects”). But maybe it is better to just use the constant value. Also, in terms of the projection, would a local projection such as Amersfoort New provide more reliable results than UTM?
+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel +towgs84=565.4171,50.3319,465.5524,1.9342,-1.6677,9.1019,4.0725 +units=m +no_defs
I haven’t worked with a custom projection before, and I’d like to keep the same projection for all my datasets.
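If so, I guess I would pass the same PROJ string to every as.telemetry call, something like this (untested sketch, using the string above):

```r
library(ctmm)

# Amersfoort / RD New (oblique stereographic) PROJ string from above,
# reused for every dataset so all fits share one projection.
rdnew <- paste("+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889",
               "+k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel",
               "+towgs84=565.4171,50.3319,465.5524,1.9342,-1.6677,9.1019,4.0725",
               "+units=m +no_defs")

telemetry1 <- as.telemetry(data1, timeformat = "%Y-%m-%d %H:%M:%S",
                           timezone = "CEST", projection = rdnew)
```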
Cheers,
Jamie
Hey Chris,
Thanks so much for the detailed explanation. I just want to make sure I’m interpreting and using the package correctly. When I look at the summary of the model that returned a warning, I don’t see any parameters that are 0 (area, tau, error, velocity), though maybe I’m looking at the summary of the wrong object? So far I don’t get any warnings when I specify error=10, so maybe the optimizer is just able to find a better fit from that initial guess than from error=TRUE?
It looks like most of the models are finding a decent fit, judging from the variograms (a couple maybe have too high an asymptote). I am curious, though: because I am using ctmm.select to find the best model, sometimes it selects OU and sometimes OUF. The OUF models tend to have a smaller area than OU, and I’m worried this makes my analysis inconsistent. Is there a way to fit just one type of model (only OU)? In general, my home-range estimates seem quite conservative relative to the point locations, with quite small confidence intervals. I was wondering whether you think this makes sense and is all working correctly.
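Would something like this be the right way to do that (a sketch, if fitting directly with ctmm.fit rather than ctmm.select is appropriate)?

```r
# Sketch: fit the OU model directly with ctmm.fit instead of model
# selection, so every bird gets the same model structure.
library(ctmm)

GUESS <- variogram.fit(vg.1, interactive = FALSE)
GUESS$error <- 10

# An OU model has a position timescale but no velocity autocorrelation;
# keeping only the first tau in the guess restricts the fit to OU.
GUESS$tau <- GUESS$tau[1]

OU.only <- ctmm.fit(telemetry1, CTMM = GUESS)
summary(OU.only)
```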
I’ve attached some of the output from my analysis. Each file is from one bird over multiple days (fitted models, home-range plots, and model summaries). Thanks again for your help, and hopefully these questions are also of use to you in smoothing out the package!
Cheers,
Jamie
Regarding the spread of the data, if your home ranges are 50-70 meters across and your telemetry error is 10-15 meters, then a fair bit less than 95% of the data will fall within the 95% contours of the location distribution because the telemetry error is a substantial contribution to the variance of the data.
low ML high
1779.107 2272.111 2824.495
3360.855 4343.486 5450.212
1816.775 2520.287 3337.144
Hey Chris,