Hi Dr. Fleming et al. --
I'm hoping to ask a couple of questions. Thanks in advance!
First, have you ever fit ctmm models to multiple animals/collars at once? I'm not certain how that would work, given that the "GUESS" estimates need to be stored for each animal. Would you recommend doing it manually?
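For concreteness, here is the kind of loop I was imagining (a rough sketch only; DATA is a placeholder for my list of telemetry objects):

```r
library(ctmm)

# DATA: a list of telemetry objects, one per animal
# (e.g., returned by as.telemetry() on a multi-individual file).
GUESS <- lapply(DATA, function(d) ctmm.guess(d, interactive = FALSE))
FITS  <- lapply(seq_along(DATA),
                function(i) ctmm.select(DATA[[i]], GUESS[[i]]))
names(FITS) <- names(DATA)
```

This stores each animal's automated guess alongside its selected model, so nothing has to be handled interactively.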
Secondly, what would you suggest as the best way to add my GPS collar error when running a 'final' akde? I have both GPS error and DOP for most of my locations/collar deployments. I noticed that DOP is used quite often and can be automatically incorporated in the model, but only if using Movebank, correct? My data are not yet on Movebank. Any recommendations here would be great.
Lastly, I'm working on identifying an objective method for setting the dt.max parameter. My ODs are very fragmented (I can hardly identify corridors) unless I set this value to 1500-3000, which is up to 40-50 times my average fix rate and seems inappropriate (I have a flexible fix schedule with both 15-minute and 60-minute fixes, depending on where the animal is located). Could tau (the home-range crossing time) be used as a statistically independent time between locations, and hence a reasonable dt.max value?
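In other words, I'm wondering whether something along these lines would be defensible (sketch only; FIT is a placeholder for an already-fitted model):

```r
library(ctmm)

# FIT: a fitted ctmm movement model for one animal (placeholder name).
# FIT$tau is a named vector of autocorrelation timescales in seconds;
# "position" is the home-range crossing time.
tau.pos <- FIT$tau["position"]

# Candidate dt.max: treat gaps longer than the crossing time as
# effectively independent locations.
dt.max <- tau.pos
```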
Much appreciated.
Nick
Hi Dr. Fleming --
Thanks for the help on the previous post!
I have a few follow up questions below.
Thanks very much in advance.
Sincerely,
Nick
1. My akde is taking a long time to run (hours on some data) to no avail. It looks like maximizing the likelihood and calculating the covariance each take 7-10 minutes per animal (14-20 minutes of processing time total), but the akde is the real time consumer. Do you have any thoughts on this?
2. I cannot get my DOP to be recognized in my 'as.telemetry' call. The vignette seems fairly straightforward from this point. Any chance you could look at the attached screenshot of my 'as.telemetry' code?
3. Also, I have attached a screenshot of one of the akdes for a male bear (VHF only). Does it seem as though the 95% UD is too far from the points, given this animal was followed for ~1 year? I know that the furthest contour comes from the upper 95% CI for the akde (not the point estimate), but it still seems fairly inflated. Do you have thoughts or suggestions?
Hi Dr. Fleming --
This is great; thanks very much for the detailed help. I'm amazed at the amount of time you put into 'our' questions! I'm hoping to clarify a few of your responses below. I'll follow up using all CAPS.
I'm also a bit curious: with all the new models coming out in the ctmm package, would you suggest waiting a bit before conducting space-use analyses, since there is so much rapid development? I suspect not, but just thought I'd get your opinion.
Thanks much again.
Hi Nick,
1. It looks like you are using weights=TRUE in akde(). Slowdown is very likely because you have a minimum time difference of 2 minutes in the data you gave me. I am adding notes to help("bandwidth") on this---for weights=TRUE the minimum time difference is the default discretization resolution, so for your data the fast default algorithm scales like O(7.5 months/2 minutes) or O(164,000), as slow as having evenly sampled 164k locations. After fitting a model with error, I bumped dt up to 15 minutes with the data you sent me and it was very fast and looked good with trace=TRUE. Model fitting with error is much slower though.
I will also add that if you have large effective sample sizes like you do (~100), the weights aren't really going to help unless the sampling schedule is jacked up. Halfway through this dataset, something weird does happen to your sampling schedule though...
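If it helps, the fix I applied on my end looked roughly like this (a sketch; FIT and DATA are placeholders for your fitted error model and telemetry object):

```r
library(ctmm)

# FIT: model fitted with error; DATA: the telemetry object (placeholders).
# Coarsening the discretization from the 2-minute minimum up to 15 minutes
# makes the weight optimization tractable. %#% converts units to SI (seconds).
UD <- akde(DATA, FIT, weights = TRUE, dt = 15 %#% "min", trace = TRUE)
```

With trace = TRUE you can watch the optimization progress and confirm it is no longer stalling.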
2. From your script, it looks like you are taking a non-Movebank file, converting it to a 'move' object (from the move package), and then converting that to a ctmm telemetry object. However, the move object you are creating doesn't have the DOP column in it, so there's no way for it to carry over. I would suggest formatting your data to the Movebank standard and then converting that to a telemetry object instead, though there is probably some way to do this with move objects as well.
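A minimal sketch of what I mean, assuming your raw column names are as shown on the left (placeholders) and renaming them to Movebank-style headers that as.telemetry should recognize:

```r
library(ctmm)

# df: your raw data frame (placeholder). Rename columns to the Movebank
# convention so as.telemetry() picks up the location-error information.
names(df)[names(df) == "lon"]  <- "location.long"
names(df)[names(df) == "lat"]  <- "location.lat"
names(df)[names(df) == "time"] <- "timestamp"
names(df)[names(df) == "dop"]  <- "GPS.HDOP"
names(df)[names(df) == "id"]   <- "individual.local.identifier"

DATA <- as.telemetry(df)
```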
I fit your data with the DOP column and error=TRUE and the AIC dropped by 250. The initial part of the variogram now matches the model much better too.
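For reference, fitting with the error information switched on is along these lines (sketch; DATA as above):

```r
library(ctmm)

# error = TRUE tells the model to use the per-fix DOP/error information
# stored in the telemetry object.
GUESS <- ctmm.guess(DATA, CTMM = ctmm(error = TRUE), interactive = FALSE)
FIT   <- ctmm.select(DATA, GUESS)
summary(FIT)  # compare against the error-free fit
```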
3. Two things: (A) When you see AKDEs go well beyond the data like this, it's telling you that the observed data aren't very exhaustive and that future space use is expected to extend well beyond the data. These bigger areas do cross-validate better than narrower KDE areas, even though they lack much precision, as you can see from the wide CIs. Mike Noonan is wrapping up a big multi-species comparison that demonstrates this on real data. (B) If this phenomenon sticks out from the other data, your male bear probably dispersed at some point, making this some combination of home range and dispersal range. You might consider segmenting the data to isolate range-resident periods. In the future we will have models that help make this segmentation objective.
Also, even with the bear you sent me, it looked like the range slowly drifted in time as the variogram was gradually creeping upwards instead of leveling off. We will have better models for this eventually.
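Segmenting a telemetry object can be done with ordinary subsetting, e.g. (sketch; the cutoff time here is a hypothetical value you would choose by inspecting the data):

```r
library(ctmm)

# DATA: telemetry object for the male bear (placeholder).
# DATA$t holds timestamps in seconds; t.cut is a hypothetical cutoff
# chosen from the variogram or a plot of the track.
t.cut  <- median(DATA$t)
BEFORE <- DATA[DATA$t <  t.cut, ]
AFTER  <- DATA[DATA$t >= t.cut, ]
```

Each segment can then be fit and analyzed as its own range-resident period.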
4. I'm not sure I understand the question, but you can fit models to VHF data. Often the result is OU or IID. Far in the future, we will have hierarchical fitting on populations, but for now I don't see what that would give you.
5. The basic models in ctmm are stationary, range resident models. If you put in pure home-range data, it gives you the ordinary home range, which is making roughly daily predictions for your bears. If you put in pure dispersal data, it gives you a dispersal range, which is a prediction over repeated dispersals (multiple years, perhaps, for male bears). If you put in a non-stationary mixture of behaviors, then you are getting out some kind of average, which may or may not make sense for your analysis. In the future, we will have more complex models, but for now this is a single behavior fit, and you need to segment the data if necessary.
Thanks Dr. Fleming, I appreciate the thorough response. Very helpful.
Nick
Hi Dr. Fleming --
I have put my data into Movebank to make things a little easier (for me, anyway). However, I'm concerned that the error model is not running correctly. It is the top model in most of the preliminary runs I've done so far, but I'm a little concerned that it's not reading the DOP column and is using homoscedastic errors instead.
Could you please provide some insight here?
Thanks much.
Thanks Chris, I appreciate the help.
Nick
Hi Chris --
I am making comparisons to some historical VHF telemetry data for black bears and wanted to get your thoughts on using an error model for these data. Nearly all of my GPS data performed best under the error model (>90% of the time) using the DOP values, and I'd like to keep the movement models as similar as possible for the comparisons. Would you suggest using homoscedastic error in the model for these data? I am hesitant to do this because these are bears, and by nature they do not spend much time out in the open. Is there a way to use an averaged telemetry error value (e.g., 180 m) in the model? Is it necessary to use an error model at all?
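For concreteness, I'm imagining something like the following (sketch only; 180 m is just my rough average VHF error, and VHF is a placeholder name):

```r
library(ctmm)

# VHF: telemetry object for a VHF-tracked bear (placeholder).
# Passing a numeric error (in meters) assumes a fixed, homoscedastic
# error of that magnitude, rather than per-fix DOP values.
GUESS <- ctmm.guess(VHF, CTMM = ctmm(error = 180), interactive = FALSE)
FIT   <- ctmm.select(VHF, GUESS)
```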
Any thoughts would be much appreciated.
Thanks.
Nick
Hi Chris --
Thanks a lot, I appreciate the response and guidance.
So, do you feel comparisons can be made (and justified) between these two data sets -- one fit with error in the models and one without (i.e., the VHF data set I am comparing to my GPS data)?
Thanks again.
Nick