data entry


msfarhad...@gmail.com

Apr 28, 2017, 6:32:10 AM
to ctmm R user group
Hi
I have just started to use the ctmm package and I need some advice on how to enter data. Based on the paper (MEE), at least four columns should be present, but in the buffalo example there are two columns referring to lat and long. These are my questions:
1. What should be the exact names of the columns?
2. Does the timestamp column have to include the date as well? Is there any special format for this column?
3. What is the format for lat/long? In the buffalo example, two different formats can be seen.
4. I have collar data for six individuals; do I need to analyze each one individually for AKDE and KDE?
Thanks
Mohammad

Christen Fleming

Apr 28, 2017, 4:24:41 PM
to ctmm R user group, msfarhad...@gmail.com
Hi Mohammad,

ctmm uses the Movebank format. See help("as.telemetry") in R. I highly recommend that you get your data through Movebank to make sure that it is formatted correctly and to archive it long term for the benefit of science. You can keep your data private if you want.

Timestamps are interpreted by strptime as per help("as.telemetry"). If strptime can't interpret your timestamps, there is a timeformat argument to assist it.

Lat-lon will be projected by as.telemetry. If no projection is specified, a generally safe projection will be chosen for you.
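A minimal import might look like the sketch below. The file name, timestamp format, and projection string are hypothetical placeholders for your own data:

```r
library(ctmm)

# hypothetical file in Movebank CSV format
DATA <- as.telemetry("my_collars.csv")

# if strptime cannot parse your timestamps, supply the format explicitly
DATA <- as.telemetry("my_collars.csv", timeformat="%d/%m/%Y %H:%M")

# optionally force a specific projection instead of the automatic choice
DATA <- as.telemetry("my_collars.csv", projection="+proj=aeqd +lon_0=58 +lat_0=37")
```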

You do have to analyze the individuals separately. If your data have no special needs, much of this can be automated. A GUI is being developed here https://github.com/ctmm-initiative/ctmm-webapp but it is not yet complete.

Best,
Chris

msfarhad...@gmail.com

May 1, 2017, 8:32:48 AM
to ctmm R user group, msfarhad...@gmail.com


Thanks Chris
I uploaded my data to Movebank, then downloaded it to make sure its format was correct.
When I plot my data, the plot below is produced.

And when I tried to build a variogram, the error below was obtained:

> vg.cilla <- variogram(F5)
Error in `[.data.frame`(data.frame(data), , axes) : 
  undefined columns selected

Then I downloaded the Kruger buffalo data directly from Movebank. When I plotted it, the plot below was produced:

I am still struggling with how to import my data.
I have emailed my data to you so you can have a better look at it.
Thanks
Mohammad



Christen Fleming

May 1, 2017, 9:42:32 AM
to ctmm R user group, msfarhad...@gmail.com
Hi Mohammad,

That first plot looks like a plot of the data.frame object (perhaps imported by read.csv?). From the Movebank CSV format, you have to cast the object as a 'telemetry' object. This does a couple of things: it makes sure the data are projected into an x-y coordinate system, it translates various device labels into something universal for the package functions to understand, it checks for a few types of common errors, etc.

DATA <- as.telemetry("../DATA/Mohammad Farhadinia/Persian leopard Tandoureh Iran.csv")
plot(DATA) # two clusters of locations 1000km away from each other
plot(DATA[DATA$y > -600*1000,]) # calibration data on a road or something
plot(DATA[DATA$y < -600*1000,]) # something that looks like animal movement
# I would split this data up before projecting it

Tell me if the vignettes could be clearer:

vignette("variogram")
vignette("akde")

Best,
Chris

Mohammad Farhadinia

May 4, 2017, 11:28:15 AM
to ctmm R user group
Thanks, that problem was solved.
Now there is a new challenge:
I am following the steps described in your paper to work on my data, but when I try to fit a model, only 1-2 movement models are shown in summary(fitted.mods).
I then checked whether the same happens with the buffalo data, as below:

#Load example buffalo data
data("buffalo")
#Extract data for buffalo 1, Cilla
cilla<- buffalo[[1]]
#Plot the positions
plot(cilla)
vg.cilla <- variogram(cilla)
#Plot up to 50% of the maximum lag in the data
plot(vg.cilla)
#Zoom in on the shortest lags
plot(vg.cilla, fraction=0.005)

#The default choices are usually acceptable.
variogram.fit(vg.cilla)

fitted.mods <- ctmm.select(cilla, CTMM=GUESS,
                           verbose=TRUE)
summary(fitted.mods)

The final result I got from summary(fitted.mods) is as below:
   dAICc DOF[mean]
OUF anisotropic     0  13.16465

Whereas according to your paper, at least six models should appear in the summary.
Can you tell me where I have made a mistake?
Thanks, Mohammad

Christen Fleming

May 4, 2017, 3:07:59 PM
to ctmm R user group
Hi Mohammad,

Going into the next version of the package, we have too many potential models to attempt every one.
ctmm.select is now more intelligent about which models are considered and will look at nearby models to see what is likely, working from the most complex autocorrelation model to the least complex. The models are nested, so this should be safe.
There is a threshold value (level=0.99 by default) that you can set to level=1 to consider all nearby models. AIC selection corresponds to ~0.85, so the 0.99 default should be very safe as well.
You can also fit more models if you want and concatenate them into a list of models for summary().
Please contact me if ctmm.select fails to fit the lowest AICc model.
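For example, a sketch using your earlier buffalo code (`cilla` and `GUESS` as defined there; `FIT.extra` is a hypothetical additional fit):

```r
# consider all nearby candidate models instead of pruning at level=0.99
fitted.mods <- ctmm.select(cilla, CTMM=GUESS, verbose=TRUE, level=1)

# extra ctmm.fit results can be concatenated into the list before summarizing
# fitted.mods <- c(fitted.mods, list(extra=FIT.extra))
summary(fitted.mods)
```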

Best,
Chris

Mohammad Farhadinia

May 4, 2017, 5:05:35 PM
to ctmm R user group
Hi
My question still stands: why, using the same code, am I getting only one model from ctmm.select (in your paper you got six models)?
And how can I set the level to 1, and how can I fit different models to my data using ctmm.select?
Best

Mohammad


Mohammad Farhadinia

May 5, 2017, 9:19:55 AM
to ctmm R user group

Dear Christen
I think the problem was mostly solved.
For my collar data, I ran ctmm.select and three models resulted: OU, OUF, and IID.
Problem 1: the fitted models do not match the variogram well, as below:

Why do none of them fit well?

Problem 2: I have plotted the home ranges; can they be exported to GIS? Below are the plotted home ranges for the three different models.

Thanks for your help

Mohammad


Christen Fleming

May 5, 2017, 2:35:35 PM
to ctmm R user group
Hi Mohammad,

See the help file on ctmm.select with help("ctmm.select") on how to use the level option. Tell me if the help file can be clearer.

From what I can see in your variograms, the OU/OUF models are a definite improvement over the assumption of independence (IID), which is what underlies regular KDE and MCP. It looks like you have the initial few lags nailed, and then better resolution of the turnover, where the IID model has a hinge that juts out. The uncertainty looks to be much better captured too.
As for the remaining discrepancy with the data, I don't know that I've ever seen a variogram kink like that. Large carnivores usually have a pretty complicated suite of movement behaviors, and these models are all fairly simple, though very useful for characterizing the basic features.

For exporting see help("export") and tell me if you need any other formats supported.
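A sketch of how this might look, assuming a fitted UD object `UD` from akde() and the helper names from help('export'); the folder and file names are placeholders:

```r
# export the AKDE home-range contours as an ESRI shapefile for GIS
writeShapefile(UD, folder="HR_shapefiles", file="leopard_HR")

# or convert to an sp object for further spatial work in R
SP <- SpatialPolygonsDataFrame.UD(UD)
```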

Best,
Chris

Mohammad Farhadinia

May 8, 2017, 11:44:00 AM
to ctmm R user group
Hi Christen
Most of the problems are now solved. Two more questions:
1. In one collar dataset, it looks like the animal is dispersing (no range residency). Based on AIC scores, OUF scored better, whereas you mentioned in your paper that OU would be better when data lack range residency. What is your advice here?
2. For each model, how can I find parameters such as crossing time, velocity, etc.?
Thanks

Mohammad


Christen Fleming

May 8, 2017, 4:15:25 PM
to ctmm R user group
Hi Mohammad,

1. It's OUF -> IOU when you lose the range-residency condition. OU and OUF are both range resident. If an individual looks to be dispersing, you can try to segment the data into resident and dispersal phases, and estimate the home ranges and dispersal ranges separately. In the near future we will have objective ways to do this segmentation.

2. You can run the summary command on the model fit objects. I would recommend the vignette('variogram') and vignette('akde') for a basic overview.

Best,
Chris

Mohammad Farhadinia

May 10, 2017, 8:05:46 AM
to ctmm R user group
Hi Christen
I am using the code below to estimate model parameters (crossing time, velocity, etc.).

#using the initial parameter values
variogram.fit(vg.Borna)
fitted.mods <- ctmm.select(Borna, CTMM=GUESS, verbose=TRUE, level=1)
summary(fitted.mods)

#Extract the fitted anisotropic versions of OU and OUF.
ou <- fitted.mods[[1]]
ouf <- fitted.mods[[2]]


BUT, results look very strange:

> summary(ouf)
$DOF
    mean     area 
150.9119 399.8980 

$CI
                                low           ML         high
area (square kilometers) 184.413009 2.039112e+02    224.37521
tau position (days)        1.087903 1.209006e+00      1.34359
tau velocity (seconds)     0.000000 1.581533e-02     24.52188
speed (kilometers/day)     0.000000 1.011324e+04 398224.76850

> summary(ou)
$DOF
    mean     area 
150.6674 298.2717 

$CI
                                low         ML       high
area (square kilometers) 181.700394 204.221510 228.039312
tau position (days)        1.068653   1.210988   1.372281

For example, the crossing time in both the OU and OUF models is 1.2 days. The same goes for speed, as the estimated figure is impossible.

My second question: how does the ctmm package deal with errors in GPS data (both location error and unsuccessful fix attempts)? I went through telemetry error in the help, but it is not clear what the next step is after ranking error vs. non-error models using AIC. How should we move toward estimating parameters and AKDE afterward? This is my code:

# default model guess
GUESS <- ctmm.guess(Bardia,interactive=FALSE)
# first fit without telemetry error
FITS <- list()
FITS$NOERR <- ctmm.fit(Bardia,GUESS)
# second fit based on first with telemetry error
GUESS <- FITS$NOERR
GUESS$error <- TRUE
FITS$ERROR <- ctmm.fit(Bardia,GUESS)
# model improvement
summary(FITS)


Thanks, your input is always very helpful.

Mohammad





On Friday, April 28, 2017 at 11:32:10 AM UTC+1, Mohammad Farhadinia wrote:

Christen Fleming

May 10, 2017, 9:31:01 AM
to ctmm R user group
Hi Mohammad,

These OU/OUF results look good to me. The data are probably very coarse (and/or the telemetry errors are very large), and speed cannot be well estimated. You can see that the confidence intervals on the speed estimate include the truth, because they run down to zero.
The CIs on the velocity autocorrelation timescale are not as good, but the MLE is near a boundary, and likelihood does not work so well at estimating parameters on/near boundaries, particularly with regard to estimating parameter uncertainty. I could devote more attention to getting better CIs here, but this will never be the selected model, so it would not be very fruitful.

As for errors, to do a good job at this you need a DOP or HDOP column to import. as.telemetry() will detect this column by its standard Movebank name and complain about a missing UERE, which you can supply from calibration data (see help('uere')) or estimate simultaneously with the movement model. A bit worse than this is if you only have the number of satellites on each fix. I have a model for that and could code it into the package within a day or so if you need it. Much worse is to have no error information, in which case turning on the "error" argument assumes homoscedastic errors, which is not always a great assumption, but is better than nothing.

Once you have a model fit with error=TRUE, you can use it as any other. AKDE will automatically smooth the errors (reducing area inflation) and use the improved parameter estimates.
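Continuing your code above, a sketch of how the error-informed fit feeds into AKDE (`Bardia` and `FITS` as in your snippet):

```r
# use the error-informed fit like any other model fit
UD <- akde(Bardia, FITS$ERROR)
summary(UD)          # home-range area with confidence intervals
plot(Bardia, UD=UD)  # data with the error-smoothed home-range contours
```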

I would add that including error should increase the velocity autocorrelation estimate, so in your case it could influence whether you select between OU and OUF.

Best,
Chris

Mohammad Farhadinia

May 10, 2017, 1:06:01 PM
to ctmm R user group
Hi Christen
Thanks for prompt reply.
Does crossing time (as denoted by tau position in days/hours) mean the amount of time each individual needs to cross its range from one side to the other? It does not mean the time needed to patrol the boundaries of its HR?
There is huge variation in my data, as below. Does everything look OK?

Tau (crossing time)  ID            KDE    AKDE
4.4 days             Borzou/M1     418    563.4 (448.8-690.8)
8.2 hours            Bardia/M2     43.6   43.9 (41.2-46.7)
1.2 days             Borna/M3      194    206.6 (183.8-230.7)
13.4 hours           Tandoureh/M4  56.8   59.8 (54.0-65.8)
2.5 days             Iran/F5       423    330.9 (208.9-480.6)
27 days              Kaveh/M6      752    2269.0 (1262.4-3565.9)

Apart from tau, in some individuals KDE is larger than AKDE, and vice versa. Is that OK?

Something about crossing time that confuses me is that, in your buffalo example on the variogram and model selection help page, you mention:

"20 days is also, roughly, the time it takes for the buffalo to cross its home range."
Then, when you present the results of the ctmm.fit models, tau is not 20 days:

## tau position (days)        3.505292   5.972745  10.17709
Can you clarify that?


Also, for errors, I have both DOP and the number of satellites for each fix, but I did not calibrate my GPS units. So I presume I need to use other methods, such as Bjørneraas et al. (2010) J. Wildlife Mgmt, for correcting errors in my data.


Best
Mohammad

I presume the model results I sent you are biologically unreasonable. For example, a tau position (crossing time) of 1.2 days for a leopard is not meaningful (1.2 days is not enough for a leopard to patrol its entire HR).


Christen Fleming

May 11, 2017, 6:01:39 PM
to ctmm R user group
Hi Mohammad,

The timescale "tau position" estimated in the model is, roughly speaking, the home-range crossing time. For the buffalo, the variogram took roughly 20 days to become mostly flat, which should be a couple of times the crossing time (but of the same order of magnitude). I will update the vignette to make that statement more accurate.

Crossing from one side to the other and patrolling all the way around is not exactly the same thing. We have models coming out that cover cases more like those. You can take a look at unfinished vignette('periodogram') in the 0.4.0 beta on GitHub and packaged on my site.

With your results, I would definitely check whether Kaveh dispersed or shifted his range, by looking at the variogram shape and the DOF values in the fit summary (and simple plots of t versus x & y). I know that male jaguars will definitely do that sometimes. As for the other variation, assuming the conditions of range residency appear met, it can happen with environmental differences, for example if some of the cheetah feed predominantly on hares and others on large game.

The ctmm package can accept DOP values (you should get a message on import if they are named correctly) and fit the UERE simultaneously with the movement model. There is a section in the vignette demonstrating something like this with E-Obs data. We will have a big paper on errors in the next few months.

Best,
Chris

Mohammad Farhadinia

May 15, 2017, 11:37:17 AM
to ctmm R user group
Thanks Christen
I am now working on range overlap. Can I calculate the range overlap of my 6 individuals all at the same time, or do I need to calculate them pairwise? Each approach seems to produce different estimates.
Also, I did not quite understand this phrase from your previous reply:


"As for the other variation, assuming the conditions of range residency appear met, it can happen with environmental differences, like if some of the cheetah feed predominantly on hares and others on large game."

Best
Mohammad


Christen Fleming

May 15, 2017, 12:33:06 PM
to ctmm R user group
Hi Mohammad,

overlap() can take a list of 6 individuals and a list of their 6 model fits. See help("overlap"). This will be faster computationally, but overlaps are pairwise by definition, kind of like correlations.

As for the quote: I don't know whether or not your individuals live in similar environments, and hence whether to expect similar movement behaviors.

Best,
Chris

Mohammad Farhadinia

May 18, 2017, 4:11:56 PM
to ctmm R user group
Hi Chris
I did not understand your point about overlap. If I enter individuals pairwise, I get a different result than if I enter all individuals at the same time. Which one is correct?

Mohammad


Christen Fleming

May 18, 2017, 7:24:27 PM
to ctmm R user group
Hi Mohammad,

You should only get different results if you compare the overlap in Gaussian home ranges (overlap() with ctmm model fit objects) to overlap in KDE home ranges (overlap() with both ctmm model fit objects and telemetry objects). Tell me if help('overlap') can be clearer.

Otherwise, overlap with a pair of individuals gives you their overlap (point estimate and CIs), while overlap on 6 individuals should give you 6x6 such overlaps---one for every pair.
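As an indexing sketch, assuming a list `FITS` of 6 model fits and the array layout used in the code further down this thread (one low/ML/high triple per pair):

```r
OVER <- overlap(FITS)  # array with one overlap (low, ML, high) per pair
OVER[1,2,]             # overlap of individual 1 with individual 2, with CIs
OVER[2,1,]             # symmetric: same as OVER[1,2,]
```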

Best,
Chris

Mohammad Farhadinia

May 18, 2017, 7:31:22 PM
to ctmm R user group
I am using these scripts:

GUESS <- lapply(DATA[1:6], function(b) ctmm.guess(b,interactive=FALSE) )
FITS <- lapply(1:6, function(i) ctmm.fit(DATA[[i]],GUESS[[i]]) )
names(FITS) <- names(DATA[1:6])
# Gaussian overlap between leopards
overlap(FITS)
# AKDE overlap between leopards
overlap(DATA[1:6],FITS)

But the result of each leopard is different if I only enter them pairwise as below:

GUESS <- lapply(DATA[1:2], function(b) ctmm.guess(b,interactive=FALSE) )
FITS <- lapply(1:2, function(i) ctmm.fit(DATA[[i]],GUESS[[i]]) )
names(FITS) <- names(DATA[1:2])
# Gaussian overlap between leopards
overlap(FITS)
# AKDE overlap between leopards
overlap(DATA[1:2],FITS)


Mohammad


Christen Fleming

May 19, 2017, 1:17:03 AM
to ctmm...@googlegroups.com
Hi Mohammad,

Running the same analysis with the buffalo data, if I store the first round of overlaps as

OVER.G1 <- overlap(FITS)
OVER.K1 <- overlap(DATA[1:6],FITS)


and the second round of overlaps as

OVER.G2 <- overlap(FITS[1:2])
OVER.K2 <- overlap(DATA[1:2],FITS[1:2])


then I get

OVER.G1[1:2,1:2,] - OVER.G2

all zeros (exactly the same), and I get

OVER.K1[1:2,1:2,] - OVER.K2

differing by numerical error on the order of 10^-5, which can be lowered with the res and error options of akde, but which should be insubstantial for an overlap value between 0 and 1 with comparably wide CIs.

Do you get numerically consistent results as well? How big are the differences?

Best,
Chris

EDIT (2017/05/19): I should note that I had to update some code (on GitHub now) to get all 6 buffalo to output KDE overlaps. The first three are so far from the last three that their KDE overlap is numerically zero, which was crashing my CI code. This probably wasn't an issue for you if you weren't getting errors.

Mohammad Farhadinia

May 25, 2017, 6:42:24 AM
to ctmm R user group
Hi Christen
I managed to calculate the overlap for each pair of leopards.
Now I need to calculate the core area of each one's home range, based on the paper below.
In the attached paper, they found a way to estimate the core home range (rather than using the arbitrary threshold of 50%). The method looks straightforward: create a regression between each isopleth and the percentage of home-range use. I am struggling with how to use a ctmm object to create this exponential regression.

An individual-based quantitative approach for delineating core areas of animal space use
E. Vander Wal and A.R. Rodgers, Ecological Modelling 224 (2012) 48-53




Mohammad Farhadinia

May 25, 2017, 7:35:04 AM
to ctmm R user group
One more question, Christen: is there any way we could calculate the overlap between two individuals not based on their 95% AKDE (as ctmm does now) but based on their core areas (e.g. 50% AKDE)? I think calculating core-area overlap could be more meaningful biologically.
Thanks
Mohammad


Christen Fleming

May 25, 2017, 10:45:25 AM
to ctmm R user group
Hi Mohammad,

ctmm does not calculate overlap based on 95% home-range areas or any arbitrary threshold. ctmm uses the Bhattacharyya coefficient (BC) to measure overlap. We will have a paper detailing this soon, including how we approximate the CIs, etc. We scoured the statistics literature and found only two overlap measures that satisfy the criteria we lay out. One of our criteria was that arbitrary thresholds are not allowed.

In ctmm, you can call summary on the UD object to produce the magnitude of the area at different percentiles (95% is only the default). See help('summary.UD'). However, I would note that (1) as we argue in our recent MEE paper, 50% is not entirely arbitrary, and (2) following the method you cite, derived from Seaman and Powell, I'm pretty sure that one can construct a distribution function with multiple percentiles/areas that satisfy their delineation relation, and I also think there is still an arbitrary threshold implied.
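For example, a sketch using the level.UD argument of summary, assuming a UD object `UD` from akde():

```r
# area of the 50% "core" contour instead of the default 95%
summary(UD, level.UD=0.50)

# several coverage levels at once (illustrative loop)
lapply(c(0.50, 0.75, 0.95), function(p) summary(UD, level.UD=p))
```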

Best,
Chris

Mohammad Farhadinia

May 29, 2017, 11:00:49 AM
to ctmm R user group
Hi Christen
Many thanks.
In my data, when I overlap the AKDE home ranges, the spatial overlap is high, but I can see that when I plot the 50% AKDE, the degree of overlap is much smaller. I presume the current BC overlap score does not reflect range overlap based on core areas. Do you have any idea how to calculate core-area overlap? I think it would be more meaningful.

Also, I understand your argument about the 50% core area. What I am trying to do is to use the final ctmm UD object as input to that paper's method (E. Vander Wal and A.R. Rodgers 2012). Currently, the paper's code only accepts adehabitat objects. Is there any way I can enter a ctmm UD object to do the regression for defining the core-area threshold?
Here is the code used in the E. Vander Wal and A.R. Rodgers paper:
https://weel.gitlab.io/include_pubs/pdfs/Vander%20Wal%20and%20Rodgers%202012%20Core%20Area%20Designation%20Technique.r

Many thanks

Mohammad



Christen Fleming

May 29, 2017, 12:24:35 PM
to ctmm R user group
Hi Mohammad,

ctmm's overlap() doesn't calculate overlap between contours but between density functions, in the way that statisticians and physicists calculate overlap between distributions, without an arbitrary percentage threshold.
Geometric overlap between contours decreases with decreasing percent coverage (which makes it smaller for core areas), and this dependence is not smooth, which means that the geometric overlap at percentage p1 may not be similar to the geometric overlap at a nearby percentage p2, which can limit the generality of the inference. These are not measures that you would find used/advocated in the statistics literature.

If you call names() on the UD object, you can see that it's just a list with intuitively named components inside, all in SI units, so you can pull out whatever you need from there. The coordinates are in r$x and r$y. You can also export the UD object to other formats, like sp objects and shapefiles, to do other calculations.

Best,
Chris

Mohammad Farhadinia

May 30, 2017, 12:26:36 PM
to ctmm R user group
Thanks Christen
I am creating a data frame from the UD object:

dd <- data.frame(akdeUD$r$x, akdeUD$r$y)

But there is a problem: x and y have different numbers of rows. This happens for all the different individuals I have. So why is there such a mismatch between the numbers of x and y values?
In order to put together a data frame for analysing the core area, I need a variable equivalent to kernel.area (from adehabitatHR). Which variable in the summary of the UD object is equivalent to that?

I am still trying to enter the ctmm UD object into the function below:

#This current version only treats one animal at a time and requires a dataframe with at least three columns:
#my.data$X (must be a capital "X") for the X coordinate
#my.data$Y (must be a capital "Y") for the Y coordinate

#This function relies on kernel.area (from adehabitatHR); currently it uses the kernel.area defaults, though the function can be modified to accommodate other parameters
#Similarly, this function relies on nls() to fit the curve, which may require different starting parameters depending on your data

#Cut and paste the following 11 lines of code:

library(adehabitatHR)
core.area <- function(data){
	IVseq <- seq(0.01,0.99,0.01) #sequence of isopleth volumes
	kernel.areas <- as.numeric(as.character(kernel.area(kernelUD(SpatialPoints(data[,c("X","Y")])),percent=c(1:99)))) #areas within the isopleths; modify kernel parameters here, including coordinate systems
	df <- as.data.frame(cbind(IVseq,kernel.areas/max(kernel.areas))) #create a dataframe with the percent area (PA) and isopleth volume (IV) scaled from 0-1
	colnames(df) <- c("IV","PA") #name the columns
	nls.fit <- nls(PA~(b0)*(exp(b1^IV)), data=df, start=list(b0=0.01, b1=4.2), na.action="na.omit", model=TRUE) #Caution: starting parameters may differ for your data.
	b0 <- summary(nls.fit)$coefficients[1,1] #b0 coefficient
	b1 <- summary(nls.fit)$coefficients[2,1] #b1 coefficient
	(-log(b0*b1)/b1) #isopleth volume where the curve's slope = 1
}

Christen Fleming

May 30, 2017, 4:38:04 PM
to ctmm R user group
Hi Mohammad,

r$x and r$y are the coordinates of the grid. So, for instance, the probability density at grid point (r$x[i], r$y[j]) is PDF[i,j].
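So to build a data frame, the grid has to be expanded to all x-y combinations, e.g. (a sketch; `akdeUD` is your UD object, and this assumes the PDF matrix is stored with the x-index first, matching the description above):

```r
# one row per grid cell, with the density value at that cell
dd <- expand.grid(X = akdeUD$r$x, Y = akdeUD$r$y)
dd$pdf <- as.vector(akdeUD$PDF)  # column-major unrolling: x varies fastest
```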

I am not familiar with the adehabitat formats, but it looks like it's also complaining about the data.frame columns not being named.

Best,
Chris

geneviev...@gmail.com

Sep 6, 2018, 6:35:46 AM
to ctmm R user group
Hi Chris, 

Apologies for jumping in on a conversation here! I'm currently attempting to segment some telemetry data into resident and dispersal phases, and was wondering if you had made any progress on this? At the moment I'm using a package in development called segclust2d for the segmentation, but would love to compare with another method if one is available!

Many thanks!

Genevieve

Christen Fleming

Sep 6, 2018, 3:49:01 PM
to ctmm R user group
Hi Genevieve,

Unfortunately, this is further out for us: probably 6 months, if behavioral states get prioritized as the next research objective. Right now I am still focused on telemetry error models, which turn out to require almost as much thought/effort as the movement models.

Best,
Chris