Range Residency in a Forest Bird with Variable Space Use


Tyler Hodges

Oct 16, 2023, 4:10:47 PM
to ctmm R user group
Hello group,

I don't mean to overly belabor the topic, but I was wondering about the range residency status of a subset of forest birds that I tracked via traditional VHF this past summer. Most of the birds demonstrated bounded home ranges throughout the breeding season with the variograms reaching obvious asymptotes, while others have variograms that are a bit questionable. Specifically, these birds had space use that changed slightly during different periods of the season, whereas the other birds had home ranges that remained static during the same time periods. As expected, the variograms for these birds are not typical, although for one of the birds with the least static whole-season space use (see discussion below), an asymptote does seem apparent. I am trying to get a handle on whether a full season analysis may be possible and appropriate with these birds, and if a mean breeding season home range size would be usable to inform other aspects of the study and analysis (e.g., I want to use the mean home range size to create a biologically meaningful buffer distance in which to summarize habitat covariates for point-counts conducted on the same study sites). Below, I share the questionable variograms and briefly discuss each bird's space use. 

Note: When I refer to "movements", these are often only on the scale of a few hundred meters. 

Bird 1
Screenshot 2023-10-16 145920.png
I was able to track this bird almost throughout the entire breeding season. At the beginning of the season, it had fairly static space use and was restricted to a small area, but during the post-fledging period its area of space use expanded greatly, and it was often found a few hundred meters from the area of original use. Then, during the post-breeding period, it briefly returned to its original area of use before settling in an area not too far from its original location. Despite this variable space use, the variogram seems rather stable. 

Bird 2
Screenshot 2023-10-16 150301.png
This bird had a very similar space use history to bird 1, occupying a fairly static area for much of the season, briefly moving away from this area during the post-fledging period, and then returning to the original location during the post-breeding period. However, rather than the apparent asymptote of the first bird, this one has a distinct bell shape. 

Bird 3
Screenshot 2023-10-16 150341.png

This bird also has a bell-shaped curve. It had settled in one area for some time, moved to an adjacent area for what I suspect to be its first nesting attempt, and then moved back to the original location for the rest of the season, where we located an active nest.

Bird 4
Screenshot 2023-10-16 150005.png
Despite what seemed like fairly stable space use, this bird has a distinctly rising tail at the end of the variogram. I know the end of variograms can typically be discounted, but the rising pattern also seems to be a signature of variograms indicating non-range resident behavior, which is why I want someone with more skilled eyes to take a look at it for me. 

Bird 5
Screenshot 2023-10-16 150922.png
Same story as bird #4, although it did have a couple tracking sessions that were located somewhat farther from its area of core use than normal. 

Bird 6
Screenshot 2023-10-16 150406.png
Despite having an asymptote, this bird did slightly shift its area of core use during its second nesting attempt, although it was closer to its initial core use area than some of the movements displayed by the other birds discussed above. 

Thanks in advance for any guidance you all can provide!

Best,
Tyler


Christen Fleming

Oct 16, 2023, 10:22:54 PM
to ctmm R user group
Hi Tyler,

Just from the variograms, 1,4,5,6 don't appear significantly off, though you can use CI="Gauss" for better variogram CIs that will be narrower after the asymptote.
2 is intermediate and 3 is significantly and substantially off (the white space between the empirical and theoretical CIs). These variograms do match up to the behavior you describe.
I agree that 4, 5, and 6 look like they might be increasing, but variogram errors are autocorrelated, so you can't necessarily ascribe a trend to that, as it's within the expected range of errors.
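A minimal sketch of the CI = "Gauss" option, assuming a telemetry object named `bird1` (the object name is illustrative; the exact Gaussian CIs are slower and more memory-hungry to compute than the default):

```r
library(ctmm)

# empirical variogram with exact Gaussian CIs rather than the default
# Markov approximation; narrower intervals past the asymptote
SVF <- variogram(bird1, CI = "Gauss")
plot(SVF, level = c(0.5, 0.95))
```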

Best,
Chris

Tyler Hodges

Oct 17, 2023, 11:57:26 AM
to ctmm R user group
Hello Chris,

Thank you for your expertise! Highlighting the difference in CI overlap between the fitted model and the empirical variogram also helps immensely in understanding how to interpret them. I will start to play around with various cluster functions and/or determine the most ecologically-relevant way to split up the data. 

Unfortunately, setting the CI method to "Gauss" maxes out my system memory and fails to run every time. 

Best,
Tyler

Tyler Hodges

Oct 30, 2023, 10:25:04 PM
to ctmm R user group
Hello Chris,

Thank you again for your assistance thus far! I have been experimenting with various segmentation methods and decided to implement segclust2d, despite the fact that it does not operate in a ctmm framework and doesn't account for autocorrelation. My attempts at dividing the locations using an ecologically informed sliding window were thwarted given the variable movement patterns (and their corresponding timing) of these birds and uncertainty in assigning them to specific breeding phases. segclust2d successfully identified and segmented the birds that the variograms identified as being piecewise stationary, and also revealed a more cryptic range shift in one of the birds which will be interesting to explore as the analysis progresses. However, before I proceed on to the next steps of the analysis (abundance modeling and RSF), I have one more variogram question for you. 

The following segmap and variogram are from Bird 2 in the initial post. The variogram corresponds to the second range (blue), which captures the post-fledging and post-breeding space use of the bird. I am concerned by the sharp drop at the end of the variogram, which I assume corresponds to the bird's return to the area of initial space use. I know the end of the variogram can often be ignored, so is this concern warranted (i.e., should I start to consider censoring this range)? 

Screenshot 2023-10-30 214750.png
Screenshot 2023-10-30 214340.png


Again, thanks for your help! ctmm is very new to the members of my research sphere, so I am sorting most of this out myself. 

Best,
Tyler

Jesse Alston

Oct 31, 2023, 11:56:30 AM
to Tyler Hodges, ctmm R user group
Hi Tyler,

What is the DOF$area of this bird? It is more concerning that the variogram keeps drifting up before it returns than that it drops back down at the end.

To me, this looks like an exploratory phase, so not something a home range estimate would describe well. Instead of assuming the animal is range-resident, you can use displacement or something that is residency-neutral to compare movement behaviors across the different time periods.
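A residency-neutral metric like the one described here can be sketched in a few lines of base R, assuming a ctmm telemetry object (here hypothetically named `bird2`) with projected coordinates in its x and y columns:

```r
# net (straight-line) displacement from the first location; makes no
# assumption of range residency, so it is safe for exploratory phases
ND <- sqrt((bird2$x - bird2$x[1])^2 + (bird2$y - bird2$y[1])^2)
plot(bird2$t, ND, type = "l", xlab = "time", ylab = "net displacement (m)")
```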

Jesse


Tyler Hodges

Oct 31, 2023, 1:17:19 PM
to ctmm R user group
Hello Jesse,

Thank you for your response! The DOF for the second segment is 5.39, and 9.29 after bootstrapping. 

For a couple of the birds, the space use did markedly change during the post-fledging period, with individuals roaming farther than during the nesting period and exploiting areas that were previously unused, which is what we are capturing here. In both instances where we 1) knew birds were in the post-fledging period, and 2) space use markedly changed, they always returned to their original (or very close to it) range after fledglings became independent. However, it is very difficult to assign some birds to specific phases as we weren't always able to locate nests and/or fledglings. Complicating things further is the fact that the other bird which demonstrated similar movement and space use patterns as this individual was considered range resident by variograms, segclust2d, and marcher. 

The primary goal of the study is to assess habitat selection patterns and space use at the scale of the home range, so, as much as I'd hate to throw out data, I wonder if the best thing to do is censor this segment if it is indeed non-stationary.

Thanks again!

Tyler

Christen Fleming

Oct 31, 2023, 10:40:48 PM
to ctmm R user group
Hi Tyler,

The variogram drops back down because there are only two range crossings and the individual comes back to where they started. It's at the end of the variogram, so it doesn't really matter.

Bootstrapping takes a long time to produce convergent DOF estimates, so in the latest development versions of the package on GitHub, I don't let the DOF estimates update until the point-estimate error gets down to around 0.1%. If you used default arguments on anything but a very recent version of the package, your DOF is probably closer to 5 than 9.
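For reference, a sketch of the bootstrap workflow being discussed, assuming a telemetry segment `bird2seg` and a fitted movement model `FIT` (both names are illustrative):

```r
library(ctmm)

# parametric bootstrap of the fitted model; slow, but debiases small-sample fits
BOOT <- ctmm.boot(bird2seg, FIT, trace = TRUE)

# effective sample size for the home-range area estimate
summary(BOOT)$DOF["area"]
```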

My inclination would be to categorize segments by behavior and summarize habitat selection and space use by behavior as well.

Best,
Chris

Tyler Hodges

Nov 1, 2023, 10:00:48 AM
to ctmm R user group
Hello Chris,

Regarding the variogram, that's also what I suspected; thanks for the confirmation!

I was wondering why there was such a drastic change in DOF after the bootstrap. I'll download the latest update from GitHub and rerun them. I have been using 1.1.0 up until now. 

I do like the idea of summarizing everything by behavior, and it should remedy many of the issues I am encountering. I really wish we had spent more time staking out nests and looking for fledglings this past season to facilitate that; lessons learned! Luckily, for the majority of birds that did have wildly different movement patterns, we are able to assign behaviors, whereas many of the birds we are uncertain about were range resident for the entire season, which should help. 

Thanks again!

Best,
Tyler

Tyler Hodges

Nov 11, 2023, 2:23:25 PM
to ctmm R user group
Hello everyone,

I have been playing around with segclust2d again, and I noticed that when using the segclust() function, I am prompted to thin any repeat locations from the dataset. This makes sense given that segclust() uses speed and turning angles to cluster the segments by behavior. However, no such prompt occurs when using the regular segmentation function, and I have been feeding the entire set of locations for each bird into it as a result.

I know the segmentation function leverages the mean and variance of the locations themselves to segment the track, and thus should be able to handle repeat locations. Even so, would the best practice still be to remove duplicates? My initial thought was that leaving in locations where birds sat in the same tree or clump of trees for an entire 30-minute window would give a better overall snapshot of space use by picking up on roosting behavior and favorite singing perches. Of course, using unique locations versus the entire dataset will yield quite different results, and I want to make sure I am applying best practices.

I have been setting lmin to 16, which is one more location than the maximum number possible in a 30-minute tracking window; this ensures that an extraterritorial foray lasting only one session is not split into its own unique segment. On a few rare occasions we were able to reach this maximum, but only when the bird was stationary, so I may have to adjust lmin to the mean number of locations per session (or something similar) if I end up removing duplicates. 
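A sketch of the segmentation call being described, under the assumption of a data frame `bird_df` with projected x/y columns (the data frame name, Kmax value, and column names are illustrative; check the segclust2d vignette for the full argument list):

```r
library(segclust2d)

# segment on the location coordinates themselves (mean/variance per segment);
# lmin = 16 so a segment must outlast one full 30-minute tracking session
seg <- segmentation(bird_df, seg.var = c("x", "y"), lmin = 16, Kmax = 10)
plot(seg)
```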

As always, thank you for your assistance! 

Best,
Tyler

Tyler Hodges

Nov 12, 2023, 9:33:22 PM
to ctmm R user group
Upon my second read-through of the segclust2d paper, I realized that they used speed and turning angle as a demonstration, but that the default of segclust() still operates on the location data itself. The paper is still unclear, however, on whether the removal of duplicate locations would be prudent. The vignette, on the other hand, does briefly touch on the subject, and I now get the impression that, regardless of whether segmentation or segmentation-clustering is used, duplicates should be removed because they can lead to calculations of null variance. I simply did not have any issues while using the segmentation-only function because any repetition that did occur was always shorter than the lmin of 16. Is this correct?

Thanks!
Tyler

Christen Fleming

Nov 13, 2023, 8:28:24 PM
to ctmm R user group
Hi Tyler,

If the mean and variance are calculated over the entire segment, then I don't think duplicates would matter for those statistics.

Best,
Chris

Tyler Hodges

Nov 13, 2023, 9:47:10 PM
to ctmm R user group
Hello Chris,

Noted! Thank you!

Best,
Tyler

Tyler Hodges

Nov 20, 2023, 11:10:24 AM
to ctmm R user group
Hello Chris,

While creating average UDs for the four individual birds that had multiple stationary home ranges, I am encountering the following error when running the mean() function: 

Error in if (SCALE[i] < .Machine$double.eps) { : missing value where TRUE/FALSE needed

The traceback indicates that this is occurring in the meta.normal() portion of the code, and based on a quick search, this error typically means an NA or NaN value reached an if() condition. I had no issues with the three other birds, and I verified that the projections were identical for each individual telemetry object and that the resulting UDs were produced on the same grid. I also plotted the UDs on the same grid to make sure there was no issue there. Below is the code I used to produce the error. Do you have an idea of what may be causing this?  

# telemetry and fitted movement models for the two stationary ranges
Steven_Telemetry_List <- list(Steven1t, Steven2t)
Steven_Fit_List <- list(ctmmsteven1, ctmmsteven2)
# joint AKDE so both UDs share a consistent grid
StevenUD_Same_Grid <- akde(Steven_Telemetry_List, Steven_Fit_List, weights = TRUE)
plot(StevenUD_Same_Grid)
# average the two UDs, weighted by time spent in each range
Stevenmean <- mean(StevenUD_Same_Grid, sample = TRUE, weights = c(.59, .41))
plot(Stevenmean); plot(Steven1t, add = TRUE); plot(Steven2t, add = TRUE)

Thanks!
Tyler

Christen Fleming

Nov 20, 2023, 10:05:05 PM
to ctmm R user group
Hi Tyler,

Can you send me a minimal working example (data + script with the telemetry & fit objects) to inspect?

Best,
Chris

Tyler Hodges

Nov 20, 2023, 10:43:31 PM
to ctmm R user group
Will do, thanks Chris! 

Tyler Hodges

Dec 20, 2023, 12:05:20 PM
to ctmm R user group
Hello everyone,

Thanks again for the assistance! I am finally moving into the habitat selection portion of my thesis research. However, before I proceed, I want to clarify a few items. After segmenting the bird tracks (and excluding the segment of questionable stationarity discussed above), I used the mean() function to get average UDs for the birds with multiple stationary ranges. I then fed these average UDs and the UDs from the stationary birds into meta() to get the average home range and core use areas for all birds and then between birds dwelling within different treatment types (managed versus unmanaged forests). If I have comprehended the other discussions and the related papers correctly, this is the appropriate workflow to determine the average space use and to compare space use between management types. For this comparison, I am using effect size ratios rather than P-values as recommended by the Fleming et al. 2022 paper (Population‐level inference for home‐range areas - Fleming - 2022 - Methods in Ecology and Evolution - Wiley Online Library). I find effect size ratios to be intuitive and easy to interpret. However, my first question relates to producing effect sizes for the confidence intervals: am I correct in thinking that to obtain the appropriate CIs for the effect sizes, you compare the ratios for the lower and upper bounds as you would the space use estimate itself? 

For the habitat selection analysis, we are planning to utilize a conventional approach to compare selection between the 95% home range and 50% core use areas (more discussion on this in a moment). Is it more appropriate to use the average UDs for this, or the individual stationary ranges? I know the iRSF approach implemented in ctmm requires the individual ranges since everything is contingent on the stationary movement models, but I imagine differences would be expected when using the individual versus average UDs in a conventional analysis as well. The UD contours appear very similar when comparing the average versus overlain individual ranges, but not exact. 

Lastly, while reading other threads on RSFs in the group, I noticed that Chris and Jesse (and probably others) are now discouraging RSFs that compare different contours (e.g. 95% and 50%), despite this being a common practice in the literature and a goal of my own study. I think Jesse stated that the reasons for this are two-fold: 1) it creates statistical challenges, and 2) it creates philosophical issues by excluding some of the used locations from the analysis. I was hoping for more discussion and insight into why this practice is now being discouraged so I have a better grasp on the issue as I move into my own analysis (and thus can make better decisions). I read the recent iRSF paper (Mitigating pseudoreplication and bias in resource selection functions with autocorrelation‐informed weighting - Alston - 2023 - Methods in Ecology and Evolution - Wiley Online Library), which was very eye-opening, and I intend to utilize it for future analyses, but because a goal of this study has always been to compare selection between the 95% and 50% contours, I was still hoping to use a conventional approach. I'm looking forward to more insight on this issue.

Thanks!

Christen Fleming

Dec 27, 2023, 10:01:42 PM
to ctmm R user group
Hi Tyler,

The effect sizes reported by meta() come with CIs, so I don't think I understand the first question.

For expectation values, I would use the average UD, weighted by time spent in each area.

We are writing a paper on this, but the conventional available-area method presupposes a null model with known parameters that cross validates very poorly and doesn't propagate its uncertainties or sensitivities into the covariate parameters. It is possible to calculate an iRSF just by choosing a very large "available area" and then including the covariates x, y, x^2+y^2, but you also want to use importance sampling and sample from a pilot Gaussian for computational speed.
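ctmm's own iRSF entry point, which handles the importance sampling internally, can be sketched as follows; the telemetry object `bird`, its AKDE `UD`, and the raster name are all illustrative:

```r
library(ctmm)

# autocorrelation-informed RSF with importance sampling; R is a named
# list of raster covariates cropped to the study area
RSF <- rsf.fit(bird, UD = UD, R = list(canopy = canopy_raster))
summary(RSF)  # selection coefficients with CIs
```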

Best,
Chris

Tyler Hodges

Dec 29, 2023, 10:44:50 PM
to ctmm R user group
Hello Chris,

Thanks much! Apparently, I didn't read the fine print of meta() closely enough, as I missed the part about the ability to feed in a nested list of UDs. After doing so, I obtained the effect sizes and associated CIs without issue. 
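The nested-list call being described looks roughly like this, assuming `managed_UDs` and `unmanaged_UDs` are lists of AKDE UD objects (the group and object names are illustrative):

```r
library(ctmm)

# nested (named) list compares the two sub-populations and reports
# mean areas plus the managed/unmanaged ratio with CIs
AREAS <- meta(list(managed = managed_UDs, unmanaged = unmanaged_UDs))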

I'm looking forward to reading the forthcoming paper! After I finish this analysis, I will be transitioning to another with a multi-year elk dataset, and I'm hoping to utilize ctmm's iRSF functionality for that. For now, I'm trying to take all of the necessary steps to ensure the conventional approach I'm using for this analysis is as statistically rigorous as I can manage (e.g. sensitivity analysis to determine the optimal number of available points and weighting to aid in parameter interpretation). 

Thanks again!

Best,
Tyler

Tyler Hodges

Jan 6, 2024, 4:15:47 PM
to ctmm R user group
Hello Chris,

Happy New Year! After a recent conversation with my committee, it was decided that we are actually going to implement the iRSF approach available in the ctmm package! I'm still going to compare habitat variables in the core use and home range contours, but through non-RSF techniques such as regression or comparisons of means. 

I have already started to run models on my birds, but I have encountered a peculiar issue. For these analyses, I am using a set of seven raster covariates. However, when I include all seven in a list, I start to receive the message "Warning: longer object length is not a multiple of shorter object length" for every model when rsf.select() fits two and three covariate additive models. Strangely, this does not occur when I reduce the number of rasters to two or even six. Is this an issue/warning you have encountered before? Is there any way to remedy it, or will it still fit the models fine?

I also have a conceptual question: considering that the domain of availability with these iRSFs is a Gaussian area around the home range rather than entirely within the home range itself, is this still considered third order selection? Or something else entirely?

Thanks for the help!

Best,
Tyler 

Christen Fleming

Jan 10, 2024, 8:46:36 AM
to ctmm R user group
Hi Tyler,

Does this warning happen if you update the package from GitHub? I think I might have fixed that issue recently.
If it persists, then if you could please provide me with a minimal working example (data + script), then I can take a look at it.

Regarding the iRSFs: The RSF parameters are third order, as usual. The availability parameters are second order, but they are only phenomenological.

Best,
Chris

Tyler Hodges

Jan 10, 2024, 8:39:07 PM
to ctmm R user group
Hello Chris,

Thank you for the help! Unfortunately, uninstalling and then reinstalling directly from GitHub didn't help. I'm curious whether this may be some sort of memory issue? The rasters I am working with are 10 m resolution and cover the entirety of Pennsylvania. Even after cropping them to slightly larger than the extent of my study sites before feeding them into rsf.select(), I'm still working with over 9,000,000 cells. I'm not particularly knowledgeable when it comes to computing, so I could be very far from the mark, but I wonder if adding that 7th raster layer to the list maxes out memory by the time rsf.select() starts to fit additive models? I had to change the max.mem argument to 3 GB to get most of the models to run in the first place. 
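For what it's worth, the cropping step can be done up front with the raster package (terra's crop() works similarly); the raster name and coordinates below are illustrative placeholders:

```r
library(raster)

# bounding box padded a bit beyond the study sites (xmin, xmax, ymin, ymax)
bb <- extent(-80.5, -79.5, 40.5, 41.5)

# crop the statewide 10 m raster down to the padded study area before
# passing it to rsf.select(), which reduces the cell count substantially
canopy_small <- crop(canopy_raster, bb)
```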

If you don't think that's the issue, then I can get the data, rasters, and script to you ASAP!

Best,
Tyler

Tyler Hodges

Jan 11, 2024, 3:13:52 PM
to ctmm R user group
Hello again Chris, 

I've tried troubleshooting a couple more things (including cropping the raster extent down to 10x the track extent for a particular bird), but to no avail. I'm now receiving the same warning from rsf.select() for additive models even with only three rasters in the list, so I am at a loss as to what the issue is. Strangely, despite this warning, rsf.select() still includes these additive models in the AICc table, and the results seem meaningful, so I'm not sure what that indicates. I'll get a minimal example to you this afternoon or evening. 

Unrelated, but as you might expect, including all seven rasters in rsf.select takes a very long time to process (10+ hours per individual). Considering that I have three groups of highly correlated variables, and many of the additive models rsf.select is fitting are going to be tossed out as a result, I think I'm going to run a series of three univariate model sets for each bird (one for each group of correlated variables). From those, I would retain the most meaningful covariate to carry forward in the additive models. I think this makes the most sense from both an ecological standpoint and a computation/time-saving standpoint. 

As always, thanks for your effort and help!

Best,
Tyler

Message has been deleted

Tyler Hodges

Jan 31, 2024, 4:17:02 PM
to ctmm R user group
Hello Chris,

Thanks again for getting the above problem fixed! RSFs are now running without issue. I'm currently using mean() to estimate the individual parameters for birds with multiple ranges and to estimate population selection patterns. I was wondering how I can retrieve the beta parameter coefficients and CIs from the averaged models? Unlike the models returned from rsf.select(), I do not see any beta coefficients reported in the output from mean(), although the rest of the movement model parameters are there. Is the issue simply that the null model was best after averaging? 

As always, thanks!

Tyler

Christen Fleming

Feb 2, 2024, 4:21:16 PM
to ctmm R user group
Hi Tyler,

If you run mean() on the outputs of rsf.select() then both the betas should be there and the population variation in the betas should be there (if supported). If they are not, then please send me a minimal working example to look at.

If you run mean() on the UDs, then the betas would not be included.

Best,
Chris

Tyler Hodges

Feb 3, 2024, 8:26:16 AM
to ctmm R user group
Thanks Chris! 

Should I be feeding the results of rsf.select() into mean() in any particular way? Should I feed in the entire model list, or pull out the results of the top model only? I've tried both ways, but when I include the entire output from rsf.select() I get a C stack warning, and when I enter the top models (as below), the output only contains the movement model parameters. 

stingmeanrsf <- mean(list(Sting1RSF[[1]], Sting2RSF[[1]]), sample = FALSE, weights = c(.6, .4), trace = TRUE)

I'm also running a separate model set with a categorical forest cover type raster with four levels. However, for some of the birds, I get a warning about the MLE being too close to the boundary and that the optimizer may have failed. In most cases, the outputs seem fine, but some birds end up with an empty COV matrix (all NaN). The original cover type layer has 30+ categories, so I reclassified everything into four broad categories (e.g. terrestrial forest, palustrine forest, etc.) and set terrestrial as the reference category. Terrestrial forest is present in every bird's UD, but a couple of the other categories are rare, occurring in only a few UDs. Would the failed optimization have anything to do with the scarcity of some of these categories, and if so, what is the best way to remedy this while still maintaining consistency between birds and study sites? 

Thanks! 

Christen Fleming

Feb 4, 2024, 3:53:05 PM
to ctmm R user group
Hi Tyler,

It should be a list of the top models. I've made a note to add an informative error message otherwise, but in the future that would result in a 3-level hierarchical model.
If the beta coefficients are being dropped from the top model, then please send me a minimal working example (list of top fit objects should do it).

That warning means that some parameters are near a boundary and selection would likely drop them.
The NaNs could be from an unsupported parameter (can't differentiate the likelihood w.r.t.), which a rare category could give you. If this is the issue, then more merging of categories would be required.
The NaNs could be from a bad reference category. The reference category needs to be sampled in every dataset and not just in every UD. If you require different reference categories within a population, then I can move that feature up in priority. As of now, mean() needs the same reference category.
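If further merging of rare categories is needed, it can be sketched with raster's reclassify() before refitting; the class codes and raster name below are illustrative:

```r
library(raster)

# two-column "is -> becomes" matrix merging rare classes into broad ones
rcl <- matrix(c(3, 1,   # rare class 3 -> terrestrial forest (1)
                4, 2),  # rare class 4 -> palustrine forest (2)
              ncol = 2, byrow = TRUE)
cover_merged <- reclassify(cover_raster, rcl)
```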

Best,
Chris

Tyler Hodges

Feb 5, 2024, 2:04:17 PM
to ctmm R user group
Hello Chris,

Great, thanks! The fit objects are inbound via email. 

Given that terrestrial forest is by far the most common type, and should be present within every dataset, I think the issue is likely the rare categories. I will try combining further or reworking how I have the cover types grouped. Thanks!

Best,
Tyler

Tyler Hodges

Feb 12, 2024, 4:21:38 PM
to ctmm R user group
Hello Chris,

I have another question for you pertaining to iRSFs. For several of the birds, the top model is the null and it's in the competing set for several others. Initially I thought this may be because the covariates I am using are poor predictors of used locations, but now I am wondering if the data are too sparse to accurately detect selection/avoidance? The total sample size for most ranges is only 40-100 points, so quite limited. 

Thanks!

Best,
Tyler

Christen Fleming

Feb 21, 2024, 7:08:52 PM
to ctmm R user group
Hi Tyler,

I think it should be fixed now. (There was an issue with the new functional response code.)

That can be. Low effective sample sizes can mean that the individual didn't spend much time in different habitats as compared to how long it takes for the individual to get around.

Best,
Chris

Tyler Hodges

Feb 21, 2024, 8:56:36 PM
to ctmm R user group
Fantastic, thanks Chris! 

Where mean() is concerned, in cases where the null model is in the competing set, do you recommend defaulting to the null as is usually recommended, or using the top model with covariates returned by rsf.select()? 

I do wonder if the homogeneity of forest structure within home ranges is also a contributing, or perhaps the driving, factor as to why the null was so often selected. From my core use-peripheral home range forest structure comparisons, it is evident that there is very little variation within home ranges. This is also apparent when overlaying the home ranges over structural rasters. However, the birds that did have non-null models clearly have some of the most heterogeneous structure, and their sample sizes often overlap with individuals whose top models were null. Certainly, something to contemplate further for my discussion. 

Thanks!
Tyler

Tyler Hodges

Feb 22, 2024, 1:26:44 PM
to ctmm R user group
Hello Chris,

Sorry for yet another question, but I am now encountering a couple issues with mean() that may or may not be related to the functional response code. I was able to average iRSFs with mean() for individuals with multiple ranges weighted by the time spent in each range, but I am having trouble calculating population averages. When I feed in a list of the top rsf.select() models alongside models produced via mean(), I run into the following error:

code-
managedrsf <- mean(list(almeanrsf, stevenmeanrsf, BrianRSF[[1]], JohnRSF[[1]], StevieRSF[[1]], RingoRSF[[1]], ElvisRSF[[1]], MickRSF[[1]]), IC = "AICc", trace = TRUE)

error-
Warning: coercing argument of type 'list' to logical
Error in mean.features(x, debias = debias, weights = weights, select = select, :
  'list' object cannot be coerced to type 'logical'

To try to diagnose the problem, I took out the two averaged iRSFs and substituted in the individual models that had been averaged together, and I instead encounter this error:

code-
managedrsf <- mean(list(Al1RSF[[1]], Al2RSF[[1]], Steven1RSF[[1]], Steven2RSF[[1]], BrianRSF[[1]], JohnRSF[[1]], StevieRSF[[1]], RingoRSF[[1]], ElvisRSF[[1]], MickRSF[[1]]), IC = "AICc", trace = TRUE)

error-
* Model selection for autocovariance distribution.
Error in sum(formula) : invalid 'type' (character) of argument

The latter error occurs later in the model fitting process. Do you have any idea what the issue could be? If you need a working example and/or fit objects to reproduce the error, let me know! 

Thanks,
Tyler

Christen Fleming

Feb 23, 2024, 11:32:48 PM
to ctmm R user group
Hi Tyler,

Generally, I would feed the selected models into mean(), but you can use a more complex model as long as the likelihood increases a bit with each parameter so that the covariance estimates look reasonable.

And please send me a minimal working example for the errors. I haven't seen those.

Best,
Chris

Tyler Hodges

Feb 24, 2024, 9:53:15 AM
to ctmm R user group
Thanks Chris! I just emailed you a working example. Let me know if you need anything else. 

I suppose my question was less about using a more complex model and more about using a less parameterized model (i.e. the null model). In some other model building/selection contexts, when the null is in the competing model set and you are following parsimony, that instead becomes your top/default model. I was wondering if that should be the case with rsf.select() and mean(), as several birds have top models that have covariates but with the null in the competing set.  

As always, thanks for the good discussions and assistance! 

Best,
Tyler

Christen Fleming

Feb 27, 2024, 8:54:58 PM
to ctmm R user group
Hi Tyler,

AIC and BIC reward parsimony, but with the objective of obtaining asymptotically optimal predictions (AIC) or consistency (BIC).

Best,
Chris

Tyler Hodges

Feb 28, 2024, 10:27:02 PM
to ctmm R user group
Hello Chris,

Noted, thanks! I suppose I need to dedicate some time to figuring out what is going on under the hood with these information criteria. 

Best,
Tyler
