Calculating Fletcher c-hat and applying adjustment to a secr.fit model


Jennifer E Nelson

Jun 23, 2020, 9:21:43 PM
to secr

Hello secr users,


I am a Master's student at Oregon State University, USA. My graduate research focuses on estimating the density of Roosevelt elk (Cervus canadensis roosevelti) in Oregon’s Coast and Cascade Ranges. We have two study areas (~3,700 and 6,850 km²) from which we collected fecal pellets from transects in 2018 and 2019 using a clustered design and one occasion each year.


I determined from herd composition data, collected during aerial surveys, that it is not unusual for elk in my study areas to congregate in large herds. Attached (HerdPlot.png) is a histogram of the number of individuals in the herds observed, where the mean = 17 and the median = 14 individuals. In the second study area, herds sometimes comprised 100+ individuals and mean group size was 30. I read “Consequences of ignoring group association in spatial capture-recapture analysis”, which recommends testing for overdispersion by calculating Fletcher's c-hat and then using c-hat as a variance adjustment to gain accurate density estimates, especially when individuals tend to aggregate in groups of more than 8 individuals.


The supplemental material to this paper provides code to calculate c-hat within a function that was used to create a simulated data set. I cannot find or understand how they then applied the adjustment once c-hat was calculated. I have attached the paper and supplemental material to this email. A huge thank you to the authors for conducting this research.


I am wondering if someone can advise me on whether it is possible to calculate c-hat from a secr.fit model fitted to data, and if so, how to do that and then apply the adjustment. Please let me know if you need more information to answer this question.


Thank you in advance,


Jennifer E. Nelson
MSc Student
Department of Fisheries and Wildlife
Oregon State University

Attachments: HerdPlot.png (histogram of observed herd sizes), wlb.00649.pdf, wlb-00649_1.pdf

Murray Efford

Jun 24, 2020, 7:03:43 PM
to secr
Hello Jennifer
I think the authors of the paper are best placed to address this, and I've sent a note to one, but you may want to try them directly if nothing comes through soon. It would be helpful if 'secr' provided Fletcher's c-hat, but I haven't yet figured out the best way to do it.
Murray

ri bi

Jun 25, 2020, 3:42:08 AM
to secr

Hi Jennifer,


Before I try to answer your question, just a quick comment: the correction for overdispersion does not give you more accurate point estimates of your parameters. It adjusts your variance estimates, meaning you get a measure of (im)precision associated with your parameter estimate that more closely reflects the true underlying uncertainty. 


Calculation of c-hat based on a SECR model fitted to empirical data: I would use the script section for calculating c-hat at the end of the runone() function in the supplement. Since you are using empirical data, you will need to use your SECR estimates of density and of the intercept and scale parameter of the detection function to calculate the expected number of unique animals detected at each detector (“expected.nk” in the script). Note also that the model in our example assumes that density and baseline detection probability are homogeneous across the habitat (so expected.nk is the same throughout the detector grid/space). A model that accounts for heterogeneous density (e.g., through a spatial covariate on density), aside from the clustering of individual activity centers into groups, would presumably require a different calculation of expected.nk.
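As a minimal sketch only: the general form of Fletcher's c-hat, computed from observed and expected per-detector counts, can be written as below. Here `nk` and `expected.nk` are hypothetical vectors standing in for the observed and model-expected numbers of unique individuals per detector; the exact formula in the supplement may differ (e.g. in the degrees-of-freedom divisor), so use the supplement's code for a real analysis.

```r
## Sketch of Fletcher's c-hat from per-detector counts (assumptions noted above)
## nk          : observed number of unique individuals at each detector
## expected.nk : expected count under the fitted model (homogeneous density)
fletcher.chat <- function(nk, expected.nk) {
  sbar <- mean((nk - expected.nk) / expected.nk)   # mean relative residual
  Xsq  <- sum((nk - expected.nk)^2 / expected.nk)  # Pearson chi-square statistic
  (Xsq / length(nk)) / (1 + sbar)                  # Pearson c-hat scaled by (1 + s-bar)
}

## Toy check: counts matching expectation exactly give 0; strongly
## overdispersed counts give values well above 1
fletcher.chat(c(2, 4, 6), c(2, 4, 6))    # -> 0
fletcher.chat(c(0, 0, 12), c(4, 4, 4))   # -> 8
```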


Correcting variance using chat: You can multiply the variance of the SECR-estimated density D with c-hat. Then take the square root of that value to obtain the standard error, which you can use further to calculate the adjusted confidence interval:

adjusted.SE = sqrt(Var(D) * c-hat)

95% confidence limits = D +/- 1.96 * adjusted.SE (assuming normally distributed error)

In our example, a log-link function was used in the submodel for density, so exponentiate the point estimate and the adjusted confidence limits to get back to the original scale.
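Numerically, the adjustment above might look like the following sketch; D.hat, var.D, var.logD and chat are made-up values standing in for the secr.fit density estimate, its unadjusted variances, and the computed c-hat.

```r
## Hypothetical inputs (stand-ins for secr.fit output and Fletcher's c-hat)
D.hat <- 0.8    # estimated density
var.D <- 0.01   # unadjusted variance of D.hat
chat  <- 2.5    # Fletcher's c-hat (> 1 indicates overdispersion)

## Adjustment on the natural scale
adjusted.SE <- sqrt(var.D * chat)
ci <- D.hat + c(-1.96, 1.96) * adjusted.SE

## With a log link on density, adjust the variance on the log scale,
## then exponentiate to return to the density scale
var.logD <- 0.015                       # hypothetical variance of log(D.hat)
ci.log   <- exp(log(D.hat) + c(-1.96, 1.96) * sqrt(var.logD * chat))
```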


Best,

Richard

Jennifer E Nelson

Jun 28, 2020, 11:23:50 AM
to secr

Thank you, Richard, Murray, and Dan,


I appreciate your quick responses with the resources and advice. This gives me a good direction. I’ll reach out if I have further questions.


Thank you, again!


Jennifer
