Glad you are finding our materials useful. At first glance, I'm not too concerned about the radial distance histogram you provided, nor the hazard-rate fit to it. The PDF plot (bottom right) is more informative than the usual detection function plot (bottom left). Note that the fitted hazard-rate function doesn't try to fit the peak you describe.
A more likely culprit is a miscalculation of effort. Revisit the number of snapshot moments in your two hours of sampling effort per day. We can discuss further offline, if needed.
I am new to distance sampling and have found this group and the distancesampling.org materials extremely helpful in teaching myself how to apply this methodology in a camera trapping study, so thank you all so much for that!
I am, however, wondering whether I did everything correctly in my study. Going through the steps outlined in the worked example (Analysis of camera trapping data, examples.distancesampling.org), all models ran successfully and yielded density estimates with a decent CV (0.24) after 1000 bootstraps, but my density estimate itself was ~275 animals/km2. I am studying a non-native ungulate on an island where it is abundant and likely overpopulated, but I am concerned that I have unknowingly done something in my modelling or data collection to cause bias and overestimation, as this is a very high density estimate. This population has never been studied before, so I have nothing to compare my results to directly, though similar systems have reported densities that are not statistically significantly different from mine.
I am happy to provide more detailed information about the study design, results, etc. off list but here is a brief description.
Cameras were deployed at 23 sites for ~900 active trap days (full 24-hr days, no malfunctions or researcher visits), resulting in ~7000 videos of the target species. Because of this high number of videos, I limited analysis to peak activity times (2 hours total), as determined by a histogram of video start times (~1000 videos remained). After excluding videos with obvious reactivity to the camera, I was left with ~500 videos and pulled ~5000 distance measures from them. Distances to each individual's midpoint in the FOV at the snapshot moment (t = 2) were recorded. These data produced the attached histogram of detection distances, which I thought looked OK compared with histograms in similar published studies (Howe et al. 2017, Bessone et al. 2020). The data were best fit by a hazard-rate model without adjustments, as seen in the detection probability graph, yielding the PDF in the neighboring graph. I initially overlooked the extremely high first bin in the detection probability histogram, but I have a feeling something is wrong there.
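For what it's worth, here is a rough sketch of the temporal effort that design implies. The values are assumptions pulled from the description above (a 2-hour daily peak-activity window, snapshot interval t = 2 s, ~900 trap days), not from the actual analysis:

```r
# Sketch only: values assumed from the study description above.
t <- 2                            # snapshot interval, in seconds
peak.secs <- 2 * 60 * 60          # 2-hour daily peak-activity window, in seconds
moments.per.day <- peak.secs / t  # snapshot moments per camera per day
trap.days <- 900                  # approximate active trap days across all sites
total.effort <- moments.per.day * trap.days
moments.per.day                   # 3600
total.effort                      # 3240000 snapshot moments
```

Since the density estimator scales inversely with effort, undercounting snapshot moments (e.g. using the wrong snapshot interval, or the wrong daily window length) inflates the density estimate, which could produce the kind of overestimate described above.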
If anyone sees any red flags that would explain an overestimation or has suggestions I would appreciate their insights.
You received this message because you are subscribed to the Google Groups "distance-sampling" group.
To unsubscribe from this group and stop receiving emails from it, send an email to distance-sampl...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/distance-sampling/fe6d2062-a1d2-4af7-9ae8-c6e41868a3a0n%40googlegroups.com.
--
Eric Rexstad
Centre for Ecological and Environmental Modelling
University of St Andrews
St Andrews is a charity registered in Scotland SC013532
I've walked through the calculation of effort in the "peak activity" analysis of the Maxwell's duiker data presented in Howe et al. (2017), which is the data set shipped with the Distance package. Follow this description of Howe et al. (2017:1560, bottom of column 2):
"Maxwell's duikers were sampled from 28 June through 21 September 2014... Second, we assumed that all animals were available only during apparent times of peak activity (6.30.00–8.59.59 h and 16.00.00–17.59.59 h) and recalculated temporal effort and censored distance observations accordingly (Tk/t per day = 8098)"
Converted into R code (using the `hms` package for time calculations; `camera.days` below stands in for the number of days each camera operated):

library(hms)

# The two daily peak-activity periods
startp1 <- as_hms("06:30:00")
endp1   <- as_hms("08:59:59")
startp2 <- as_hms("16:00:00")
endp2   <- as_hms("17:59:59")

# Duration of each period, in seconds
dur.p1 <- difftime(endp1, startp1, units = "secs")
dur.p2 <- difftime(endp2, startp2, units = "secs")

# Snapshot moments per day, with snapshot interval t = 2 s
snapshot.interval <- 2
moments.m2 <- ((as.numeric(dur.p1) + as.numeric(dur.p2)) / snapshot.interval) - 1

# Total temporal effort: moments per day times days of camera operation
effort.m2 <- floor(moments.m2) * camera.days
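As a quick sanity check, the arithmetic alone (assuming the snapshot interval t = 2 s from Howe et al. 2017) reproduces the published moments-per-day value:

```r
# Durations of the two peak periods, in seconds:
# 06:30:00-08:59:59 is 8999 s; 16:00:00-17:59:59 is 7199 s
t <- 2
moments.per.day <- ((8999 + 7199) / t) - 1
moments.per.day  # 8098, matching "Tk/t per day = 8098" in Howe et al. (2017)
```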
Happy to hear your effort calculations are sorted out.
With regard to `bootdht` and `sample_fraction`: the code you provided does not include `sample_fraction` in your call to `bootdht`. Specify it as an argument; the full argument list is:
function (model, flatfile, resample_strata = FALSE, resample_obs = FALSE,
resample_transects = TRUE, nboot = 100, summary_fun = bootdht_Nhat_summarize,
convert.units = 1, select_adjustments = FALSE, sample_fraction = 1,
multipliers = NULL)
If you provide the sampling fraction to `bootdht`, your results should work out better. Let us know how you get on.
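As a sketch of what that might look like (the object names `hr.model` and `camera.flatfile` are placeholders, not from your code; the sample fraction of 0.117 is the value used elsewhere in this thread):

```r
# Placeholder sketch: hr.model and camera.flatfile stand in for your fitted
# detection function and flatfile data. The key point is passing
# sample_fraction explicitly rather than leaving it at its default of 1.
daytime.boot.hr <- bootdht(model = hr.model,
                           flatfile = camera.flatfile,
                           resample_transects = TRUE,
                           nboot = 100,
                           summary_fun = bootdht_Nhat_summarize,
                           sample_fraction = 0.117)  # fraction of the day sampled
summary(daytime.boot.hr)
```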
I did a quick check of the effect of altering the `sample_fraction` argument in `bootdht`. Indeed, changing the value of that argument *does* have an impact upon the density estimates reported by `bootdht`:
> summary(daytime.boot.hr)  # sample_fraction = 0.117
Bootstraps : 50
Successes  : 48
Failures   : 2

       median   mean    se   lcl    ucl    cv
Dhat    19.94  18.74  7.48  7.13  31.37  0.38

> summary(nofrac.boot.hr)   # sample_fraction = 1.0
Bootstraps : 50
Successes  : 50
Failures   : 0

       median   mean    se   lcl    ucl    cv
Dhat     2.80   2.91  1.09  1.29   5.68  0.39
So I think `bootdht` is making use of the `sample_fraction` argument as intended.
The results I sent you yesterday were from the development version, from a GitHub branch. I'm happy to walk you through installing the development version of the package off-line. Then you can give it a test and report back to the list with your findings.