Dear all,
I have recently returned to a baboon density analysis I ran last year using camera trap distance sampling (CTDS) and the random encounter model (REM). In general the estimates themselves are as expected, but the CTDS estimate in Survey 2 has markedly lower precision than that of Survey 1, or than the REM estimate for the same survey, and I am trying to make sense of why that might be. Table of key figures below.
The number of sites at which baboons were detected differed greatly between the two surveys: Survey 1 picked them up at 21/24 sites, whereas Survey 2 did so at only 26/59, and the counts at the sites that did detect them were fairly even in Survey 1 but far more variable in Survey 2. My working theory is that this encounter rate variance is the reason for the much poorer precision of CTDS in Survey 2, and that CTDS is more sensitive to this overdispersion than REM, but I am unsure exactly why. My initial thinking was that the way observations accumulate must cause greater variability in CTDS than in REM: REM uses only first contacts, whereas in CTDS a baboon group sitting in front of a camera for a long period adds another observation at every snapshot moment, something that cannot happen under REM. Does this make sense as a possible explanation for such a marked difference, or is there an additional or alternative cause that I have not considered?
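To illustrate the site-level pattern I am describing, here is a very rough Python sketch; the per-site counts are invented purely to mimic the imbalance and are not my real data:

import numpy as np

# Hypothetical per-site observation totals (NOT real data): fairly even
# across sites in Survey 1, zero-heavy and highly skewed in Survey 2.
s1_counts = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 11, 13,
                      15, 10, 12, 14, 9, 13, 12, 11, 15, 0, 0, 0])        # 21/24 sites
s2_counts = np.array([0] * 33 + [2, 3, 1, 4, 2, 5, 3, 1, 2, 60, 85, 4, 2,
                                 3, 1, 2, 110, 3, 2, 4, 1, 2, 3, 95, 2, 1])  # 26/59 sites

def encounter_rate_cv(counts):
    """Empirical CV of the per-site encounter rate (equal effort assumed)."""
    return counts.std(ddof=1) / counts.mean()

print("Survey 1 encounter-rate CV:", round(encounter_rate_cv(s1_counts), 2))
print("Survey 2 encounter-rate CV:", round(encounter_rate_cv(s2_counts), 2))

With counts skewed like this the Survey 2 CV comes out several times higher than the Survey 1 CV, which is the sort of contrast I have in mind.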
Obviously the group-living nature of chacma baboons, combined with the use of camera traps, severely violates the independence assumption and creates substantial overdispersion, so I used QAIC for model selection; but I suspect that where the overdispersion is this severe, the encounter rate variance will remain high regardless of whether the correct detection model is selected.
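For what it is worth, my understanding of the QAIC adjustment is sketched below (not the actual code I ran; c_hat is just the chi-square goodness-of-fit statistic divided by its degrees of freedom), which is why I think it only helps with choosing the detection function and does nothing to shrink the encounter rate variance itself:

def qaic(log_lik, n_params, c_hat):
    """Quasi-AIC: the log-likelihood is deflated by the overdispersion
    factor c_hat before the usual parameter penalty is added."""
    return -2.0 * log_lik / c_hat + 2.0 * n_params

# Invented values comparing, say, a half-normal (1 parameter) against a
# hazard-rate (2 parameter) detection function under the same c_hat:
print(qaic(log_lik=-350.2, n_params=1, c_hat=4.8))
print(qaic(log_lik=-344.7, n_params=2, c_hat=4.8))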
I also noted that for both surveys the CTDS estimate was higher than the REM estimate, with the effective detection angle smaller in CTDS than in REM in both surveys, while the effective detection distance was slightly smaller in REM. These were each calculated from their respective data sets, but I was wondering whether the two methods should in fact share the same values, and perhaps values derived from the REM data, since those observations are the animals actually triggering the camera and so should define the detection zone more accurately, whereas the CTDS data contain every observation at every snapshot and therefore include many individuals that were not themselves triggering, and so defining, the detection zone. Is that sound and justifiable thinking? I had already taken the same contacts-only approach for the activity calculation, because using all observations gave very unrealistic activity levels (e.g. 100% or 3%).
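To show why I think the two methods should share the same detection-zone values, here is a bare-bones sketch of where the angle, the radius and activity enter each estimator as I understand them (CTDS after Howe et al. 2017, REM after Rowcliffe et al. 2008); the parameter names are just my own shorthand:

import math

def ctds_density(n_obs, n_snapshots, theta, w, p_detect, activity):
    """CTDS-style point estimate: animals counted per snapshot over the
    camera wedge area (theta * w^2 / 2), corrected for the detection
    probability within w and for the proportion of time active."""
    covered_area = theta * w**2 / 2.0
    return n_obs / (n_snapshots * covered_area * p_detect * activity)

def rem_density(contacts, cam_days, speed, radius, theta, activity):
    """REM-style point estimate: trigger rate scaled by the detection-zone
    profile; day range taken as speed-while-active times activity level."""
    day_range = speed * activity
    return (contacts / cam_days) * math.pi / (day_range * radius * (2.0 + theta))

Because the angle and the radius sit directly in both denominators, feeding the two estimators different values will pull the point estimates apart on its own, which is partly why I would like to settle on one consistent set.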
Many thanks in advance for taking the time to read this!
Jamie