Hello,
I am analysing some long-eared owl survey data for my colleagues. They have three surveys per site and a removal design: once an owl is detected, the site receives no further visits in that breeding season.
When we wrote the AHM1 book, we discovered that a removal-design occupancy model can be analysed "directly": we simply analyse the detection/nondetection array in which all values after each first detection are missing. This is conceptually simpler
than the usual alternative, which uses a categorical distribution with cell probabilities constructed in the (1-p1)(1-p2)p3 manner (for a site with a 0-0-1 history).
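The equivalence of the two formulations is easy to check numerically. The following is my own hedged sketch (function names are mine, not from any package): for a fixed detection probability p, the product of Bernoulli terms over the non-missing occasions of a removal history equals the categorical cell probability, shown here for a 0-0-1 history. Only the detection part, conditional on occupancy, is compared; the occupancy probability psi multiplies both expressions in the same way.

```python
def direct_contribution(history, p):
    """Product of Bernoulli terms over non-missing occasions only.

    'None' marks an NA after the first detection; those occasions
    simply drop out of the product, as in the 'direct' analysis."""
    out = 1.0
    for y in history:
        if y is None:
            continue
        out *= p if y == 1 else (1.0 - p)
    return out

def categorical_cell_prob(history, p):
    """Cell probability for 'first detection at occasion k':
    (1-p)^k * p, the removal-design construction."""
    k = history.index(1)  # 0-based occasion of first detection
    return (1.0 - p) ** k * p

p = 0.3
hist = [0, 0, 1, None, None]  # a 0-0-1 history with removal NAs afterwards
print(direct_contribution(hist, p))       # (1-p1)(1-p2)p3
print(categorical_cell_prob([0, 0, 1], p))
```

Both calls return (1-p)^2 * p, so the "direct" NA analysis and the categorical construction describe the same likelihood contribution.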
However, because I had some vague doubts, and because simulation is so easy (and fun), I conducted a test: I simulated 100,000 data sets with 1000 sites, 5 occasions, and constant psi = 0.4 and p = 0.3. I then made a copy of each data set and
turned it into a removal design by changing to NA all data after the first detection at a site. I analysed both versions in unmarked with an intercepts-only static occupancy model (code can be obtained on request).
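For anyone who wants to replicate the experiment without unmarked, here is a hedged sketch of the same idea in Python (my own code, not the R code mentioned above): it simulates a single data set under the same settings, makes the removal-design copy, and fits the intercepts-only static occupancy model to both by direct maximisation of the likelihood with scipy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_sites, n_occ, psi_true, p_true = 1000, 5, 0.4, 0.3

# Simulate latent occupancy and detections
z = rng.binomial(1, psi_true, n_sites)                    # occupancy state
y_full = rng.binomial(1, p_true, (n_sites, n_occ)) * z[:, None]

# Removal-design copy: NA out everything after a site's first detection
y_rem = y_full.astype(float)
for i in range(n_sites):
    det = np.flatnonzero(y_full[i])
    if det.size:
        y_rem[i, det[0] + 1:] = np.nan

def negloglik(theta, y):
    """Negative log-likelihood of the intercepts-only occupancy model;
    NA occasions are simply skipped, i.e. the 'direct' analysis."""
    psi = 1.0 / (1.0 + np.exp(-theta[0]))   # inverse-logit
    p = 1.0 / (1.0 + np.exp(-theta[1]))
    ll = 0.0
    for row in y:
        obs = row[~np.isnan(row)]
        cond = np.prod(np.where(obs == 1, p, 1.0 - p))  # P(history | occupied)
        all_zero = not obs.any()
        ll += np.log(psi * cond + (1.0 - psi) * all_zero)
    return -ll

fit_full = minimize(negloglik, [0.0, 0.0], args=(y_full.astype(float),))
fit_rem = minimize(negloglik, [0.0, 0.0], args=(y_rem,))

inv_logit = lambda x: 1.0 / (1.0 + np.exp(-x))
print("full data:  psi, p =", inv_logit(fit_full.x))
print("removal:    psi, p =", inv_logit(fit_rem.x))
```

With one data set rather than 10^5 the Monte Carlo noise is of course larger, but both fits should land near psi = 0.4 and p = 0.3.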
The pictorial results (showing the frequency distribution of the 10^5 estimates) are here; red shows the true values, blue the mean, and black the median:
Looking at the bottom-left plot, there may be a hint of a positive bias ... perhaps ? The tabular comparison is here:
parameter   truth   mean.mle.fullData   mean.mle.remData
psi         0.4     0.4005170           0.4022674
p           0.3     0.2998944           0.2999015
This suggests to me that there is no bias.
I share all of this because:
- some may have doubts about how to analyse removal-design occupancy models (it's totally easy), and
- perhaps my simulation-based, brute-force answer to my question does not stand up to scrutiny by exact analytics ... (though I now doubt there is a problem).
Thanks and best regards --- Marc