Hi John,
Yes, you understood perfectly what I was trying to explain. Thanks for your response; it has given me a great starting point for further reading to better understand over- and under-dispersion and how to deal with them.
Would you mind explaining what you meant by "aggregating across cells in the detection history", and what the implications of doing or not doing that might be? Sorry if that is a basic question; I have only just started with occupancy analyses and haven't come across that yet.
I took your suggestion and ran mb.gof.test on the same model. The output below suggests that the model is actually over-dispersed, rather than under-dispersed:
mb.gof.test(MAWA.elev, nsim = 1000, plot.hist = TRUE)

MacKenzie and Bailey goodness-of-fit for single-season occupancy model

Pearson chi-square table:

    Cohort Observed Expected Chi-square
000      0     1797  1795.98       0.00
001      0       22    15.65       2.58
010      0       12    15.65       0.85
011      0       24    22.39       0.12
100      0       14    15.65       0.17
101      0       13    22.39       3.94
110      0       14    22.39       3.14
111      0       64    49.91       3.98

Chi-square statistic = 14.778
Number of bootstrap samples = 1000
P-value = 0.011

Quantiles of bootstrapped statistics:
  0%  25%  50%  75% 100%
0.15 2.88 4.46 6.72 18.99

Estimate of c-hat = 2.88
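If it helps, my understanding from the AICcmodavg help page is that this c-hat is the observed chi-square divided by the mean of the bootstrapped chi-squares. As a sanity check I reproduced it from the returned object (the element names chi.square and t.star are my reading of the documentation, so please correct me if I have them wrong):

library(AICcmodavg)

gof <- mb.gof.test(MAWA.elev, nsim = 1000, plot.hist = TRUE)
# observed chi-square / mean of bootstrapped chi-squares;
# this matched the printed c-hat of 2.88 for my model
gof$chi.square / mean(gof$t.star)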
I also calculated the c-hat from the parboot() test that I described in my original post, using cHat_pb <- pb@t0 / mean(pb@t.star), and got a result of c-hat = 0.72.
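In case it matters for diagnosing the difference, this is roughly what that parboot() test looked like. The fitstats function here is a stand-in (I used a sum-of-squared-errors statistic for illustration), so the exact statistic in my original post may differ:

library(unmarked)

# Stand-in fit statistic (SSE); the one in my original post may differ
fitstats <- function(fm) {
  observed <- getY(fm@data)
  expected <- fitted(fm)
  c(SSE = sum((observed - expected)^2, na.rm = TRUE))
}

pb <- parboot(MAWA.elev, statistic = fitstats, nsim = 1000)
cHat_pb <- pb@t0 / mean(pb@t.star)  # observed statistic / mean of bootstrap statistics
cHat_pb                             # came out at 0.72 for my model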
So now I am left wondering how these two measures of goodness of fit can point in opposite directions: c-hat = 2.88 from mb.gof.test suggests over-dispersion, while c-hat = 0.72 from parboot() suggests under-dispersion.
Your thoughts are greatly appreciated!
Thanks,
Jenna