Jim, I would re-run your model without any adjustment terms (select the adjustments manually, and specify zero of them). This ensures that the fitted curves are non-increasing. If the AIC gets appreciably worse, that would suggest an assumption failure. For example, you may have some avoidance of the line (in which case the above should ‘average out’ the shortage of detections near the line with the excess away from it), or animals close to the line may be being missed (as occurs, for example, with aerial surveys; left-truncation may work in that case).
Steve Buckland
> Thanks, Steve. Yes, removing the cosine adjustment (/Adjust=CO) keeps things below 1.0. I'll probably fit the series of data sets I have with the cosine adjustment and without the cosine adjustment and pick the smallest AICc, except when any of the estimates exceed 1.0. Does that sound too convoluted?
That should be OK. Personally, I’m not a fan of using adjustment terms in mcds. They are there to put ‘wiggles’ in the detection function, if one of the key functions cannot fit the data well on its own, so giving greater flexibility. Modelling covariates in the detection function also gives greater flexibility, and it’s not often that you get a big improvement in fit by including adjustments in addition to covariates, unless there are problems with the data (e.g. substantial rounding, responsive movement, etc).
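The selection rule quoted above (smallest AICc, discarding any fit whose detection function exceeds 1) can be sketched as below. The helper, the candidate names, and all the numbers are hypothetical illustrations, not output from Distance; only the AICc formula itself is standard:

```python
import math

def aicc(log_lik, k, n):
    """Small-sample AIC: AIC plus the correction 2k(k+1)/(n-k-1)."""
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidate fits: (name, log-likelihood, no. parameters, max fitted g(x)).
n = 120  # number of detections (made up)
candidates = [
    ("half-normal, no adjustment", -310.2, 1, 1.00),
    ("half-normal + cosine",       -308.9, 2, 1.27),  # g(x) exceeds 1: discard
]

# Keep only fits whose detection function never exceeds 1, then pick smallest AICc.
valid = [c for c in candidates if c[3] <= 1.0]
best = min(valid, key=lambda c: aicc(c[1], c[2], n))
print(best[0])
```

This just formalises the "pick the smallest AICc except when any of the estimates exceed 1.0" rule as a filter-then-minimise step.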
> I'm assuming that while the estimation procedure does force the probability to be 1.0 at zero distance, the likelihood (and therefore the AICc) is based on a general curve-fitting procedure rather than restricting the maximum estimate to be no greater than 1.0 (especially when using the cosine adjustment).
If you have no adjustments, this cannot occur, because the models for the detection function are non-increasing. Once you include adjustments, however, there is no guarantee that the fitted probability density function is non-increasing. Because we obtain the fitted detection function by scaling the density function so that it equals one at zero distance, that detection function may then rise above one at other distances. The cds engine of Distance deals with this by incorporating a penalty term in the likelihood, which results in the fitted function being (almost) non-increasing. The mcds engine does not have this feature.
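The mechanism can be seen with a toy example: a half-normal key times a single order-2 cosine adjustment, rescaled so the curve equals 1 at zero distance. All the parameter values below are made up for illustration (this is not the mcds fitting code); with a negative adjustment coefficient the rescaled curve rises well above 1 away from the line:

```python
import math

W = 1.0       # truncation distance (arbitrary units, illustrative)
SIGMA = 1.0   # half-normal scale parameter (illustrative)
A2 = -0.3     # order-2 cosine adjustment coefficient (illustrative)

def unscaled(x):
    """Half-normal key times a one-term cosine adjustment series."""
    key = math.exp(-x * x / (2 * SIGMA ** 2))
    series = 1 + A2 * math.cos(2 * math.pi * x / W)
    return key * series

def g(x):
    """Detection function: unscaled curve rescaled so g(0) = 1."""
    return unscaled(x) / unscaled(0)

xs = [i / 200 * W for i in range(201)]
peak = max(g(x) for x in xs)
print(f"g(0) = {g(0):.2f}, max g(x) = {peak:.2f}")  # peak well above 1
```

Nothing in the unconstrained likelihood penalises this shape, which is why a monotonicity constraint (or dropping the adjustment) is needed to rule it out.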
Steve
Sarah, I suspect you’re just mis-interpreting the plots. The detection function is the smooth curve, and should never exceed 1. It should be exactly one at zero distance. The histogram bars are NOT estimates of detection probability, but are there to help judge whether the fitted detection function fits the data well.
(If you add adjustment terms and don’t impose a monotonicity constraint, it is possible for the curve to go above one, which would probably indicate a problem with the data.)
Steve