the Distance package, help us understand the estimates you wish to obtain. Figures are useful in understanding the design of your experiment.

Eric, thank you very much for your detailed reply. I really appreciate it.
Yes, the depiction is accurate (thanks for drawing it), and I do have enough detections to fit a separate detection function for each of the 3 rounds of surveys, which would give me estimates for the 2 levels of my Region.Label (i.e. control/treatment sites). That’s what you mean, right? Not 6 detection functions (i.e. one for each treatment type within each round of survey)?
To answer your question about how I expect the treatment to manifest: I expect the effect to be persistent (affecting both surveys 2 and 3). In that case, should I fit a detection function for surveys 2 and 3 combined, and do post-stratification? Or would you still simply fit a separate detection function for each round of survey?
PS: Thanks for sending that reference, I’ll look into that approach as well.
Thanks,
Fernanda.
Yes, that is correct. Jo, more details below, which will be more than most will want to know!
You can consider how a count at a given point would be converted to a density estimate. By dividing the point count by the effective area, you get estimated density. Another way to view this is to divide the point count by the estimated probability of detection, which gives an estimate of abundance within the circle of radius w about that point, where w is the truncation distance – detected animals beyond w are not included in the count. Then to get from that abundance estimate to a density estimate, you divide by the area of the circle. At least this is the case when the point is surveyed just once. If it is visited, say, t times, you would first divide the count by t, to get the mean count per visit.
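The arithmetic above can be sketched numerically. All the values here are hypothetical (not from the thread); the point is just the order of the divisions: count → mean count per visit → abundance within the circle → density.

```python
import math

# Hypothetical numbers illustrating the conversion from a point count
# to a density estimate (none of these values come from the thread).
n = 12        # animals detected at the point, summed over visits
t = 3         # number of visits to the point
p_hat = 0.4   # estimated probability of detection within w
w = 100.0     # truncation distance (same length unit as the density)

n_per_visit = n / t                        # mean count per visit
abundance_in_circle = n_per_visit / p_hat  # estimated abundance within radius w
area = math.pi * w**2                      # area of the circle of radius w
density = abundance_in_circle / area       # animals per unit area

print(density)
```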
The issue now is that we don’t have a good error model for the estimated densities at each point, but we do have suitable models for counts – e.g. Poisson or negative binomial. So instead of taking the estimated density as the response, we take the count, and move the terms that convert it to a density estimate onto the right-hand side of the model, as a so-called offset. This only works if we use a GLM with a log link function; the offset is then the log of the terms taken onto the RHS, and is included in the exponent: E(n) = exp(linear predictor + offset). In most applications the offset would be known, but here we only have an estimate of it. We handled that by propagating the uncertainty in estimating the probability of detection through to the count model, using a bootstrap. Mark Bravington came up with a more sophisticated and less computer-intensive way to achieve the same thing. Or you can be Bayesian, and estimate the offset along with the count model parameters in a single step.
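The offset algebra can be checked numerically. This is a minimal sketch with made-up values (not from the thread): for an intercept-only log-link model, exp(linear predictor + offset) should equal the expected count computed directly as density × visits × detection probability × circle area.

```python
import math

# Assumed illustrative values, not taken from the thread.
density = 0.002          # true density, animals per unit area
t, p, w = 3, 0.4, 100.0  # visits, detection probability, truncation distance

# E(n) = density * t * p * pi * w^2, written on the log scale as
# log E(n) = linear_predictor + offset, with the conversion terms
# moved to the RHS as the offset:
offset = math.log(t * p * math.pi * w**2)
linear_predictor = math.log(density)  # intercept-only model: beta0 = log(density)

expected_count = math.exp(linear_predictor + offset)
# The same quantity computed directly, without the log-link rearrangement:
direct = density * t * p * math.pi * w**2

print(expected_count, direct)  # the two agree
```

In a fitted model the linear predictor would contain covariates rather than a single intercept, and the offset would carry the estimated (not known) detection probability, which is why its uncertainty has to be propagated.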
Steve