I am trying to obtain density estimates for a large number of point survey sites for certain bird species. My hope is to obtain one density estimate per site so I can compare the expected number of birds across sites. Each site has 1 or 2 visits, where point counts were performed with distance, group size, and covariates recorded.
I am using the ds() function to fit detection models, looping over CDS and MCDS candidate models. I also fit a null model with no adjustments or covariates. Region.Label is the identifier for the individual survey points. I included survey points with 0 observations of the species of interest, and an Effort column that reflects the number of visits each point received (1 or 2). The Area column is set to 0, since I'm interested in density rather than abundance for now.
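To illustrate the layout (the site labels and values below are invented for the example; the column names match my actual data), the flat file looks something like this:

```r
# Illustrative rows only -- values are made up.
# Each row is one detection; a point with no detections still gets a
# row (with NA distance) so its Effort enters the analysis.
d <- data.frame(
  Region.Label = c("pt01", "pt01", "pt02", "pt03"),
  Sample.Label = c("pt01", "pt01", "pt02", "pt03"),
  Area         = 0,                 # 0 => ds() reports density, not abundance
  Effort       = c(2, 2, 1, 2),     # number of visits to each point
  distance     = c(34, 51, 12, NA), # metres; NA = no detections at pt03
  size         = c(1, 3, 2, NA)     # group size
)
```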
The best model by far, based on AIC, is a hazard-rate model using group size as the sole covariate (AIC = 1771.35). However, when looking at the density estimates I realized that they are extremely high compared to literature values (by at least one order of magnitude).
Furthermore, I realized that the null model with no adjustments or covariates had much more accurate density estimates, even though it is a worse model based on AIC and goodness-of-fit (AIC = 2004.737).
I'm wondering if someone could give me some guidance as to why the estimates are so inflated, and whether there is anything I can do to fix this. The fact that they're so different makes me think I'm doing something wrong when introducing the covariates, but I'm not seeing where.
Here are the two ds() models I am using:
null:

    m.null.hn <- ds(data = d,
                    formula = ~1,
                    transect = "point",
                    key = "hn",
                    order = 0,
                    er_var = "P3",
                    truncation = "15%",
                    convert_units = conversion.factor)
covariate model:

    mbest <- ds(data = d,
                formula = ~as.factor(size.factor),
                transect = "point",
                key = "hr",
                order = 0,
                er_var = "P3",
                truncation = "15%",
                convert_units = conversion.factor)
where conversion.factor = 0.01 (since distances are measured in meters and I want density per hectare).
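As a sanity check on the units, the Distance package's convert_units() helper can compute this factor directly (for point transects the effort units are NULL, since effort is a visit count rather than a length):

```r
library(Distance)

# Distances in metres, density reported per hectare, point transects.
conversion.factor <- convert_units("metre", NULL, "hectare")
conversion.factor  # 0.01
```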