Unusually low detection probability obtained from pcount data?

Avery Driscoll

Jul 27, 2017, 6:50:14 AM
to unmarked
My name is Chris; I'm a biologist just learning to use unmarked. I have 3 replicated visits to 19 point-count locations, in this case for Baird's Trogon in Costa Rica. The data look like this:
     X Occasion1 Occasion2 Occasion3
1  N01         1         0         0
2  N02         0         2         0
3  N03         1         1         0
4  N04         0         0         0
5  N05         0         1         0
6  N06         0         0         0
7  N07         1         1         0
8  N08         0         1         0
9  N09         0         1         1
10 S01         1         1         1
11 S02         0         0         1
12 S03         0         0         0
13 S04         1         1         0
14 S05         0         0         0
15 S06         0         0         1
16 S07         2         0         0
17 S08         0         1         0
18 S09         1         0         0
19 S10         0         0         0

batrog.frame <- unmarkedFramePCount(y = batrog.y)
mNull <- pcount(~1 ~1, data = batrog.frame)
backTransform(mNull, type = "det")

Backtransformed linear combination(s) of Detection estimate(s)

 Estimate      SE LinComb (Intercept)
  0.00573 0.00635   -5.16           1
I ran a model with the pcount function, using an intercept-only model for both state and detection. It is estimating 67 trogons per point (each with a 100 m radius), with a detection probability of 0.005! I have worked through every example I can find online and can't find any mistakes in my short bit of code. My mentors all say this detection probability is way too low for this sort of data, and my density should be 1-5 birds at most. Any ideas what I am doing wrong? Thanks!
BATROG.csv

JOSE JIMENEZ GARCIA-HERRERA

Jul 27, 2017, 7:35:47 AM
to unma...@googlegroups.com

Hi Chris,

 

In my opinion, to fit an N-mixture model with your data you need more sites and/or to use covariates. If you "reduce" your data (something like y[y>1] <- 1) to make an occupancy model, you will obtain a high occupancy and a realistic detection probability of 0.351 (SE: 0.0644).
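Jose's collapse-and-refit suggestion can be sketched in R (a sketch assuming the count matrix `batrog.y` from the original post; `occu` is unmarked's single-season occupancy model):

```r
library(unmarked)

# Collapse counts to detection/non-detection: any count > 0 becomes 1
batrog.det <- batrog.y
batrog.det[batrog.det > 1] <- 1

# Intercept-only single-season occupancy model
occ.frame <- unmarkedFrameOccu(y = batrog.det)
mOcc <- occu(~1 ~1, data = occ.frame)
backTransform(mOcc, type = "det")  # detection probability on the probability scale
```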

 

Best,

Jose

--
You received this message because you are subscribed to the Google Groups "unmarked" group.
To unsubscribe from this group and stop receiving emails from it, send an email to unmarked+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Kery Marc

Aug 12, 2017, 7:29:34 AM
to unma...@googlegroups.com
In addition, your N-mixture model may not be identifiable for your data set. Try the "series of K" criterion: repeat the analysis with, say, K = 100, 1000 and 5000, and see whether you get stable estimates. If not, you are out of luck, since then the MLEs for abundance and detection are infinity and zero.
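Marc's "series of K" check could look like this in R (a sketch assuming the `batrog.frame` object from the original post):

```r
library(unmarked)

# Refit the intercept-only N-mixture model with increasing K and compare
# the abundance estimates; if they keep growing instead of stabilizing,
# the MLE is effectively on the boundary (lambda -> infinity, p -> 0)
for (K in c(100, 1000, 5000)) {
  fit <- pcount(~1 ~1, data = batrog.frame, K = K)
  cat("K =", K, " lambda-hat =",
      backTransform(fit, type = "state")@estimate, "\n")
}
```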

Marc


Jim Baldwin

Aug 12, 2017, 4:17:39 PM
to unma...@googlegroups.com
Just to second Marc Kery's observation that the MLEs for abundance and detection are infinity and zero.

Because the counts in these data are low (0, 1, and 2), one can construct the exact log likelihood:

logL = -18 p (3 + (-3 + p) p) lambda - Log[4] + 26 Log[1 - p] + 
           22 Log[p] + 16 Log[lambda] + 4 Log[1 - (-1 + p)^3 lambda] + 
           Log[1 + (-1 + p)^3 lambda (-3 + (-1 + p)^3 lambda)]

None of the solutions for which the partial derivatives of logL with respect to lambda and p equal zero satisfy 0 < p < 1. Therefore the MLE is on the "boundary", which in this case is infinity and zero for lambda and p, respectively. (I've attached the Mathematica code for constructing the log likelihood and finding the solutions for when the partial derivatives equal zero.)

If you had a justifiable prior on lambda and p, you could do better (but maybe not much better).

Jim 

pcount example.nb

Jim Baldwin

Aug 13, 2017, 12:36:27 AM
to unma...@googlegroups.com
I've attached the algebra used in a PDF document which should be more useful.

Jim
Constructing the likelihood.pdf

Jim Baldwin

Aug 13, 2017, 2:57:33 AM
to unma...@googlegroups.com
Sorry, I had a couple of typos.  (I shouldn't do this late at night.)  Attached is the updated PDF.  (If anyone is interested, I can send a more compact Mathematica notebook.)

Jim
Constructing the likelihood.pdf

Chris Smith

Sep 22, 2017, 12:08:56 PM
to unmarked
This is Chris (the original poster of the data) again. Thanks so much for everyone's comments; they have been extremely helpful. Indeed, increasing K increased my estimates with no sign of them stabilizing. I am not very familiar with Mathematica code, but I really appreciate the effort (it appears it took a lot of time). I am starting the final analysis for these data and have 2 further questions. I have run several other species with similar detection histories, and Jose's approach of using occupancy seems to generate consistently reasonable detection probabilities. My 2 questions are:

1. Is the method and output for estimating detection probability with occupancy (occu()) the same as estimating detection probability for the abundance estimates (pcount())? That is, are both binomial, with similar estimates?
2. Is it possible to fix the detection probability within the pcount framework? (There is work from nearby that has generated detection probabilities using similar survey methods for these species, or I was considering using the detection probability from the occupancy work.) My reasoning was that I wondered whether it was just the mis-estimated, very low detection probability that was making the abundances so high.
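As far as I know, pcount does not take a fixed detection probability directly, but since the Poisson N-mixture model implies E[count] = lambda * p, a rough moment-based check with an externally supplied p is straightforward (a sketch; `p.known` is a hypothetical value, e.g. from the occupancy fit or the nearby study, and `ydata` is the count matrix):

```r
# Under the Poisson N-mixture model E[y] = lambda * p, so with a known
# (assumed) detection probability a moment estimate of abundance is
# lambda-hat = mean(y) / p
p.known <- 0.35            # hypothetical: e.g. from the occupancy analysis
lambda.check <- mean(ydata) / p.known
lambda.check               # implied birds per point under the assumed p
```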

I also ran into a strange twist on the original problem with another, similar data set (see below: Black-cheeked Antthrush). I found the estimates stabilized at 17.8 birds whether K was 100 or 10000, but the detection probability was 0.02, which still seems incredibly low for these data. I have posted the data below and attached them as well for easy import if anyone wants. I was wondering why this estimate seems to have stabilized but still puts out such a low detection probability (or maybe my general sense of what the detection probability should be is off?).
Point Occasion1 Occasion2 Occasion3
N01 0 1 0
N02 0 1 1
N03 0 1 0
N04 1 1 1
N05 1 0 0
N06 0 2 2
N07 0 0 0
N08 2 0 0
N09 1 0 1
N10 0 0 0
S01 1 0 0
S02 0 1 1
S03 1 1 0
S04 0 2 0
S05 0 0 0
S06 1 2 0
S07 0 1 1
S08 0 0 0
S09 0 0 0
ydata <- as.matrix(birddata[,c(2:4)])
UMF <- unmarkedFramePCount(y=ydata)
mNull <- pcount(~1 ~1, data= UMF, K=10000)  #or K=100
backTransform(mNull, type="state")
backTransform(mNull, type="det")

##Output
Backtransformed linear combination(s) of Abundance estimate(s)

 Estimate  SE LinComb (Intercept)
     17.8 141    2.88           1

Backtransformed linear combination(s) of Detection estimate(s)

 Estimate   SE LinComb (Intercept)
   0.0276 0.22   -3.56           1

Thanks so much for the help!
BlackCheekedAntthrush.xlsx

Jim Baldwin

Sep 23, 2017, 6:59:24 PM
to unma...@googlegroups.com
In this case your data just barely obtain a unique maximum likelihood estimate ("barely obtains" is a term I just made up). One can see this in the large standard errors for lambda and p. Also, the estimated correlation between the estimators of lambda and p is nearly -1.0 (the estimate is -0.9997579), which is not a good sign. In addition, the contour plot of the log-likelihood surface shows both the high correlation and the large standard errors. I know such data are expensive to collect, but in this case there just aren't enough to obtain estimates of probably anyone's desired precision.

Here is some R code to find the correlation and produce the contour plot.

  mNull <- pcount(~1 ~1, data= UMF, K=10000, control=list(reltol=0.0000000001))  #or K=100
  lambda.hat = backTransform(mNull, type="state")
  p.hat = backTransform(mNull, type="det")

# Find correlation of the estimators of lambda and p
  covmat = solve(mNull@opt$hessian)
  covmat[1,2]/(covmat[1,1]*covmat[2,2])^0.5

# Contour plot of log likelihood surface
  p = 0.01 + (0.1-0.01)*c(0:250)/250
  lambda = 1 + (40-1)*c(0:250)/250
  logL = function(lambda,p) {
    -(-57*lambda*p + 57*lambda*p^2 - 19*lambda*p^3 - 3*log(2) - log(4) + 
    18*log(lambda) + 5*log(1 + lambda*(1 - p)^3) + log(2 + lambda*(1 - p)^3) + 
    log(2 - lambda*(-4 - lambda*(1 - p)^3)*(1 - p)^3) + 
    log(1 - lambda*(-3 - lambda*(1 - p)^3)*(1 - p)^3) + 26*log(1 - p) + 28*log(p))
  }
  z = outer(lambda,p,logL)
  library(devEMF)
  emf("c:\\users\\jim nelia\\Desktop\\logL contours.emf")
  contour(lambda,p,z,las=1,xlab="lambda",ylab="p",main="Log likelihood",
    levels=c(51.4,52,60,70,80,90)) 
   
# Add in the location of the maximum likelihood estimates
  points(lambda.hat@estimate,p.hat@estimate,pch=16,col="red")
  dev.off()
   
For counts that are small, and where there are no continuous covariates, the log likelihood can be written explicitly and relatively compactly (i.e., no need to approximate an infinite sum). I've also attached a figure showing the contour plot.
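For readers who prefer R to Mathematica, the likelihood Jim writes out explicitly is, site by site, a sum over the latent abundance N; a direct truncated version is short (a sketch; `y` is the site-by-occasion count matrix and `K` the truncation point):

```r
# Negative log likelihood of the intercept-only Poisson N-mixture model,
# truncating the infinite sum over the latent abundance N at K
nll.nmix <- function(lambda, p, y, K = 1000) {
  ll <- 0
  for (i in 1:nrow(y)) {
    N <- max(y[i, ]):K   # latent abundance cannot be below the max count
    # sum over N of P(N | lambda) * prod_j P(y_ij | N, p)
    site.lik <- sum(dpois(N, lambda) *
                    sapply(N, function(n) prod(dbinom(y[i, ], n, p))))
    ll <- ll + log(site.lik)
  }
  -ll
}
```

Minimizing this with optim() over log(lambda) and logit(p) should reproduce pcount's estimates up to the choice of K.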

logL contours.emf