Clarification regarding K value in N-mixture pcountOpen


sreeja Rachaveelpula

Jan 2, 2026, 9:07:17 AM
to unmarked
Hello Everyone,  

I want to fit an open N-mixture model with pcountOpen to count data for a colonially nesting waterbird species, with counts as high as 4,600 individuals at some sites (screenshot of the data attached below). How should I set the 'K' value in such a case? Even if I remove the outlier site, other sites still have counts as high as 1,500, so the required K is so large that model fitting crashes. Can anyone suggest a solution to this issue, or alternative time-series models? I don't want to use occupancy models; working with abundance is important. I have 10 years of data as primary periods, with 6 months within each year as secondary periods, plus fixed site covariates, yearly site covariates, and observation covariates.
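To make the K problem concrete, here is a minimal base-R sketch (not unmarked code; the function name and parameter values are illustrative only) of the per-site N-mixture likelihood: it marginalizes the latent abundance N over all values from the largest observed count up to K, so with counts near 4,600 the summation bound, and the run time, must grow accordingly.

```r
# Per-site N-mixture marginal likelihood (illustrative sketch, base R only).
# The sum over the latent abundance N runs from max(y) up to K, so K must
# comfortably exceed the largest plausible abundance at any site.
nmix_site_lik <- function(y, lambda, p, K) {
  # y: vector of repeated counts at one site
  N <- max(y):K                        # N cannot be below the max observed count
  prior <- dpois(N, lambda)            # Poisson abundance assumption
  obs <- sapply(N, function(n) prod(dbinom(y, size = n, prob = p)))
  sum(prior * obs)
}

# Modest counts need only a modest K for the sum to converge;
# counts in the thousands would force K well above 4,600.
nmix_site_lik(c(3, 5, 4), lambda = 5, p = 0.8, K = 50)
```

For small counts the sum converges quickly, which is why increasing K beyond a point no longer changes the likelihood; with very large counts that point is computationally out of reach.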

I also considered binomial thinning of the counts, since my main aim is to understand the variables influencing the trend rather than the actual predicted abundance. Could you please let me know whether this approach is acceptable, or suggest another way to reduce the required K value?
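A base-R sketch of binomial thinning in this sense (the retention probability q = 0.1 and the toy counts are assumptions for illustration): each raw count is replaced by a Binomial(count, q) draw, shrinking the counts, and hence the K needed, by roughly a factor of q. Note that thinning rescales abundance, so intercepts change; under a Poisson assumption with a log link, covariate effects should be largely preserved in expectation, since log(q*lambda) = log(q) + X*beta.

```r
# Binomial thinning of raw counts (illustrative sketch).
set.seed(1)
thin_counts <- function(y, q) {
  # y: matrix of counts (sites x occasions); q: retention probability
  matrix(rbinom(length(y), size = y, prob = q), nrow = nrow(y))
}

# Toy counts echoing the scale described in the message (hypothetical values)
y_raw  <- matrix(c(4600, 1500, 12, 30,
                   4400, 1480, 10, 25), nrow = 4)
y_thin <- thin_counts(y_raw, q = 0.1)   # counts now roughly a tenth as large
```

Whether thinned counts remain valid input for an N-mixture model is a separate question from the computational one, since thinning alters the observation process the model assumes.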

Thanks & regards
Sreeja 

count_data_raw.jpg

Jeffrey Royle

Jan 2, 2026, 9:54:24 AM
to unma...@googlegroups.com
Dear Sreeja,
  I would highly recommend against using the N-mixture model (in any form) for this particular data set. It is evident from looking at the data that the two key assumptions (Poisson abundance and binomial sampling) are badly violated.
  Even if the assumptions were reasonable in a situation with such high counts, you would have to set K to such a huge number that the model would probably never run effectively in unmarked.
 I'm afraid you're going to have to consider alternative modeling frameworks.
regards
andy


--
*** Three hierarchical modeling email lists ***
(1) unmarked (this list): for questions specific to the R package unmarked
(2) SCR: for design and Bayesian or non-bayesian analysis of spatial capture-recapture
(3) HMecology: for everything else, especially material covered in the books by Royle & Dorazio (2008), Kéry & Schaub (2012), Kéry & Royle (2016, 2021) and Schaub & Kéry (2022)
---

Jim Baldwin

Jan 2, 2026, 12:07:35 PM
to unma...@googlegroups.com
I'm not at all disagreeing with Andy's advice (i.e., don't use "the N-mixture model (in any form) for this particular data set"), but I wanted to elaborate on Andy's second sentence: "If the assumptions were reasonable..."

If you're willing and able to write your own code (most likely in software where you can easily increase the numerical precision of the calculations, such as Mathematica or Maple), then Haines (2016), "Maximum likelihood estimation for N-mixture models," Biometrics, shows how to avoid dealing with K. (But typically one would want to use the tried-and-true code in unmarked.)

Jim


sreeja Rachaveelpula

Jan 3, 2026, 12:10:58 AM
to unmarked
Thank you for the advice, Andy. I am going to try fitting a state-space model with TMB and hope that works.