85% CRIs using ubms package?


Josh Barry

Aug 30, 2023, 1:50:10 PM
to unmarked
Dear all,

I want to extract 85% CRIs from a single-season occupancy model fitted with the package ubms (the Bayesian 'sister package' of unmarked). The model output provides 95% CRIs. I tried the confint() function, which works with unmarked, but it doesn't seem to work with ubms. Does anyone know how I can easily extract 85% CRIs with ubms?

Unmarked example:
unmarked_mod_1 <- occu(~ date ~ Year, data = data1)
confint(unmarked_mod_1, type = "state", level = 0.85)

ubms example (same model but with site-level random effect):
ubms_mod_2 <- stan_occu(~ date ~ Year + (1|Site), data = data1,
                        chains = 5, iter = 10000, cores = 2, seed = 123)

Thanks for any insights.

Kind regards -- Josh

Ken Kellner

Aug 30, 2023, 2:06:42 PM
to unma...@googlegroups.com
ubms doesn't support the confint method; I should probably add something like that.

Probably the easiest way to get what you want is simply to pull the full set of posterior samples for all the parameters and calculate the desired CI yourself. For example, to get the state submodel intercept, it would be something like

post <- extract(ubms_mod_2)      # extract() works on the ubms fit, not the unmarked one
names(post)                      # one element per parameter block
beta_state <- post$beta_state    # matrix of draws: iterations x coefficients
dim(beta_state)
int <- beta_state[, 1]           # columns in same order as output summary
quantile(int, c(0.075, 0.925))   # 85% CRI = 7.5% and 92.5% quantiles
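
If you want the 85% bounds for all the state coefficients in one step, something like this should work (a sketch using the same objects as above; rows come out in the same order as the summary):

ci85 <- t(apply(beta_state, 2, quantile, probs = c(0.075, 0.925)))  # one row per coefficient
ci85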

Ken



Josh Barry

Aug 30, 2023, 5:20:51 PM
to unmarked
Thank you Ken, simple enough!

Josh 

Marc Kery

Sep 1, 2023, 4:30:55 AM
to unmarked
Dear Josh,

just wondering: why would one want to use 85% instead of the good old 95% C(R)Is?

Thanks and best regards  --- Marc



Eduardo Silva

Sep 2, 2023, 3:07:52 PM
to unma...@googlegroups.com
Hi Marc,

I've just come across an upcoming article that suggests using an 85% CI either instead of or in addition to the 95% confidence interval. The authors argue that the 85% CI is a better way to describe uncertainty in models selected using AIC. You can find the article here:


I imagine that this will become a topic of discussion.

Best

Eduardo A. Silva-Rodriguez, Med.Vet., PhD

Facultad de Ciencias Forestales y Recursos Naturales
Universidad Austral de Chile


Marc Kery

Sep 12, 2023, 7:43:16 AM
to unma...@googlegroups.com
Dear Eduardo,

thanks for the info. Seeking agreement with model selection decisions based on AIC seems like a respectable motivation for deviating from an old custom.

Best regards  --- Marc



Rob Robinson

Sep 12, 2023, 8:28:26 AM
to unma...@googlegroups.com
Also, FWIW, all the confidence limits on our long-term trend data pages (http://data.bto.org/trends_explorer/) are 85%, since for any given pair of years the intervals fail to overlap with approx. 95% probability. I think this is a rearrangement of the argument in that paper, but it might be a consideration in other studies with multiple years (or sites)?
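
A quick simulation sketch of that logic (assuming two independent normal estimates of the same quantity with equal standard errors):

set.seed(1)
n <- 1e5
est1 <- rnorm(n); est2 <- rnorm(n)   # two estimates, true difference 0, SE = 1
z <- qnorm(1 - (1 - 0.85) / 2)       # half-width of an 85% CI in SE units
mean(abs(est1 - est2) > 2 * z)       # non-overlap rate, comes out near 0.04

So two 85% intervals that fail to overlap correspond to roughly a 5%-level comparison.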
Cheers
Rob

*************** Learn about Britain's Birds at www.bto.org/birdfacts  ******************

Dr Rob Robinson, Associate Director - Research (he/him)
Hon Reader: Univ East Anglia | Visiting Researcher: Swiss Ornithological Institute
British Trust for Ornithology, The Nunnery, Thetford, Norfolk, IP24 2PU
Ph: +44 (0)1842 750050       T: @btorobrob
E: rob.ro...@bto.org      W: www.bto.org/about-bto/our-staff/rob-robinson

======== "How can anyone be enlightened, when truth is so poorly lit" ========


Jim Baldwin

Sep 12, 2023, 2:32:06 PM
to unma...@googlegroups.com

Eduardo Silva wrote, “I imagine that this will become a topic of discussion.”  As I’m not seeing a lot of discussion, I’m going to play Devil’s Advocate in an attempt to spice things up.

  1. Any threshold for significance that is chosen without respect to the subject matter and/or objective and/or consequences of decisions is at best arbitrary.  And that includes using 0.05 for testing significance and 95% for confidence intervals.  However, 0.05 and 95% are standards and deviations from those standards should be made explicit.  There’s a saying that goes something like “We must love standards because we have so many of them.”

  2. I don’t see why choosing 85% confidence intervals for predictor coefficients should imply that 85% should also be used for predictions from the resulting equation.

  3. The referenced paper bases its results on simulations with uncorrelated predictors.  That is very atypical of any data I’ve seen.  Might the results be different if more realistic data structures were considered?  I can imagine that if two predictors are highly correlated, the top model could have both of those predictors being far from significant and yet provide adequate predictions.

  4. If one has the good fortune of not needing to use AIC (and fitting just a single model), then are we back to using 0.05?

In short, decisions to keep or toss variables should involve subject matter knowledge, study objectives, and costs of collecting variables for predictions (among many other criteria) and not just an arbitrary significance percentage.

Corrections to any of my statements are welcome and encouraged (as I’d like to adjust my inferences so I don’t keep spreading wrong ideas.)

 

Jim


John C

Sep 12, 2023, 4:59:05 PM
to unmarked
Hi all, probably muddying the discussion further with a few JMO's, but...

--I think it's reasonable to report 90 or 85% CRI or PRI because the tails of the posterior distribution can be unstable without a ton of effective samples (see the sketch after these points). IIRC, Mike Meredith had a post that described/demonstrated this very clearly on his blog (which I can't seem to find), but the underlying machinery for ubms (Stan) defaults to 90% credible and posterior predictive intervals partly for this reason.

--I don't think the authors in the linked paper were arguing that 85% *credible* (vs. confidence) intervals should be used to capture unimportant terms, and I might be careful about this. Which % interval might make most sense, if any, probably depends on the priors. With very narrow priors, one could have many CRI that don't overlap 0 but represent unimportant terms with small effect sizes. Same thing would apply if fitting a penalized likelihood and looking at the confidence intervals. FWIW, I'm pretty sure I've used an AIC argument to justify 85% CRI in previous work, so whether this is broadly confusing or not...well...it confused me.

--Jim, your point #3 makes sense to me. Maybe in practice, people using IC are more likely to get rid of very correlated predictors before model fitting than those using (e.g.) LASSO? Either way, sensitivity to predictor covariance is an interesting question. (I guess I can also imagine it being challenging to summarize cleanly in a paper.)
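
On the tail-stability point above, a minimal sketch (the normal posterior and the 400 effective draws are made-up assumptions):

set.seed(1)
reps <- replicate(1000, {
  draws <- rnorm(400)                # pretend posterior with 400 effective samples
  quantile(draws, c(0.925, 0.975))   # upper bounds of 85% and 95% intervals
})
apply(reps, 1, sd)                   # Monte Carlo error of each quantile estimate

Here the Monte Carlo error of the 97.5% quantile comes out roughly 40% larger than that of the 92.5% quantile, which is part of the motivation for narrower default intervals.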


Cheers,

John

Marc Kery

Sep 13, 2023, 1:39:24 AM
to unmarked
Dear all,

I find it stunning that the first question about whether and how to do model selection is hardly ever made explicit. It is: "What are we building our model(s) for?" Is it for:
  • exploration, simplified description, pattern searching and hypothesis generation?
  • inference about mechanisms?
  • prediction of a process to new times or places?
Tredennick et al. (2021) argue that depending on why you build a model or what you want to use it for, quite different model selection methods may be appropriate, and they may also lead to different models being selected. The same has been argued by others, including Shmueli (Stat. Sci., 2010). For instance, if you're really after mechanisms, represented by only a very small handful of models, then they suggest the 'dreaded' Null hypothesis significance testing as the main method of comparing and ultimately selecting models. In contrast, for prediction, in- or (better) out-of-sample predictive performance is naturally the key criterion. Finally, for exploration, anything goes, but the aims of such an analysis (i.e., NOT confirmation, NOT prediction) must always be openly declared.

I like the Tredennick paper very much, because it reminds us of something that perhaps most of us may feel to some degree, but hardly ever clearly think about and act accordingly.

OK, in real life the aims of model building in any given instance may overlap to some degree (which they do concede in their Fig. 1), and it may not always be so clear what the aim of a model is. However, trying to go in that direction, and first trying to get an answer to "why am I doing this?", may help make life easier in that 'black hole of statistics' (see the same paper for that quote).

Best regards  --- Marc


PS: BTW, if we're really after mechanisms, then we should FAR more often consider doing path analysis/structural equation modeling: most causal networks are just that, networks, i.e., reticulate meshes of cause and effect that point towards our response of interest. I feel we ought to represent that in our models far more commonly. This is really easy with hierarchical modeling and flexible software such as JAGS or NIMBLE. See for instance this early paper (https://esajournals.onlinelibrary.wiley.com/doi/full/10.1890/11-0258.1?utm_sq=gxexx7estg) and as a wonderful recent example this: https://onlinelibrary.wiley.com/doi/full/10.1111/gcb.16482




Quresh Latif

Sep 13, 2023, 9:08:01 AM
to unma...@googlegroups.com

Shameless self-promotion – here’s another recent paper that implements causal inference, this one with community occupancy as the primary ecological model: https://doi.org/10.1002/ecs2.4479.

 

Quresh S. Latif 
Research Scientist
Bird Conservancy of the Rockies

Phone: (970) 482-1707 ext. 15

www.birdconservancy.org

 


Jim Baldwin

Sep 19, 2023, 6:10:41 PM
to unma...@googlegroups.com
I'll stop after this.

The referenced paper gives a good (meaning easy-to-understand) example showing there's a relationship between delta AIC values and P-values, both of which are certainly used in analyses with unmarked and in lots of other analyses (I say that to justify these comments in a forum that is usually more about the mechanics of unmarked).  If one blindly uses 2 as the delta AIC threshold, that corresponds to roughly a 0.15 P-value for selecting variables.  But a threshold of 2 is completely arbitrary, so it does not justify using 0.15 as a P-value threshold.  And it certainly doesn't justify 85% confidence intervals for parameters or predictions over any other value.  If one started with a P-value of 0.05, that would correspond to a delta AIC value smaller than 2.  Again, both are arbitrary, if commonly used, standards.
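
For anyone who wants to check that correspondence, it follows from the likelihood-ratio test for nested models differing by one parameter (a sketch; the LRT statistic is chi-squared on 1 df):

# A variable lowers AIC whenever its LRT statistic exceeds 2
# (the penalty for one extra parameter); the implied test size is:
pchisq(2, df = 1, lower.tail = FALSE)   # ~0.157
# The two-sided CI level whose bounds just exclude 0 at that test size:
2 * pnorm(sqrt(2)) - 1                  # ~0.843, hence the ~85% intervals
# Conversely, a 0.05 test on 1 df corresponds to an AIC improvement of:
qchisq(0.95, df = 1) - 2                # ~1.84, i.e. smaller than 2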

So the conclusion is that there is roughly an equivalent selection P-value for every delta AIC threshold, and vice versa.  But that relationship doesn't make either one less arbitrary, unless one also deals with the consequences of right or wrong decisions about which variables to keep, with whether the resulting model is any good, and with the rationale that Marc Kery gave in his response.

Jim

