Hi all,
I was reading the tutorial on maximum likelihood estimation (MLE) with NIMBLE and wondered whether it's possible to estimate a derived quantity in that framework.
For example, I have this model:
# ---- Likelihoods ----
for(i in 1:n) {
  # Model 1 likelihood
  occurrence[i] ~ dbern(p1[i])
  logit(p1[i]) <- int_m1 + (beta_m1_year * year[i])
  occurred[i] ~ dbern(p1[i])
  # Model 2 likelihood
  count[i] ~ dpois(lambda2[i] * occurred[i])
  log(lambda2[i]) <- int_m2 + (beta_m2_year * year[i])
}
Right now, using MCMC, I can easily calculate the derived quantities below as well.
# ---- Derived quantities ----
for(t in 1:nyears){
  logit(p[t]) <- int_m1 + (beta_m1_year * years[t])
  log(lambda[t]) <- int_m2 + (beta_m2_year * years[t])
  avg_abd[t] <- p[t] * lambda[t]
}
Might this kind of derived-quantity estimation be possible with the NIMBLE MLE approach?
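To make it concrete, this is roughly what I'm picturing (just a sketch, assuming the Laplace workflow from the tutorial; I'm guessing at the object names and the exact structure of the output):

library(nimble)
# model is the nimbleModel built from the code above
laplace  <- buildLaplace(model)
claplace <- compileNimble(laplace, project = model)
fit      <- runLaplace(claplace)
# Guessing here -- I assume I can pull a named vector of MLEs out of the summary:
est <- fit$summary$params[, "estimate"]
names(est) <- rownames(fit$summary$params)

# Derived quantities as plug-in transformations of the MLEs:
p_hat       <- plogis(est["int_m1"] + est["beta_m1_year"] * years)
lambda_hat  <- exp(est["int_m2"] + est["beta_m2_year"] * years)
avg_abd_hat <- p_hat * lambda_hat

I realize that, unlike with the MCMC samples, I wouldn't get uncertainty on avg_abd for free this way; I assume I'd need something like the delta method on the MLE covariance, or a parametric bootstrap.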
This is a simple example of a much more complex model I'm working on (including random effects with hundreds of levels), which I need to run for 40 species across 13 sites (520 models), so the classic Bayesian approach would take too long. I'm also considering trying out nimbleHMC to see whether that makes the run time feasible, switching to Stan (which I haven't used), or the subsampling approach from King et al. (2022). I'm not sure which direction gives me the best chance of custom-coding the model while still getting derived quantities. Any advice would be much appreciated before I go down one of these rabbit holes!
Many thanks,
Gates