Mattia, thanks for your persistence and use of NIMBLE. I tried running your model, and have a few comments:
Note that each lambda[i] is defined identically as exp(b0), so each p.nb[i] term is also exactly the same. I'm not sure if you meant it this way, or perhaps you're working up to a more complex model? If not, the code could be simplified.
I took a look at the errors, and I think it just comes down to choices of initial values. I made a few changes:
For the initial value of the p parameter, you had "rnorm(1,0,1)", but since p is a probability, this often gave negative values for p, which led to an invalid model state. I changed the initial value for p to "p = runif(1, 0, 1)".
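To see why, here is a quick base-R check (illustrative numbers only): a standard-normal draw is negative about half the time, while runif(1, 0, 1) always lands in (0, 1):

```r
set.seed(1)  # for reproducibility of this illustration
draws <- rnorm(1e4, 0, 1)
mean(draws < 0)    # about 0.5 -- half of these inits would be invalid for p

p_inits <- runif(1e4, 0, 1)
all(p_inits > 0 & p_inits < 1)    # TRUE -- always a valid probability
```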
For the initial value of eta, you had "rnorm(1, 0, 1)", and while valid, in many cases this was "too large" a value for the L[i,t] term in your model: when eta was, say, -1, and a value of N was, say, 10, this led to L being on the order of exp(10), which was far too large, and was causing a density evaluation of -Inf for N (since N[i,t] ~ dpois(L[i,t])). So we need a more "neutral" initial value for eta. I changed the initial value simply to 0, and we can let the MCMC explore from there. If you truly want a "random" initial value for eta, then I would recommend something much closer to 0, such as "rnorm(1, 0, 0.001)".
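For a rough sense of the magnitudes (base R, with rho = 0 and the S and T terms ignored, purely illustrative):

```r
# The exponent in the L[i,t] term with eta = -1 and N[i,t-1] = 10:
N_prev <- 10
eta    <- -1
rho    <- 0
L <- exp(log(N_prev) + rho - eta * N_prev)
L    # 10 * exp(10), roughly 2.2e5 -- a huge Poisson mean
# A Poisson mean this large gives an astronomically small density for a
# small observed N; for larger N_prev the exp() overflows to Inf and the
# log-probability becomes -Inf.
dpois(10, lambda = L, log = TRUE)    # extremely negative
```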
The same problem was happening with the initial values for both S and T. Their initial values were, say for S:
S = rnorm(nrow(counts),0,pow(rtruncnorm(1,a=0,,mean=0,sd=1),-2)),
The problem is that when the rtruncnorm() function returns a small value near 0, say 0.1, then pow(0.1, -2) = 100 becomes the standard deviation for drawing the initial values of S (and likewise for T), so the initial values of S (and T) are often on the order of 100 or 200. These then also appear in the term:
L[i,t] <- step(N[i,t-1]-0.01) * exp(log(N[i,t-1]) + rho - eta*N[i,t-1] + S[i] + T[t]) + ...
and exp(S[i]) or exp(T[t]) again gives a huge number, causing the density evaluation for N to be -Inf (the same problem as before). So I changed the initial values for S and T to use smaller standard deviations, giving smaller values for S and T:
S = rnorm(nrow(counts),0,1)
T = rnorm(ncol(counts),0,1)
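The scale problem is easy to see in base R (pow(x, -2) in the BUGS/NIMBLE notation is x^(-2) in plain R); the 0.1 here stands in for a small rtruncnorm() draw:

```r
sd_draw <- 0.1   # a plausible small draw from rtruncnorm(1, a = 0, mean = 0, sd = 1)
sd_draw^(-2)     # = 100: the sd that was being used to draw the S and T inits
exp(100)         # ~2.7e43 -- what a +100 init contributes inside exp() in L[i,t]
exp(1)           # ~2.7 -- the scale of that contribution with sd = 1 inits
```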
All told, the new initial values list looks like:
inits <-list(N = (counts+1), b0 = rnorm(1,0,1),
p = runif(1, 0, 1), ##rnorm(1,0,1),
alpha = runif(1,0,50),
eta = 0, ##rnorm(1,0,1),
rho = rnorm(1,0,1),
S = rnorm(nrow(counts),0,1), ##pow(rtruncnorm(1,a=0,,mean=0,sd=1),-2)),
T = rnorm(ncol(counts),0,1), ##pow(rtruncnorm(1,a=0,,mean=0,sd=1),-2)),
sigma.S = rtruncnorm(1,a=0,,mean=0,sd=1), sigma.T = rtruncnorm(1,a=0,,mean=0,sd=1),
tau.S = pow(rtruncnorm(1,a=0,,mean=0,sd=1),-2), tau.T=pow(rtruncnorm(1,a=0,,mean=0,sd=1),-2))
And this seemed to work for me, without producing any warnings or errors.
To diagnose these problems, you can inspect the variables in the model, for example:
Rmodel$T
Rmodel$S
Rmodel$p
And, equally useful, you can look at the density evaluations (log-probabilities) of the stochastic model variables:
Rmodel$logProb_y
Rmodel$logProb_N
That way you can figure out where the -Infs or NaNs are coming from.
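For instance, here is a base-R pattern for locating the offending entries (the lp vector is just a stand-in for something like Rmodel$logProb_N):

```r
lp <- c(-1.2, -Inf, -0.5, NaN)   # stand-in for Rmodel$logProb_N
which(!is.finite(lp))            # -> 2 and 4: the nodes with invalid densities
# Rmodel$calculate() returns the sum of all log-probabilities in the model;
# if that sum is -Inf or NA, at least one node has an invalid density.
```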
Hope this helps. Keep at it.
Daniel