Hi Mattia,
Thanks for this. I see my suggestion was a bit hasty. I should have said theta0 instead of b0, and then I think we are on the same page.
Using a simpler case: if one has rho[1:n] following dcar with zero_mean=TRUE, then often one also has something like:
eta[i] <- intercept + <other linear terms> + rho[i]
# followed by use of a function of eta[i] (perhaps through a link function or other step) as a parameter of a data distribution.
In that case, since mean(rho[1:n]) will be held at 0, the intercept is also identifiable.
If one has the same setup but uses zero_mean=FALSE, then adding a constant c to intercept and subtracting the same c from every rho[i] gives an equivalent model in terms of downstream likelihoods (although perhaps with different prior probabilities). For this reason, both intercept and the rho[i] may look poorly mixed and wander randomly, while intercept + mean(rho[1:n]) remains well identified: as intercept wanders to higher values, mean(rho[1:n]) wanders to correspondingly lower values.
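To make that equivalence concrete, here is a tiny numeric sketch (in Python just for illustration; the numbers are made up). It shows that shifting intercept by c and every rho[i] by -c leaves each eta[i], and hence the likelihood, unchanged:

```python
# Hypothetical values, only to illustrate the shift invariance.
intercept = 1.5
rho = [0.3, -0.7, 0.4, 0.1]
eta = [intercept + r for r in rho]

c = 2.0  # any constant
intercept_shifted = intercept + c
rho_shifted = [r - c for r in rho]
eta_shifted = [intercept_shifted + r for r in rho_shifted]

# eta is unchanged, so the data distribution sees an identical model.
assert all(abs(a - b) < 1e-12 for a, b in zip(eta, eta_shifted))
```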
In such a situation, it is actually reasonable to run the MCMC anyway and then post-process the samples to obtain the posterior of interest. That is, for each posterior sample, set intercept_prime = intercept + mean(rho[1:n]) and, for each i, rho_prime[i] = rho[i] - mean(rho[1:n]), where the "prime" names are the identifiable quantities.
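A minimal sketch of that post-processing (in Python for illustration; you would likely do the same thing in R on the MCMC output matrix). Here `samples` and `postprocess` are hypothetical names; each sample is assumed to hold the drawn intercept and rho vector:

```python
def postprocess(samples):
    """Recenter each MCMC draw so rho_prime has mean zero
    and intercept_prime absorbs the wandering mean of rho."""
    out = []
    for s in samples:
        m = sum(s["rho"]) / len(s["rho"])  # mean(rho[1:n]) for this draw
        out.append({
            "intercept_prime": s["intercept"] + m,
            "rho_prime": [r - m for r in s["rho"]],
        })
    return out
```

After this step, rho_prime averages to zero within every draw, and intercept_prime is the well-identified combination.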
In your situation, I was trying to apply the same logic, but I took b0 to be the relevant intercept; I now realize it is theta0, and theta1 is also involved, which makes the problem different in another way. Suppose you just had the theta0 part (as in the first code you sent). Then I think adding a constant c to theta0 and subtracting the same c from every rho[i] would give an equivalent model for the downstream likelihoods (e.g., for y[i]). So you could post-process by setting theta0_prime = theta0 + mean(rho[1:n]) and rho_prime[i] = rho[i] - mean(rho[1:n]).
Curiously, you would then also need a similar step for theta1, using 2*mean(rho[1:n]) instead.
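Extending the earlier sketch to your case might look like the following (Python for illustration; `samples` and `postprocess_theta` are hypothetical names, and the factor of 2 for theta1 is the one I mention above, which depends on exactly how rho enters your model):

```python
def postprocess_theta(samples):
    """Recenter each MCMC draw: theta0 absorbs mean(rho),
    theta1 absorbs 2*mean(rho), and rho is centered to mean zero."""
    out = []
    for s in samples:
        m = sum(s["rho"]) / len(s["rho"])  # mean(rho[1:n]) for this draw
        out.append({
            "theta0_prime": s["theta0"] + m,
            "theta1_prime": s["theta1"] + 2 * m,
            "rho_prime": [r - m for r in s["rho"]],
        })
    return out
```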
Also, I guess what you really need is mean(rho[cell[1:n]]), i.e., the average of rho over the nested indices from cell[], since I don't know whether every cell is represented exactly once. That won't work directly in model code, but you could do it in a nimbleFunction or after the MCMC (if you are monitoring rho).
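If you do it after the MCMC, the computation is just an average over the nested indices. A sketch (Python for illustration; `mean_rho_over_cells` is a hypothetical name, and cell is assumed to hold 1-based indices as in the model code):

```python
def mean_rho_over_cells(rho, cell):
    """Average rho over the nested indices cell[1:n], so cells that
    appear more than once (or not at all) are weighted by how often
    they occur among the observations."""
    return sum(rho[c - 1] for c in cell) / len(cell)  # 1-based -> 0-based
```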
If I follow the last code you sent, it might be doing essentially what I'm describing here. The code that makes the figures seems to omit the actual rho[i] values, which I think you intended for prediction purposes. If so, that matches treating theta0 + mean(rho[1:n]) as the equivalent of the theta0 you would obtain if rho[1:n] were kept centered.
Perry