On the same model as my previous post, some NIMBLE code generates chains that converge to different points depending on the initial values of certain parameters, even though, as I understand it, these parameters are updated first from their full conditional posterior distribution, so their initialization should not change anything:
MODEL:
I'm working on CMR (capture-mark-recapture) with a latent multinomial model dealing with misidentifications, where latent histories are filled with 0 (not captured), 1 (captured and correctly identified) and 2 (captured and misidentified), and the vector x (which counts the number of times each latent history occurs) follows a multinomial distribution.
The parameters are the capture probability "p" and the identification probability "a".
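As a toy illustration of this coding (a single capture occasion, and made-up values of p, a and N; my real model has more structure):

```python
import random

random.seed(0)
p, a = 0.6, 0.8   # capture and correct-identification probabilities (made up)
N = 10            # number of animals

histories = []
for _ in range(N):
    if random.random() > p:
        histories.append(0)   # not captured
    elif random.random() < a:
        histories.append(1)   # captured, correctly identified
    else:
        histories.append(2)   # captured, misidentified

# x counts how many animals fall in each latent-history cell;
# marginally, x ~ Multinomial(N, (1 - p, p*a, p*(1 - a)))
x = [histories.count(k) for k in range(3)]
```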
METROPOLIS ALGORITHM:
1) update p and a (sample them from their posteriors conditional on x)
-> That step does not depend on the initial values of p and a
2.1) sample x_prop
2.2) calculate the Metropolis ratio r = f(x_prop | p, a) / f(x | p, a)
2.3) accept x_prop with probability min(1, r)
-> That step depends only on the values of p and a that were generated in step 1, not on their initialization, right?
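To make the question concrete, here is a minimal, self-contained Python sketch of the two-block sampler I have in mind (not my actual NIMBLE code: the Beta(1, 1) priors, the single-occasion cell probabilities, and the one-animal-move proposal on x are all simplifying assumptions for illustration). The Metropolis step uses the p and a just drawn in step 1, so after the first iteration nothing should reference the initial values:

```python
import math
import random

random.seed(1)

def cell_probs(p, a):
    # Latent-history cell probabilities for one capture occasion:
    # 0 = not captured, 1 = captured & correctly identified,
    # 2 = captured & misidentified
    return [1.0 - p, p * a, p * (1.0 - a)]

def log_multinom_kernel(x, probs):
    # Multinomial log-likelihood without the combinatorial constant
    return sum(xi * math.log(pr) for xi, pr in zip(x, probs))

def sampler(n_iter, p_init, a_init, x_init):
    p, a = p_init, a_init
    x = list(x_init)
    trace = []
    for _ in range(n_iter):
        # Step 1: draw p and a from their full conditionals given x
        # (Beta(1, 1) priors assumed, so these are conjugate Beta draws)
        p = random.betavariate(1 + x[1] + x[2], 1 + x[0])
        a = random.betavariate(1 + x[1], 1 + x[2])
        # Step 2: Metropolis update of x -- propose moving one animal
        # between the "correct" and "misidentified" cells (symmetric proposal)
        i, j = random.sample([1, 2], 2)
        if x[i] > 0:
            x_prop = list(x)
            x_prop[i] -= 1
            x_prop[j] += 1
            probs = cell_probs(p, a)
            log_r = (log_multinom_kernel(x_prop, probs)
                     - log_multinom_kernel(x, probs)
                     # ratio of the multinomial coefficients
                     + math.log(x[i]) - math.log(x[j] + 1))
            if math.log(random.random()) < log_r:
                x = x_prop
        trace.append((p, a))
    return trace
```

In this sketch, the values of p_init and a_init are overwritten in the very first iteration, which is why I would expect the chains to be insensitive to them.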
So my question is: how is it possible that when I change the initial values of p and a that I give to NIMBLE, the chains converge to different distributions? (The initialization of x does not change anything: all chains converge to the same posterior for identical initializations of p and a.)
Is there something I have misunderstood about the initializations or the values used in the algorithm (for example, the ratio r being calculated with the values from the previous iteration rather than the current values of the model)?
Thank you,
Best regards,
Rémi