Hi guys,
I am trying to use multivariate normal (dmnorm) sampling to first draw X.mod from the given space (mean = muf; cov = pf) and then build a likelihood against the observations (y.censored). For the nodes with no observations, we use an H vector so that the likelihood is only evaluated for the observed nodes. Currently, pf is just a diagonal matrix, with each entry giving the variance of one node. However, as the length of the pf diagonal increases (from 4 to 156), the unobserved variables within X.mod become unrealistically constrained (their variance drops from about 110 to 10) and mix poorly, which should not happen because there is no covariance among those nodes. Interestingly, when I replace the dmnorm distribution with sequential univariate dnorm distributions in a for loop (see the commented-out lines in the model below), the issue goes away, which makes me wonder whether there is something I should look into in the multivariate normal sampling process. The model I am using is as follows:
nimble.model <- nimbleCode({
  ## X model
  ## Univariate alternative that works well:
  # for (i in 1:N) {
  #   X.mod[i] ~ dnorm(mean = muf[i], sd = sqrt(pf[i, i]))
  # }
  X.mod[1:N] ~ dmnorm(mean = muf[1:N], cov = pf[1:N, 1:N])
  ## H maps the observed state variables into the likelihood
  for (i in 1:nH) {
    tmpX[i] <- X.mod[H[i]]
    Xs[i]   <- tmpX[i]
  }
  ## add process error to the X model, but just for the state variables
  ## that we have data for; H knows which ones
  X[1:YN] <- Xs[1:YN]
  ## Likelihood
  y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN, 1:YN])
})
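For what it's worth, here is a quick base-R check (using hypothetical dimensions and variances, not my actual muf/pf inputs) that a diagonal-covariance multivariate normal factorizes into independent univariate normals, so the two specifications above target exactly the same distribution and the discrepancy should be coming from the MCMC sampling rather than the model itself:

set.seed(1)
N    <- 156                       # hypothetical state dimension
mu   <- rep(0, N)
vars <- runif(N, 50, 150)         # hypothetical diagonal of pf
Sigma <- diag(vars)

## Draw from the joint MVN via the (upper-triangular) Cholesky factor:
## if Sigma = t(R) %*% R, then Z %*% R has covariance Sigma.
n <- 2e4
Z    <- matrix(rnorm(n * N), n, N)
X_mv <- sweep(Z %*% chol(Sigma), 2, mu, "+")

## Draw the same marginals with independent univariate normals
X_ind <- sapply(seq_len(N), function(i) rnorm(n, mu[i], sqrt(vars[i])))

## Marginal variances agree (up to Monte Carlo error) regardless of N
rel_err_mv  <- max(abs(apply(X_mv,  2, var) - vars) / vars)
rel_err_ind <- max(abs(apply(X_ind, 2, var) - vars) / vars)
c(rel_err_mv, rel_err_ind)        # both small, e.g. below 0.1

Since the targets match, I suspect the difference lies in how the sampler treats the single 156-dimensional X.mod node versus 156 scalar nodes, but I would appreciate confirmation.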
The figures below show the transition as the diagonal length increases:
I really appreciate your help!
Best,
Dongchen