Dear Keith,
I'm not an author of nimble, but perhaps this can help...
In your model, response[n, ] is a length-9 vector holding three realisations of a length-3 multivariate normal (mvnorm) random variable.
getLogProb called on any element of a given multivariate (length-3) sub-vector returns the same value, namely the joint log-probability of the whole sub-vector. e.g.
getLogProb(cmodel,"response[1,1]") # -3.034255
getLogProb(cmodel,"response[1,2]") # -3.034255
getLogProb(cmodel,"response[1,3]") # -3.034255
getLogProb(cmodel,"response[1,4]") # -4.096456
getLogProb(cmodel,"response[1,5]") # -4.096456
getLogProb(cmodel,"response[1,6]") # -4.096456
getLogProb(cmodel,"response[1,7]") # -4.414663
getLogProb(cmodel,"response[1,8]") # -4.414663
getLogProb(cmodel,"response[1,9]") # -4.414663
getLogProb(cmodel) # -1372.911
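You can see why the three scalar queries agree: the scalar element names all resolve to the same declared multivariate node. A hedged sketch, assuming the cmodel object from your script (expandNodeNames is a real nimble model method; the exact node name returned depends on how response was declared in your model):

```r
## Each scalar element maps back to one declared mvnorm node,
## so getLogProb returns the joint log-probability of that node.
cmodel$expandNodeNames("response[1, 1]")
cmodel$expandNodeNames("response[1, 2]")
## Both calls should return the same single node name
## (something like "response[1, 1:3]", but check on your model).
```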
Now let's compare to the values of logProb returned by the MCMC. I use "#" to indicate the output on my machine.
colnames(logProb)[c(1,101,201)] # "logProb_response[1, 1]" "logProb_response[1, 2]" "logProb_response[1, 3]"
tail(logProb, n=1)[c(1,101,201)] # -3.034255 0.000000 0.000000
So we have the same value as above in position 1, and zeros in the following two columns.
colnames(logProb)[300+c(1,101,201)] # "logProb_response[1, 4]" "logProb_response[1, 5]" "logProb_response[1, 6]"
tail(logProb, n=1)[300+c(1,101,201)] # -4.096456 0.000000 0.000000
Again, the same value as above in position 1, and zeros in the following two columns.
colnames(logProb)[600+c(1,101,201)] # "logProb_response[1, 7]" NA NA
tail(logProb, n=1)[600+c(1,101,201)] # -4.414663 NA NA
Again, the same value as above, but here nimble has truncated the trailing columns of zeros, so indexing past the last column returns NA.
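If you want to confirm the truncation, you can inspect the monitored column names directly. A minimal sketch, assuming the logProb matrix of MCMC output from your script (grep and colnames are base R):

```r
## Count the logProb_response columns the MCMC actually stored;
## trailing all-zero columns for the repeated elements are dropped.
lp_cols <- grep("^logProb_response", colnames(logProb), value = TRUE)
length(lp_cols)          # fewer columns than elements of response
tail(lp_cols, n = 3)     # the last column names that were kept
```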
sum(tail(logProb, n=1)) # -1372.86
Close to the value above; presumably computational rounding explains the small difference.
All the best
David