"Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
stan::prob::multi_normal_log(N4stan5agrad3varE): LDLT_Factor of covariance parameterunderlying matrix is not positive definite. LDLT_Factor of covariance parameterlast conditional variance is 0:0."
I need some help fixing this issue. Please find attached my Stan code and the dataset.
PS: I am using CmdStan v2.5
Thanks,
Sarthak
However, I get an error message that says, "Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
stan::prob::multi_normal_log(N4stan5agrad3varE): LDLT_Factor of covariance parameter underlying matrix is not positive definite. LDLT_Factor of covariance parameter last conditional variance is 0.0."
parameters {
  simplex[K] theta;
  cholesky_factor_corr[nDims] L;
  vector[nDims] z_mu[K];
  vector[nDims] z_beta;
  vector[nDims] sigma_diag;
}
transformed parameters {
  vector[nDims] mu[K];
  vector[nDims] beta;
  matrix[nDims, nDims] sigma_L;
  sigma_L <- diag_pre_multiply(sigma_diag, L);
  beta <- Avg_mean + sigma_L * z_beta;
  for (k in 1:K)
    mu[k] <- beta + sigma_L * z_mu[k];
}
model {
  real ps[K];
  L ~ lkj_corr_cholesky(2);
  sigma_diag[1] ~ normal(201, 5);
  sigma_diag[2] ~ normal(-31, 3);
  sigma_diag[3] ~ normal(15, 4);
  sigma_diag[4] ~ normal(14800, 8100);
  sigma_diag[5] ~ normal(60, 35);
  z_beta ~ normal(0, 1);  # implies beta ~ multi_normal_cholesky(Avg_mean, sigma_L)
  for (k in 1:K) {
    z_mu[k] ~ normal(0, 1);  # implies mu[k] ~ multi_normal_cholesky(beta, sigma_L)
  }
  for (n in 1:N) {
    for (k in 1:K) {
      ps[k] <- log(theta[k]) + multi_normal_cholesky_log(y[n], mu[k], sigma_L);
    }
    increment_log_prob(log_sum_exp(ps));
  }
}
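[Editor's note: a minimal sketch of one plausible cause of the "not positive definite" message, not something confirmed in this thread. As declared above, sigma_diag has no lower bound, and priors such as normal(-31, 3) concentrate on negative values, so diag_pre_multiply(sigma_diag, L) can have a non-positive diagonal and thus fail to be a valid Cholesky factor. The usual remedy is to constrain the scales:]

```stan
parameters {
  // Hypothetical fix (assumption, not from the thread): a lower bound
  // keeps every scale positive, so sigma_L = diag_pre_multiply(sigma_diag, L)
  // always has a positive diagonal, i.e. is a valid Cholesky factor.
  vector<lower=0>[nDims] sigma_diag;
}
```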
However, your code can be made more efficient by following these two rules:
1. Don't use any variant of multi_normal() when the left-hand side is NOT data; use the stochastic representation of the multivariate normal distribution instead.
2. Don't use multi_normal() when you already have a Cholesky factor and thus can use multi_normal_cholesky().
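[Editor's note: a minimal sketch of what the two rules look like in Stan; `mu`, `L_Sigma`, and `z` are placeholder names, not from the attached model.]

```stan
// Rule 2: when the left-hand side IS data and you already have a
// Cholesky factor L_Sigma of the covariance, prefer the _cholesky variant:
//   y[n] ~ multi_normal_cholesky(mu, L_Sigma);
//
// Rule 1: when the left-hand side is a parameter, use the stochastic
// (non-centered) representation instead of multi_normal():
parameters {
  vector[nDims] z;          // standard normal innovations
}
transformed parameters {
  vector[nDims] beta;
  beta <- mu + L_Sigma * z; // implies beta ~ multi_normal_cholesky(mu, L_Sigma)
}
model {
  z ~ normal(0, 1);
}
```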
Hi Ben,
Thanks a lot for helping me with those rules. I understand them now, and they make sense in terms of numerical stability. However, when I execute the code (for sampling), it says something like, "Gradient evaluation took 0.112778 seconds. 1000 transitions using 10 leapfrog steps per transition would take 1127.78 seconds. Adjust your expectations accordingly!"
And then the execution goes up to 300/2000 iterations and stops (I gave it about 30-45 minutes). I was under the impression that the entire run would take around 1127.78*2 seconds (or close to 40 minutes), but it seems I am incorrect in this case. Could you please help me in "adjusting my expectations"?
"# Elapsed Time: 251.072 seconds (Warm-up), 245.781 seconds (Sampling), 496.853 seconds (Total)"
whereas the last time, it got stuck at 300/2000 iterations for about 30 minutes(before I manually terminated the code). Any suggestions ?
Regards,
Sarthak
Well, this is strange. I recompiled the code and started it again, and this time the execution was over in about 10 minutes: "# Elapsed Time: 251.072 seconds (Warm-up), 245.781 seconds (Sampling), 496.853 seconds (Total)",
whereas last time it got stuck at 300/2000 iterations for about 30 minutes (before I manually terminated the run). Any suggestions?