Hello all,
I found an interesting behavior that I have not been able to understand.
In the gravity example I replaced
QUESO::GslVector paramInitials(paramSpace.zeroVector());
priorRv.realizer().realization(paramInitials);
QUESO::GslMatrix proposalCovMatrix(paramSpace.zeroVector());
proposalCovMatrix(0,0) = std::pow(std::abs(paramInitials[0]) / 20.0, 2.0);
ip.solveWithBayesMetropolisHastings(NULL, paramInitials, &proposalCovMatrix);
with
ip.solveWithBayesMLSampling();
In the input file I deleted all the mh_ options and added the ML options:
ip_ml_default_rawChain_size = 100
ip_ml_default_rawChain_dataOutputFileName = outputData/cal_rawChain_ml
ip_ml_last_rawChain_size = 1000
If I use ip_ml_default_rawChain_size = 100 and fp_mc_qseq_size = 100, the code works perfectly with any combination of processors and subenvironments. However, if I use 4 processors and 2 subenvironments with fp_mc_qseq_size > 100, the code gets stuck after 100 samples of the Monte Carlo forward problem.
If I change to 2 processors and 2 subenvironments, I don't have any issue.
If I use ip_ml_default_rawChain_size = 1000, I don't have any issue with any processor/subenvironment combination or any value of fp_mc_qseq_size.
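For context, the relevant part of my input file for the failing case looks roughly like this (env_numSubEnvironments is the standard QUESO environment option; the fp_mc_qseq_size value of 500 is just one example of a size above 100):

```
# Environment: 4 MPI processes split into 2 subenvironments (the failing case)
env_numSubEnvironments = 2

# Multilevel sampler options for the inverse problem
ip_ml_default_rawChain_size = 100
ip_ml_default_rawChain_dataOutputFileName = outputData/cal_rawChain_ml
ip_ml_last_rawChain_size = 1000

# Forward problem: any Monte Carlo sequence longer than 100 triggers the hang
fp_mc_qseq_size = 500
```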
It may be nothing important, as the number of samples is too small. I just ran into this while testing the code and wondered if someone knows why I get this behavior.
I am using QUESO 0.5.6.0.
Thanks
Ernesto