Let's keep queso-users in the cc list with a 'reply-all' so others can
chime in if they want to.
On Wed, 8 Feb 2017, at 20:22, Han Lu wrote:
> Hi Damon,
>
> Thank you for your reply, and sorry for the late reply.
> 1. I suppose the cost function you mentioned is the Rosenbrock
> function. What do you mean by optimizing it? I just used this function:
> [image: Inline image 1]
I presume you mean that this f is your negative log-likelihood? If so,
then your likelihood is proportional to exp(-f). This gets multiplied by
the prior distribution to give the posterior (up to a constant of
proportionality). The cost function is -log(posterior). QUESO can
optimise this function and use the result of the optimisation to start
the Markov chain Monte Carlo process. See here for an example of
optimising before sampling:
https://github.com/libqueso/queso/blob/hotfix-0.56.2/test/test_optimizer/test_seedwithmap_fd.C#L73
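
In case it helps, here's a rough sketch in plain C++ (not QUESO's API)
of what that cost function is for a Gaussian prior with a diagonal
covariance. The 5-dimensional Rosenbrock form and the prior mean and
variance values below are just placeholders; swap in whatever you're
actually using.

#include <cmath>
#include <cstdio>
#include <vector>

// N-dimensional Rosenbrock function (the usual sum form; adjust if your
// f differs). With likelihood proportional to exp(-f), this is the
// negative log-likelihood up to an additive constant.
double rosenbrock(const std::vector<double>& x)
{
  double f = 0.0;
  for (std::size_t i = 0; i + 1 < x.size(); ++i) {
    f += 100.0 * std::pow(x[i+1] - x[i]*x[i], 2) + std::pow(1.0 - x[i], 2);
  }
  return f;
}

// cost(x) = -log(posterior(x)) up to an additive constant, for an
// independent Gaussian prior with mean mu and variances sigma2 along
// the diagonal of the covariance matrix.
double cost(const std::vector<double>& x,
            const std::vector<double>& mu,
            const std::vector<double>& sigma2)
{
  double c = rosenbrock(x);                                 // -log(likelihood)
  for (std::size_t i = 0; i < x.size(); ++i) {
    c += 0.5 * (x[i] - mu[i]) * (x[i] - mu[i]) / sigma2[i]; // -log(prior)
  }
  return c;
}

int main()
{
  std::vector<double> x(5, 5.0);       // your starting point [5 5 5 5 5]
  std::vector<double> mu(5, 0.0);      // placeholder prior mean
  std::vector<double> sigma2(5, 4.0);  // placeholder prior variances
  std::printf("cost at start = %g\n", cost(x, mu, sigma2));
  return 0;
}

The point is just that the function QUESO would optimise is the
Rosenbrock term plus the quadratic prior term, not the Rosenbrock
function on its own.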
> 2. I started the chain with initial value [5 5 5 5 5] for no particular
> reason; it could be other values.
Yes, that's true. The further from stationarity you start, the longer
it'll take to converge; I was just curious how far away from
stationarity you started. Note that if you're optimising before
sampling, that point isn't used to initialise the chain; it's used to
initialise the optimiser.
> 3. Did you mean the prior covariance matrix? I set it to be a diagonal
> matrix, and the elements on the diagonal are just some values I
> randomly chose (sorry, I don't have the code currently). I am also
> confused about how to set a proper covariance matrix, because I found
> that the result was obviously affected by the covariance matrix. The
> question here is that with the same parameters, I can't get all the
> chains correct when running them simultaneously, but I can get a good
> result with one chain each time I run it.
If your prior is Gaussian, then yes, I'm talking about the prior
covariance matrix (and prior mean). Your prior could have some other
distribution, though.
Yes, different prior covariance matrices (and hence different priors)
yield different posteriors, so you'll get different results. In fact,
assuming your negative log-likelihood is the Rosenbrock function, any
prior other than the uniform distribution will give you a cost function
that is not the Rosenbrock function. And if you give the prior
covariance matrix small entries in some places along the diagonal, your
posterior may end up looking basically Gaussian in those directions.
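Concretely, for a Gaussian prior with mean mu and covariance C (and f
your negative log-likelihood), the cost function is, up to an additive
constant,

  cost(x) = f(x) + (1/2) (x - mu)^T C^{-1} (x - mu)

so small diagonal entries in C make the quadratic prior term dominate f
in those directions, which is why the posterior ends up looking roughly
Gaussian there.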
To bring it back to your original question: two chains versus one
shouldn't affect the converged result. How many samples are you running
for? If you start the chain at [1 1 1 1 1], does it stay there? If your
prior is a Gaussian centred away from [1 1 1 1 1], then the converged
result will *not* be [1 1 1 1 1]; it'll be whatever the posterior mode
is (there could be more than one).
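To see why in one dimension (writing m and s^2 for the prior mean and
variance, and f for your negative log-likelihood, with its minimum at
x = 1):

  cost(x)  = f(x) + (x - m)^2 / (2 s^2)
  cost'(x) = f'(x) + (x - m) / s^2

At x = 1 the likelihood term contributes nothing (f'(1) = 0), but the
prior term contributes (1 - m)/s^2, which is nonzero unless m = 1, so
the posterior mode is pulled away from 1 towards the prior mean. The
smaller s^2 is, the stronger the pull.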
>
> Thank you very much!
> Good night,
> Han
>
> Best regards,
> Han