Hi Maude,
Unfortunately, there's no one setting that will work for every data set, just like there's no one run length for your BEAST run that will give you adequate ESS values for every data set (unless it's billions of iterations).
So try something reasonable based on the time it would take to compute.
If you can, for example, do 5 million iterations per hour, then run 50 path steps of 1 million iterations each: that's 50 million iterations in total, so you should have a first result after approximately 10 hours.
To check whether your result is stable, i.e. has converged, run a longer analysis, e.g. 100 path steps of 1 million iterations each.
If the two results are close to one another, you can stop and use that estimate as your final one.
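The stability check above can be sketched in a few lines; the estimates and the tolerance below are made-up numbers for illustration, not BEAST output:

```python
# Hypothetical log marginal likelihood estimates from the two runs
# (50 path steps vs. 100 path steps); values are fabricated.
estimate_50_steps = -4510.3
estimate_100_steps = -4511.1

# If the two estimates differ by less than a few log units, running
# even longer is unlikely to change your conclusion. The tolerance is
# an assumption here; what counts as "close" depends on how large the
# (log) Bayes factors you want to interpret are.
tolerance = 2.0
stable = abs(estimate_50_steps - estimate_100_steps) < tolerance
print(stable)
```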
A standard BEAST run gathers samples from a power posterior run with power = 1.
To perform marginal likelihood estimation, we need to gather samples from a whole series of power posteriors, e.g. power = 1, 0.99, 0.98, ..., 0.01, 0.0 (here the powers are uniformly spaced between 1 and 0, just to make my point).
So while you may have a large number of samples from one power posterior, you still need samples from the other 100 power posteriors, which is why your samples from your BEAST run alone will be insufficient.
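To see why every power is needed: in the path sampling approach, the log marginal likelihood is the integral of the expected log-likelihood over the power, so you need an expectation (and hence samples) at each power. A minimal sketch with fabricated numbers (the expectations below are assumptions for illustration; in a real analysis they come from the samples collected at each path step):

```python
import numpy as np

# The example schedule from above: powers 1, 0.99, ..., 0.01, 0.0.
powers = np.linspace(1.0, 0.0, 101)

# Fabricated stand-in for E[log likelihood] under each power posterior;
# in practice each value is averaged from that step's samples.
expected_loglik = -100.0 + 20.0 * powers

# Path sampling: integrate the expected log-likelihood over the power,
# here with the trapezoid rule (in ascending order of power).
x = powers[::-1]
y = expected_loglik[::-1]
log_ml = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
print(log_ml)
```

Note that only the first entry (power = 1) corresponds to a standard BEAST run; the other 100 expectations have no counterpart in your regular analysis, which is exactly the missing information.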
There are no real shortcuts I'm afraid, although we are continuing to work on new methods to make the computations less demanding and hence faster.
Best regards,
Guy
On Wednesday, 17 June 2015 at 12:25:45 UTC+2, Mj wrote: