JSON does not allow non-finite numbers error, BEAST 2.2.1


Justin

Apr 7, 2015, 6:10:10 AM
to beast...@googlegroups.com
I tried running an XML on two different computers and got the same error on both, shortly after the starting likelihood was reported:

Start likelihood: -80425.53335934287

         Sample      posterior ESS(posterior)     likelihood          prior

org.json.JSONException: JSON does not allow non-finite numbers.
    at org.json.JSONObject.testValidity(Unknown Source)
    at org.json.JSONObject.numberToString(Unknown Source)
    at org.json.JSONObject.valueToString(Unknown Source)
    at org.json.JSONWriter.value(Unknown Source)
    at org.json.JSONWriter.value(Unknown Source)
    at beast.core.Operator.storeToFile(Unknown Source)
    at beast.core.OperatorSchedule.storeToFile(Unknown Source)
    at beast.core.MCMC.doLoop(Unknown Source)
    at beast.core.MCMC.run(Unknown Source)
    at beast.app.BeastMCMC.run(Unknown Source)
    at beast.app.beastapp.BeastMain.<init>(Unknown Source)
    at beast.app.beastapp.BeastMain.main(Unknown Source)
    at beast.app.beastapp.BeastLauncher.main(Unknown Source)
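The exception is thrown by the org.json library, which refuses to serialize non-finite numbers: strict JSON has no literal for NaN or Infinity, and here BEAST hits one while writing its operator state to file. As a stand-in illustration (using Python's stdlib json module rather than org.json, and a hypothetical key name), the same restriction looks like this:

```python
import json
import math

# Strict JSON has no literal for NaN or Infinity.  Python's json module
# accepts them by default (a nonstandard extension), but raises the
# analogous error when allow_nan=False -- mirroring org.json's
# "JSON does not allow non-finite numbers."
state = {"acceptanceRate": math.nan}  # hypothetical operator statistic

try:
    json.dumps(state, allow_nan=False)
except ValueError as err:
    print("serialization failed:", err)
```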


I've attached the XML and I'm not sure where I should start looking for the problem.  To summarize the run, I have 3 partitions of mtDNA.  Tree and clock are linked across partitions.  I have one dated tip using aDNA.  I'm estimating GTR for sites model, I'm estimating a strict clock, and I selected constant population size. I haven't messed with any priors except the clock, making it a gamma distribution.

Any insight? Much appreciated.




[attachment: Beast_4515.xml]

Remco Bouckaert

Apr 7, 2015, 3:00:13 PM
to beast...@googlegroups.com
Hi Justin,

One thing I noticed in the XML file is that a large pre-burnin is specified. When I set the pre-burnin to zero, it seems to run fine (for the first million samples at least), but with a large pre-burnin I get the same error.
I have logged an issue to fix this bug.
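For reference, the pre-burnin is set on the MCMC run element of the BEAST 2 XML; a sketch of the workaround (the chainLength value here is illustrative, not taken from the attached file) is:

```xml
<!-- Workaround sketch: set preBurnin to zero on the run element -->
<run id="mcmc" spec="beast.core.MCMC" chainLength="20000000" preBurnin="0">
    ...
</run>
```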

Cheers, 
Remco


Justin

Apr 9, 2015, 3:00:53 PM
to beast...@googlegroups.com
Hi!

Thanks for the suggestion and 'fix' - you are correct: without the pre-burnin, it runs fine. In fact, it runs fine up to ~100,000 unlogged samples, but above that it bites the dust. I wouldn't have even thought to look there, so kudos.

I'm trying to duplicate an earlier analysis from the original BEAST - should I just do the sample removal with LogCombiner/TreeAnnotator after the fact? So if the original analysis used 20,000,000 generations with a 6,000,000 burn-in, do I run BEAST 2 for 26,000,000 generations and remove those samples later? If so, what exactly is the pre-burnin in BEAST 2 doing that's different?

Justin

Remco Bouckaert

Apr 9, 2015, 3:44:54 PM
to beast...@googlegroups.com
Hi Justin,

Just confirming that the pre-burnin is indeed the cause of the problem. The bug is fixed in the development code, so the fix will be available in the next release.

Removing the burn-in in Tracer, LogCombiner, or TreeAnnotator is indeed what you should do. It may not be necessary to remove all 6 million states -- a look at the traces in Tracer should give you a good idea of when you are well through burn-in, and this may be much more or less than 6 million states. It is a good idea to inspect your traces in Tracer anyway, to detect anomalies in the MCMC runs.

When you specify a pre-burnin, BEAST just runs the chain without logging any results, so it effectively removes a pre-specified amount of burn-in from the logs.
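In other words (a minimal Python sketch, not BEAST's actual code; the function and parameter names are hypothetical), pre-burnin just advances the chain without writing log lines, which for a given sequence of states is equivalent to logging everything and discarding the early samples afterwards:

```python
def run_chain(step, chain_length, log_every, pre_burnin=0):
    """Sketch of an MCMC loop: advance `pre_burnin` states silently,
    then log every `log_every`-th state of the remaining chain."""
    log = []
    state = 0.0
    for _ in range(pre_burnin):       # states advanced, nothing logged
        state = step(state)
    for i in range(chain_length):
        state = step(state)
        if i % log_every == 0:
            log.append(state)         # stand-in for writing a log line
    return log

# A trivial deterministic "proposal" so the sketch is runnable
# (a real chain is stochastic, so only the distributions would match):
step = lambda x: x + 1

# A pre-burnin of 6 states equals running 6 extra states with no
# pre-burnin and discarding the first 6 // log_every logged samples:
with_pre = run_chain(step, chain_length=20, log_every=2, pre_burnin=6)
post_hoc = run_chain(step, chain_length=26, log_every=2)[3:]
print(with_pre == post_hoc)  # True
```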

Cheers,

Remco