Hello and thank you in advance to anybody that can shed some light for
me.
So I believe I have properly built the latest versions of BEAST and
beagle-lib to work with my CUDA installation. I can process the
benchmark.xml file using the following command from a folder inside
the Beastv1.5.2 directory:

java -Djava.library.path=/usr/local/lib -cp ../lib/beast.jar:../lib/beast-beagle.jar dr.app.beast.BeastMain -beagle ../examples/benchmark.xml
Using BEAGLE resource 0 : CPU
Flags: DOUBLE SINGLE ASYNCH SYNCH COMPLEX CPU SSE
....
Time taken: 1.2012666666666665 minutes
By adding the option -Dbeagle.resource.order='1' or
-Dbeagle.resource.order='2', BEAGLE will use the corresponding GPU
instead of the CPU:

java -Dbeagle.resource.order='1' -Djava.library.path=/usr/local/lib -cp ../lib/beast.jar:../lib/beast-beagle.jar dr.app.beast.BeastMain -beagle data/Multigene.xml
Using BEAGLE resource 1 : Tesla C1060
Global memory (MB): 4096
Clock speed (Ghz): 1.30
Number of cores: 240
Flags: SINGLE ASYNCH SYNCH COMPLEX LSCALE GPU
....
Time taken: 2.3076 minutes
My problem occurs when trying to process a different data set. In
this Multigene.nex file, "the Bayes block is written into the nexus
file and the evolutionary models are partitioned for each gene" (I am
a CS guy, so that is what the biologist told me). I used BEAUti to
open the original Multigene.nex and export a BEAST Multigene.xml
file.
Regular BEAST runs the file in about 45 minutes on 8 cores.
Using BEAGLE with the CPU takes about 25 minutes.
Trying to use BEAGLE with either of the GPUs leads to the following
error:
Error running file: Multigene.xml
The initial model is invalid because state has a zero probability.
If the log likelihood of the tree is -Inf, this may be because the
initial, random tree is so large that it has an extremely bad
likelihood which is being rounded to zero.
Alternatively, it may be that the product of starting mutation rate
and tree height is extremely small or extremely large.
Finally, it may be that the initial state is incompatible with
one or more 'hard' constraints (on monophyly or bounds on parameter
values). This will result in Priors with zero probability.
The individual components of the posterior are as follows:
The initial posterior is zero:
CompoundLikelihood(compoundModel)=
(OneOnX(ac)=0.0,
OneOnX(ag)=0.0,
OneOnX(at)=0.0,
OneOnX(cg)=0.0,
OneOnX(gt)=0.0,
OneOnX(constant.popSize)=1.6094,
CoalescentLikelihood(coalescentLikelihood)=63.6527),
CompoundLikelihood(compoundModel)=(
BeagleTreeLikelihood(treeLikelihood)=-63200.4734,
BeagleTreeLikelihood(treeLikelihood)=-Inf,
BeagleTreeLikelihood(treeLikelihood)=-41101.3766,
BeagleTreeLikelihood(treeLikelihood)=-14636.4653)
For more information go to <http://beast.bio.ed.ac.uk/>.
Exception in thread "main" java.lang.RuntimeException: ErrorLog:
Maximum number of errors reached.
Terminating BEAST
at dr.util.ErrorLogHandler.publish(Unknown Source)
at java.util.logging.Logger.log(Logger.java:476)
at java.util.logging.Logger.doLog(Logger.java:498)
at java.util.logging.Logger.log(Logger.java:521)
at java.util.logging.Logger.severe(Logger.java:1008)
at dr.app.beast.BeastMain.<init>(Unknown Source)
at dr.app.beast.BeastMain.main(Unknown Source)
Can anybody help me figure out how to get around this? Wouldn't all of
the initial settings and constraints and so on be the same for CPU or
GPU configurations? Or does the actual beast.xml file need to be
tailored to run on the GPU for larger datasets?
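For what it's worth, my current guess is a single-precision underflow: the GPU resource reports only SINGLE in its flags, while the CPU run shows DOUBLE. A per-site likelihood that is representable in double precision can round to exactly 0.0 in float32, so its log becomes -Inf. A minimal Python sketch of that effect (the value 1e-50 is just an illustrative magnitude, not taken from my data):

```python
import numpy as np

tiny = 1e-50  # hypothetical per-site likelihood; magnitude chosen for illustration

# In double precision the value is representable and its log is finite.
print(np.log(np.float64(tiny)))   # about -115.13

# In single precision the same value underflows to exactly 0.0 ...
print(np.float32(tiny))           # 0.0

# ... so the log-likelihood becomes -inf, like the
# BeagleTreeLikelihood(treeLikelihood)=-Inf component above.
print(np.log(np.float32(tiny)))   # -inf
```

If that is what is happening, it would explain why only one of the four partition likelihoods goes to -Inf on the GPU while the same XML runs fine in double precision on the CPU.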
Thanks again,
-Nick