BEAST2 "unable to create new native thread" on cluster


Nitish Narula

Jul 19, 2016, 2:20:47 PM
to beast-users
Hi everyone,

Has anyone had issues running BEAST2 as an array job on a cluster with the SLURM scheduler?

Seemingly at random, some array jobs fail and report the following error:

Exception in thread "pool-64-thread-6" Exception in thread "pool-64-thread-5" Exception in thread "pool-64-thread-7" Exception in thread "pool-64-thread-8" Exception in thread "pool-64-thread-3" Exception in thread "pool-64-thread-2" Exception in thread "pool-64-thread-4" Exception in thread "pool-64-thread-1"
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1018)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I am puzzled as to what might be causing this issue. The array jobs are pretty much identical to each other, so the environments they run in are very similar. Yet some array jobs run just fine and report no errors, while others don't.

I am using BEAST v2.4.2 and BEAGLE v2.1.2 (1262).

My alignment has a bunch of partitions, so I am using the instances and threads options, as follows:

$ beast -beagle_SSE -instances 8 -threads 8 <xml file>

The SBATCH command instantiates every array job with 8 CPUs on a single node and 4 GB per CPU. Again, this configuration shouldn't be what is causing the issue, because some array jobs work just fine.
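For reference, a minimal submission script consistent with the setup described above might look like this (the job name, array range, and XML filename pattern are placeholders, not taken from the original post):

```shell
#!/bin/bash
#SBATCH --job-name=beast-array   # placeholder name
#SBATCH --array=1-50             # placeholder array range
#SBATCH --nodes=1                # each array job runs on a single node
#SBATCH --cpus-per-task=8        # 8 CPUs per array job
#SBATCH --mem-per-cpu=4G         # 4 GB per CPU, as described

# Same invocation as above; the per-task XML filename is a placeholder.
beast -beagle_SSE -instances 8 -threads 8 analysis_${SLURM_ARRAY_TASK_ID}.xml
```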

Any ideas/suggestions will be helpful!

Thanks!
Nitish

Remco Bouckaert

Jul 19, 2016, 3:29:38 PM
to beast...@googlegroups.com
Hi Nitish,

The error message suggests that the job requires more memory (java.lang.OutOfMemoryError: unable to create new native thread). By default, 4 gigabytes of memory are reserved for all of BEAST. You can edit the beast script and replace -Xmx4g in the last lines of the script with a higher value, say -Xmx8g, to allocate 8 gigabytes.
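The beast launcher is a small wrapper script whose final line invokes java; as a sketch of the change (the jar path and surrounding options are placeholders, not copied from an actual install), it looks like:

```shell
# Near the end of the beast wrapper script, raise the JVM heap cap:
#   before:  java -Xmx4g -jar /path/to/beast/lib/beast.jar "$@"
# After, with an 8 GB heap (adjust to match your SLURM allocation):
java -Xmx8g -jar /path/to/beast/lib/beast.jar "$@"   # path is a placeholder
```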

You mention that you allocate 4 GB per CPU, not per core. Perhaps you need to increase the memory in your SBATCH command to match the amount you allocate to BEAST.

Hope this helps,

Remco



Nitish Narula

Jul 22, 2016, 5:09:36 PM
to beast-users
Thanks, Remco. This did the trick, but I am still wondering why some jobs ran with -Xmx4g. In earlier tests, before submitting the jobs to the cluster, I ran BEAST with 4 GB of memory on a compute server, and it worked fine there, too.
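One plausible contributor (an assumption, not confirmed in this thread): "unable to create new native thread" is raised when the JVM cannot allocate the *stack* for a new thread, and thread stacks live outside the -Xmx heap, so success depends on how much native-memory headroom the node happens to have rather than on the heap flag alone. Rough arithmetic, assuming the common 1 MB default stack size (-Xss1m):

```shell
# Back-of-envelope JVM footprint: the heap (-Xmx) is not the whole story.
# Each native thread also needs its own stack OUTSIDE the heap; the JVM
# throws "unable to create new native thread" when that allocation fails.
heap_mb=4096      # -Xmx4g, the default in the beast script
stack_kb=1024     # assumed default thread stack size (-Xss1m)
threads=8         # -threads 8 from the command line above
stacks_mb=$(( threads * stack_kb / 1024 ))
total_mb=$(( heap_mb + stacks_mb ))
echo "heap=${heap_mb}MB thread-stacks~=${stacks_mb}MB footprint>=${total_mb}MB"
```

The heap dominates here, but the stack allocations are the ones that fail with this particular error, which is why near-identical jobs can behave differently on nodes with different amounts of free memory.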

Thanks again,
Nitish