SNAPP multithreading problems on LSF


Artem Pankin

May 16, 2018, 2:04:22 PM5/16/18
to beast-users
Hi 

When I run SNAPP on an LSF cluster, my jobs get killed almost instantly after reaching the LSF thread limit set by the sysadmins, despite my specifying the -threads parameter. Could it be that -threads specifies the number of cores / processes rather than actual threads? 

Is there a way to circumvent this without reconfiguring the cluster?


Thank you

Artem


Remco Bouckaert

May 16, 2018, 3:55:12 PM5/16/18
to beast...@googlegroups.com
Hi Artem,

Just confirming that the -threads parameter specifies the number of threads, which usually get spread over the available cores. However, there should be only one process with BEAST v2.4.x; with BEAST v2.5.0 there may be two processes, depending on how you start it.

If you start with the beast script, there will be one process for BeastLauncher, which creates a second process that runs BEAST with the number of threads specified by the -threads parameter.

If you start by calling java with an explicitly specified class path (which is a bit of a hassle, and is what BeastLauncher takes care of), there will be only one process with the specified number of threads.
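To make the two start-up modes concrete, here is a sketch (the paths and the XML file name are placeholders; the main class name matches the one used later in this thread):

```shell
# Mode 1: via the beast script -- one BeastLauncher process, which spawns
# the actual BEAST process
./beast -threads 2 analysis.xml

# Mode 2: calling java directly with an explicit classpath -- a single
# JVM process with the specified number of worker threads
java -cp /path/to/beast.jar beast.app.beastapp.BeastMain -threads 2 analysis.xml
```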

Hope this helps 

Remco


Artem Pankin

May 17, 2018, 2:30:42 PM5/17/18
to beast-users
Hi Remco,

Thanks for clarifying; however, I'm still slightly confused about what is happening. I guess I'm launching beast (v2.5.0) via BeastLauncher. I'm pasting the command and the LSF message below. I would greatly appreciate your advice on how to stop beast from using 232 threads.

# LSBATCH: User input
./beast -threads 2 -seed 123456789 defaults_snapp.xml
------------------------------------------------------------

TERM_THREADLIMIT: job killed after reaching LSF thread limit.
Exited with exit code 130.

Resource usage summary:

    CPU time   :      1.02 sec.
    Max Memory :     10372 MB
    Max Swap   :     54017 MB

    Max Processes  :         5
    Max Threads    :       232
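
To see whether threads reported by a scheduler are actually running or mostly idle, one can inspect a process's live thread count (the NLWP column on Linux). A minimal, self-contained sketch, demonstrated here on the current shell; on the cluster one would substitute the PID of the running BEAST job:

```shell
# NLWP = "number of light-weight processes", i.e. threads of a process.
# Demonstrated on this shell; replace $$ with the BEAST job's PID on the cluster.
pid=$$
nthreads=$(ps -o nlwp= -p "$pid" | tr -d ' ')
echo "nthreads=$nthreads"
```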


Artem

Artem Pankin

May 28, 2018, 2:40:53 PM5/28/18
to beast-users
Any ideas about what I might be doing wrong? Sorry for the hassle.

Thanks
Artem



Remco Bouckaert

May 28, 2018, 3:17:13 PM5/28/18
to beast...@googlegroups.com
Not sure what is going wrong; perhaps you can ask the cluster administrator for advice.

Another thing to try is not to use the launcher, but to start beast directly with

java -Dbeast.load.jars=true /path/to/beast.jar -threads 2 -seed 123456789 defaults_snapp.xml

where “/path/to” is replaced with the actual path to where beast.jar is. This should save one process, which may be confusing the scheduler when using the launcher.
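
For reference, as written the java command above names no -jar flag or main class, so the JVM has nothing to run; the invocation that eventually worked later in this thread names the BeastMain class explicitly. A sketch of that form (the classpath entries are placeholders for the actual BEAST and SNAPP install locations):

```shell
# Direct invocation needs an explicit main class on the classpath
# (paths are placeholders; adjust to the actual install)
java -Dbeast.load.jars=true \
    -cp /path/to/snap.addon.jar:/path/to/beast.jar \
    beast.app.beastapp.BeastMain \
    -threads 2 -seed 123456789 defaults_snapp.xml
```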

Cheers,

Remco

Artem Pankin

May 29, 2018, 10:31:16 AM5/29/18
to beast-users
Remco, thanks for the hint. The command you suggested didn't work, so I launched it directly with ...

java -cp ::/home/user/.beast/2.5/SNAPP/lib/snap.addon.jar:/home/user/.beast/2.5/BEAST/lib/beast.jar beast.app.beastapp.BeastMain -threads 2 -seed 123456789 defaults_snapp.xml

...and the number of threads is now within the cluster limit, though still very high.

Resource usage collected.
MEM: 10 Gbytes;  SWAP: 40 Gbytes;  NTHREAD: 117

I guess it has something to do with how beast / SNAPP manages sleeping threads, because according to htop only the two threads I specified are actively used (CPU% ~ 200%). Weird.
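
A plausible explanation (an assumption, not something confirmed in this thread) is that most of the reported threads belong to the JVM itself: garbage-collector, JIT-compiler, and other service threads are created in proportion to the number of visible cores, independently of BEAST's -threads setting. These pools can be capped with standard HotSpot -XX flags (-XX:ActiveProcessorCount requires Java 8u191 or newer); whether that brings the count under a given LSF limit depends on the cluster, so treat this as a sketch:

```shell
# Cap the JVM's internal thread pools (standard HotSpot -XX flags;
# the values here are illustrative, not tuned, and the classpath is a placeholder)
java -XX:ActiveProcessorCount=2 \
     -XX:ParallelGCThreads=2 \
     -XX:CICompilerCount=2 \
     -cp /path/to/beast.jar beast.app.beastapp.BeastMain \
     -threads 2 -seed 123456789 defaults_snapp.xml
```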


Cheers

Artem