error in butterfly part of trinity run


Edwin Solares

Jul 14, 2015, 6:54:02 PM
to trinityrn...@googlegroups.com
Hi, I am getting several of these errors when Trinity tries to run Butterfly:

Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit

I have been able to run "which java" and I am given a correct location and version 1.7. I have also tested the memory requirements and have seen that 24GB is the maximum heap the Java VM will let me allocate, even though the machine has about 30GB of usable RAM (2GB reserved for the OS).
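
The kind of check I mean, roughly (the sizes are what I observed):

    java -Xmx24g -version   # starts fine, so a 24G heap can be reserved
    java -Xmx25g -version   # fails: "Could not reserve enough space for object heap"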

Could you please help with this error?

When I ran Trinity I used a max memory option of 24GB. Does Trinity try to launch multiple instances of Butterfly when run in non-grid mode?

Thank you for your time,

Edwin


Tiago Hori

Jul 14, 2015, 7:21:49 PM
to Edwin Solares, trinityrn...@googlegroups.com
What version are you running? 

T.

"Profanity the is the only language all programmers understand" 
Sent from my iPhone, the universal excuse for my poor spelling.

Edwin Solares

Jul 14, 2015, 7:29:53 PM
to trinityrn...@googlegroups.com, sola...@uci.edu
Hi.

I'm running Trinity 2.0.6

Tiago Hori

Jul 14, 2015, 8:03:01 PM
to Edwin Solares, trinityrn...@googlegroups.com
Yeah, I should have known when you said max memory. Can you run with the --verbose flag?

T.

"Profanity the is the only language all programmers understand" 
Sent from my iPhone, the universal excuse for my poor spelling.

Edwin Solares

Jul 15, 2015, 1:20:25 PM
to trinityrn...@googlegroups.com, sola...@uci.edu
Yes, I will run it today.

Edwin Solares

Jul 15, 2015, 1:39:01 PM
to trinityrn...@googlegroups.com, sola...@uci.edu
Hi, I was able to run it now, and in the stdout I receive the following error several times; each time the failed count increases and the integer in failed_butterfly_commands.<int>.txt increments.

Here is the error that repeats:

We are sorry, commands in file: [failed_butterfly_commands.1265.txt] failed.  :-( 

Trinity run failed. Must investigate error above.
succeeded(1351), failed(6811)   39.6271% completed.    Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

The hs_err log file for this run shows this at the beginning:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (gcTaskThread.cpp:46), pid=99636, tid=47192577210112
#
# JRE version:  (7.0_79-b15) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#

---------------  T H R E A D  ---------------

Current thread (0x00002aebe4009000):  JavaThread "Unknown thread" [_thread_in_vm, id=99669, stack(0x0000$

Stack: [0x00002aebe0bcf000,0x00002aebe0cd0000],  sp=0x00002aebe0cce540,  free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x9a32da]  VMError::report_and_die()+0x2ea
V  [libjvm.so+0x497f7b]  report_vm_out_of_memory(char const*, int, unsigned long, char const*)+0x9b
V  [libjvm.so+0x5584ba]  GCTaskThread::GCTaskThread(GCTaskManager*, unsigned int, unsigned int)+0x11a
V  [libjvm.so+0x557a38]  GCTaskManager::initialize()+0x2b8
V  [libjvm.so+0x841818]  ParallelScavengeHeap::initialize()+0x6f8
V  [libjvm.so+0x9751aa]  Universe::initialize_heap()+0xca
V  [libjvm.so+0x976379]  universe_init()+0x79
V  [libjvm.so+0x5b1d25]  init_globals()+0x65
V  [libjvm.so+0x95dc6d]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x1ed
V  [libjvm.so+0x639fe4]  JNI_CreateJavaVM+0x74
C  [libjli.so+0x2f8e]  JavaMain+0x9e


---------------  P R O C E S S  ---------------

Java Threads: ( => current thread )

Other Threads:

=>0x00002aebe4009000 (exited) JavaThread "Unknown thread" [_thread_in_vm, id=99669, stack(0x00002aebe0bc$

VM state:not at safepoint (not fully initialized)

VM Mutex/Monitor currently owned by a thread: None

GC Heap History (0 events):
No events

Deoptimization events (0 events):
No events

Internal exceptions (0 events):
No events

Events (0 events):
No events

***several dynamic libs here****

VM Arguments:
jvm_args: -Xmx64m
java_command: /home1/02320/esolares/bin/trinityrnaseq-2.0.6/util/support_scripts/ExitTester.jar 0
Launcher Type: SUN_STANDARD


---------------  S Y S T E M  ---------------

OS:CentOS release 6.6 (Final)

uname:Linux 2.6.32-431.17.1.el6.x86_64 #1 SMP Wed May 7 23:32:49 UTC 2014 x86_64
libc:glibc 2.12 NPTL 2.12
rlimit: STACK infinity, CORE 0k, NPROC 8192, NOFILE 16384, AS infinity
load average:13.46 12.91 9.02

/proc/meminfo:
MemTotal: 32815324 kB
MemFree:        21315772 kB
Buffers:            7040 kB
Cached:           440716 kB
SwapCached:            0 kB
Active:          6330988 kB
Inactive:         260864 kB
Active(anon):    6144080 kB
Inactive(anon):      188 kB
Active(file):     186908 kB
Inactive(file):   260676 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              8348 kB
Writeback:             0 kB
AnonPages: 6144100 kB
Mapped:            24568 kB
Shmem:               188 kB
Slab:            1629168 kB
SReclaimable:   56840 kB
SUnreclaim: 1572328 kB
KernelStack:        5184 kB
PageTables:        21560 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    31830864 kB
Committed_AS:   28959656 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     9582704 kB
VmallocChunk:   34331929016 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        4096 kB
DirectMap2M:     2076672 kB
DirectMap1G:    31457280 kB

*** a bunch of info in regards to intel cores***

Memory: 4k page, physical 32815324k(21307480k free), swap 0k(0k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (24.79-b02) for linux-amd64 JRE (1.7.0_79-b15), built on Apr  10 2015 11:34:48 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)

time: Wed Jul 15 12:26:12 2015
elapsed time: 0 seconds

So it appears as though I'm hitting a memory ceiling, but I have tried several values below 20G for the max_memory variable.
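
For completeness, the limits the log refers to can be checked with stock shell commands (nothing Trinity-specific):

    ulimit -a   # per-process rlimits; compare the "rlimit:" line in the log
    free -m     # physical RAM and swap; /proc/meminfo above shows SwapTotal: 0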

Tiago Hori

Jul 15, 2015, 1:57:08 PM
to Edwin Solares, trinityrn...@googlegroups.com
Can you send us the command you ran and the specs of the computer or node where you are running it?

Also, in the meantime you could try the Galaxy Trinity server.

T.

"Profanity the is the only language all programmers understand" 
Sent from my iPhone, the universal excuse for my poor spelling.

Edwin Solares

Jul 15, 2015, 2:08:56 PM
to trinityrn...@googlegroups.com, sola...@uci.edu
Hi,

As noted above, the system has 32GB of RAM (only about 28-30GB of it usable) and a 16-core Intel CPU. It is a standard node over at Stampede. More detailed information can be found here:

I understand there is a way to use SLURM but I have not tested it yet.

The parameters I am using are:

Trinity --seqType fq --max_memory 16G --left <several sequences> --right <several sequences> --CPU 16 --normalize_reads --verbose

Regarding the Trinity Galaxy server, I am not sure I can upload 350GB of data, and even if I could, it would probably take a very long time. Would I be able to transfer via Globus?

Thank you,

Edwin

Tiago Hori

Jul 15, 2015, 2:19:37 PM
to Edwin Solares, trinityrn...@googlegroups.com
The problem is clear, I think; the cause is a different issue. You are running out of memory. You could reduce the Butterfly heap space to 8G. It will slow you down, but it might help. How many sequences do you have? I am surprised you made it through normalization.
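
On the command line that is the --bflyHeapSpaceMax flag, e.g. (placeholder read files, other options as in your command):

    Trinity --seqType fq --max_memory 16G --left reads_1.fq --right reads_2.fq \
            --CPU 16 --normalize_reads --verbose \
            --bflyHeapSpaceMax 8G   # per-Butterfly-JVM heap cap; total is heap x parallel workers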

T.

"Profanity the is the only language all programmers understand" 
Sent from my iPhone, the universal excuse for my poor spelling.

Edwin Solares

Jul 15, 2015, 2:36:10 PM
to trinityrn...@googlegroups.com, sola...@uci.edu
So I'm running a master library and, in parallel on other nodes, subsets of libraries.

When I run all of the libraries I do indeed get errors in normalization, but I was not sure if that had to do with the spaces in the read headers. I am waiting to rerun the master set. The subset libraries contain 335 million reads for each pair.

I have just submitted a test run to the largemem nodes, which have 32 cores and 1TB of RAM. I set the max_memory parameter to 900G and I am no longer getting errors in Butterfly.

Regarding the master list:

The master list contains between 1.5 and 2 billion reads for each pair. Would this not be able to make it past normalization just due to the sheer size of my dataset, even if I ran it on the 1TB node?

Fulton, Ben

Jul 15, 2015, 3:01:25 PM
to Edwin Solares, trinityrn...@googlegroups.com

Hi,

The max_memory parameter actually doesn't apply to the Butterfly section. Trinity will try to launch Butterfly instances up to the number of cores you specify on the command line. Butterfly is a Java app and each instance should default to 4G, so with your 16 cores you would need 64G to run successfully. There are parameters you can change to fiddle with this value (I think they still work in 2.0.6), such as --bflyHeapSpaceMax and --bflyCPU, or the system will try to calculate reasonable values for you if you specify --bflyCalculateCPU.
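
For example, a rough sketch that keeps the Butterfly phase inside your ~30G of usable RAM (placeholder read files; 6 workers x 4G heap = 24G):

    Trinity --seqType fq --max_memory 16G \
            --left reads_1.fq --right reads_2.fq \
            --CPU 16 --normalize_reads --verbose \
            --bflyHeapSpaceMax 4G --bflyCPU 6   # at most 6 Butterfly JVMs, 4G each

or drop --bflyCPU and add --bflyCalculateCPU to let Trinity work out the worker count itself.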

--

Ben Fulton

Research Technologies

Scientific Applications and Performance Tuning

Indiana University

E-Mail: befu...@iu.edu

Tiago Hori

Jul 15, 2015, 3:06:20 PM
to Edwin Solares, trinityrn...@googlegroups.com
It should run fine on the 1TB node. The issue is normalization; after that you are fine, since a vast number of the reads are not used. Obviously Butterfly tries to run its commands in parallel to be faster, but sometimes it overflows the memory on smaller systems. It should not give you grief with 1TB. I have run 1 billion reads with 256G.

The most likely reason for your normalization problem is that you are running out of memory during k-mer counting.

T.

"Profanity the is the only language all programmers understand" 
Sent from my iPhone, the universal excuse for my poor spelling.

Edwin Solares

Jul 15, 2015, 3:36:39 PM
to Fulton, Ben, trinityrn...@googlegroups.com
Sounds good. Thank you.

I will see if my job fails again at Stampede, and if it does, I will ask for help uploading it via Globus.

Thank you very much for your time,

Edwin Solares, B.S.
Computational Biology
Department of Ecology and Evolutionary Biology
413 Steinhaus Hall
University of California, Irvine
Irvine, CA 92697
USA

Ken Field

Jul 16, 2015, 6:07:34 AM
to Edwin Solares, Fulton, Ben, trinityrn...@googlegroups.com
Edwin-
I would try using fewer CPUs (see the sketch below), because at the Butterfly step Trinity tries to run multiple processes at once and runs out of memory. The other solution would be to use one of the large-memory nodes on Stampede, as recommended here:
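
A sketch of the fewer-CPU route (placeholder read files; assuming the ~4G default Butterfly heap mentioned above, 6 workers x 4G = 24G, inside the ~30G usable):

    Trinity --seqType fq --max_memory 16G \
            --left reads_1.fq --right reads_2.fq \
            --CPU 6 --normalize_reads --verbose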

Ken
Ken Field, Ph.D.
Associate Professor of Biology
Program in Cell Biology/Biochemistry
Bucknell University
Room 203A Biology Building

Edwin Solares

Jul 17, 2015, 2:56:58 PM
to trinityrn...@googlegroups.com, sola...@uci.edu, befu...@iu.edu
Hi,

I was able to run 2 subsets successfully for 12 hours in the largemem queue; the others, however, failed due to some other issues.

Thank you very much for your help. Now I will just need to clean up my data.