Inchworm ran out of memory


தினேஷ் குமார் சு

Aug 26, 2015, 12:40:58 AM
to trinityrnaseq-users
Hi,

Last month I installed Trinity and completed assembly of the first set of my transcriptome files.

Now I have a new problem with the second and third sets of my transcriptome files.

I did not change any command; I simply used the same command I used for the first set.

The second and third sets did not complete, because of the following error.

bash: line 1: 25736 Aborted                 (core dumped) /home/ubuntu/Desktop/software/trinityrnaseq-2.0.6/Inchworm/bin//inchworm --kmers jellyfish.kmers.fa --run_inchworm -K 25 -L 25 --monitor 1 --keep_tmp_files --num_threads 5 --PARALLEL_IWORM > /home/ubuntu/Desktop/test2/trinity_out_dir/inchworm.K25.L25.fa.tmp 2> /dev/null
Error, cmd: /home/ubuntu/Desktop/software/trinityrnaseq-2.0.6/Inchworm/bin//inchworm --kmers jellyfish.kmers.fa --run_inchworm -K 25 -L 25 --monitor 1   --keep_tmp_files  --num_threads 5  --PARALLEL_IWORM  > /home/ubuntu/Desktop/test2/trinity_out_dir/inchworm.K25.L25.fa.tmp 2>/dev/null died with ret 34304 at /home/ubuntu/Desktop/software/trinityrnaseq-2.0.6/Trinity line 2116.

If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data set or run Trinity on a server with more memory available.


But my computer has 900 GB of free space.


Can anyone please help me solve this problem?

Mark Chapman

Aug 26, 2015, 3:23:17 AM
to தினேஷ் குமார் சு, trinityrnaseq-users
Hello,
I'm guessing the difference is between 'free space' and 'memory': you ran out of memory (RAM), not disk space. What was your initial command? Trinity recommends:
"A basic recommendation is to have 1G of RAM per 1M pairs of Illumina reads"
How many reads do you have? If it's a lot, try the normalisation step (http://trinityrnaseq.github.io/#insilinorm). Also, what is your command?
See if this helps. Good luck, Mark
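As a quick way to apply that rule of thumb, you can count the records in a FASTQ file from the shell (the file name below is a placeholder; FASTQ stores 4 lines per read):

```shell
# Count records in a gzipped FASTQ (4 lines per read) and estimate RAM
# from the ~1 GB per 1 M reads rule of thumb. File name is hypothetical.
lines=$(zcat reads.fq.gz | wc -l)
reads=$((lines / 4))
echo "${reads} reads -> suggest --max_memory of roughly $((reads / 1000000 + 1))G"
```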



--
You received this message because you are subscribed to the Google Groups "trinityrnaseq-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to trinityrnaseq-u...@googlegroups.com.
To post to this group, send email to trinityrn...@googlegroups.com.
Visit this group at http://groups.google.com/group/trinityrnaseq-users.
For more options, visit https://groups.google.com/d/optout.



--
Dr. Mark A. Chapman
+44 (0)2380 594396
------------------------------------
Centre for Biological Sciences
University of Southampton
Life Sciences Building 85
Highfield Campus
Southampton
SO17 1BJ

தினேஷ் குமார் சு

Aug 26, 2015, 7:25:42 AM
to trinityrnaseq-users, dinesh...@gmail.com
Thanks for the reply, Dr. Mark A. Chapman.

I am using the following command for my assembly:

Trinity --seqType fq --max_memory 6G --single R133-L7-P09-ATGTCA-Sequences.txt.gz --SS_lib_type F --CPU 6

My transcriptome files are single-end reads, not paired-end. My computer has only 8 GB of RAM, and each file contains 26M reads, so by your suggestion I would need 26 GB of RAM to run it.


But here I have one doubt: I already ran one file without any problem, using the same command on the same computer, and that file also had 26M reads. How??

I am also a newbie to Linux software. I saw the normalisation method on the Trinity webpage, and I am confused by it. I understand it looks like a normal Trinity assembly with some extra features added, but I don't know how to set the correct values for my file. So please give me some details about that method, or suggest a good review article.

Once again, thanks for the reply, and sorry for my poor English.



Mark Chapman

Aug 26, 2015, 7:54:17 AM
to தினேஷ் குமார் சு, trinityrnaseq-users

Hi,
The normalisation is built into Trinity, so you can just add the normalise flag to your command. You can change the coverage parameter, but leaving it at the default will probably suffice.
You can also run Trinity on Galaxy to get access to a lot more RAM.
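In the Trinity 2.x versions discussed in this thread, that flag is --normalize_reads (with --normalize_max_read_cov controlling the coverage target). A sketch, reusing the single-end command shape from earlier in the thread; the file name is a placeholder:

```shell
# Sketch only: the same single-end command with in silico
# normalisation enabled via --normalize_reads.
Trinity --seqType fq --max_memory 6G \
        --single reads.fq.gz --SS_lib_type F --CPU 6 \
        --normalize_reads
```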
Hope this helps, Mark

தினேஷ் குமார் சு

Aug 26, 2015, 9:11:48 AM
to trinityrnaseq-users, dinesh...@gmail.com
Thanks for the quick reply.

I will try the normalisation with the default value.

Sorry for the inconvenience, but do you know which paper or review article is suitable for this type of work? I want to understand those values. Many people now try different k-mer settings and get better results with fewer errors, so I want to know what those values mean and how they give better results.

I think this information will help a lot of newbies working on NGS transcriptome analysis.


Mark Chapman

Aug 26, 2015, 4:12:55 PM
to தினேஷ் குமார் சு, trinityrnaseq-users

Hi, I think what you're thinking of is changing the k-mer size for assembly. For the normalisation, the parameter essentially just changes the number of identical k-mers which remain, so changing it from 30 (the default) to a higher value is unlikely to change your assembly (but you can try!).
The Trinity website is a good starting point for more info, but comparing different assemblies of the same data often isn't carried out. Check out a program called DETONATE, which will compare your assembly to your raw data and give you a likelihood score. In theory, the assembly with the highest likelihood is the best one you have.
Thanks, Mark

தினேஷ் குமார் சு

Aug 27, 2015, 1:07:18 AM
to trinityrnaseq-users, dinesh...@gmail.com
Thanks, Dr. Mark A. Chapman,

for the valuable suggestion. It will help me improve my NGS analysis.

Maripaz Celorio

Jan 13, 2017, 3:43:41 AM
to trinityrnaseq-users, dinesh...@gmail.com

Hello!

I ran into a similar situation. I should add that I have normalised my reads twice, once outside of Trinity and a second time during the assembly. My first two de novo assemblies with Trinity went just fine, but the third gave me this message:

If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data set or run Trinity on a server with more memory available.

succeeded(62683), failed(1)   86.4917% completed.    slurmstepd: error: _get_pss: ferror() indicates error on file /proc/23208/smaps
succeeded(72473), failed(1)   100% completed.

We are sorry, commands in file: [FailedCommands] failed.  :-(

Trinity run failed. Must investigate error above.

I do not see "bad_alloc()" in the report, but this:

slurmstepd: error: _get_pss: ferror() indicates error on file /proc/17812/smaps


and the following error line:

...
Error, cmd: mv /pica/v5/b2016162_nobackup/Pfa/insilico_norPfa/insilico_norPfa_all/trinity_outFR/read_partitions/Fb_0/CBin_612/c61212.trinity.reads.fa.out/inchworm.K25.L25.fa.tmp /pica/v5/b2016162_nobackup/Pfa/insilico_norPfa/ins
ilico_norPfa_all/trinity_outFR/read_partitions/Fb_0/CBin_612/c61212.trinity.reads.fa.out/inchworm.K25.L25.fa 2>tmp.47497.stderr died with ret 256 at /pica/sw/apps/bioinfo/trinity/2.1.0/milou/PerlLib/Pipeliner.pm line 102
        Pipeliner::run('Pipeliner=HASH(0x2554728)') called at /pica/sw/apps/bioinfo/trinity/2.1.0/milou/util/support_scripts/../../Trinity line 2054...



Does this mean I need more memory?

thanks for your help,

Maria

Tiago Hori

Jan 13, 2017, 6:32:44 AM
to Maripaz Celorio, trinityrnaseq-users, dinesh...@gmail.com
Maria,

You are getting an error on a move command, which likely indicates either that you are running out of disk space or that you don't have enough space allocated to your account. If you are on a shared server, the latter is likely: sysadmins restrict how much space each account can use.
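A quick way to check both possibilities from the shell (the output path is a placeholder):

```shell
# Free space on the filesystem holding the Trinity output directory.
df -h /path/to/trinity_out_dir

# Per-user quota, if the cluster enforces one (not every system has 'quota').
quota -s 2>/dev/null || echo "no quota command available / no quota set"
```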

T.

Dr. Tiago S. Hori
Associate Director of Genomics 
The Center for Aquaculture Technologies 

Pablo García Fernández

Feb 13, 2017, 7:37:12 AM
to trinityrnaseq-users, dinesh...@gmail.com
Mr Chapman, I have a memory issue, but looking at the Trinity website recommendations I can't see why. Maybe you could help me?

I tried to run these commands (input and output files omitted to avoid cluttering the message with long paths):

Trinity --seqType fq --SS_lib_type RF \
        --normalize_max_read_cov 50 \ ## I started with 30 but increased it to try to solve the problem
        --CPU 20 --max_memory 100G \

Trinity --seqType fq --SS_lib_type RF \
        --normalize_max_read_cov 50 \
        --CPU 20 --max_memory 100G \
        --bflyCPU 20 --bflyHeapSpaceMax 4G \

Trinity --seqType fq --SS_lib_type RF \
        --normalize_max_read_cov 50 \
        --CPU 20 --max_memory 100G \
        --bflyCPU 20 --bflyHeapSpaceMax 4G \
        --min_kmer_cov 2 \

I always got the same error asking for more memory. I saw the "rule" of 1M reads ≈ 1 GB of RAM, and my both.fa (which, if I'm not wrong, is the result of the normalisation) has 71,504,988 reads, so 100 GB should be enough.
If it helps to clarify, I also have 2,216,252,750 k-mers reported by Inchworm (maybe the problem is there).
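A back-of-the-envelope check suggests the k-mer count alone may explain it. Assuming very roughly 50 bytes per hash-table entry (a purely illustrative figure; the real per-entry overhead depends on Inchworm's hash implementation):

```shell
# 2,216,252,750 distinct k-mers x ~50 bytes/entry (illustrative assumption)
kmers=2216252750
bytes_per_entry=50
echo "~$((kmers * bytes_per_entry / 1024 / 1024 / 1024)) GB for the k-mer hash alone"
# -> ~103 GB, already above the 100G given to --max_memory
```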

Here is the common error message:

-populating the kmer seed candidate list.
Kcounter hash size: 2216252750
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Error, cmd: /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Inchworm/bin//inchworm --kmers jellyfish.kmers.fa --run_inchworm -K 25 -L 25 --monitor 1   --num_threads 6  --PARALLEL_IWORM  > /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY/inchworm.K25.L25.fa.tmp 2>tmp.18850.stderr died with ret 34304 at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/PerlLib/Pipeliner.pm line 166
Pipeliner::run('Pipeliner=HASH(0xf7c120)') called at /opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Trinity line 2288
eval {...} called at /opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Trinity line 2283
main::run_inchworm('/mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMB...', '/mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMB...', 'RF', '') called at /opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Trinity line 1536
main::run_Trinity() called at /opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Trinity line 1262
eval {...} called at /opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/Trinity line 1261

If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data set or run Trinity on a server with more memory available.

** The inchworm process failed.

Do I not have enough memory? Or should 100 GB be fine, meaning there is a problem with the system?

Thank you for your attention,
Pablo


Brian Haas

Feb 13, 2017, 8:42:51 AM
to Pablo García Fernández, trinityrnaseq-users, தினேஷ் குமார் சு
Hi Pablo,

It looks like it wants more RAM.   If you don't have a larger machine to run it on, there are options here:


best,

~brian





--
--
Brian J. Haas
The Broad Institute
http://broadinstitute.org/~bhaas

 

Pablo García Fernández

Feb 13, 2017, 8:51:07 AM
to trinityrnaseq-users, pablo...@gmail.com, dinesh...@gmail.com
Thank you, Brian. I'll try to run the analysis there.

Pablo

Pablo García Fernández

Mar 5, 2017, 6:15:28 AM
to trinityrnaseq-users, pablo...@gmail.com, dinesh...@gmail.com
Hello Brian,

Finally I could get a little further in solving the memory issue, but now I have found some warnings which scare me a little. I'll explain the scenario: because of the number of users on the HPC service I'm using, I have to run the Trinity assembly in several sessions.

The first time, I reached 40% of Trinity Phase 2 (Assembling Clusters of Reads) in a 10-hour run, but the next time I ran Trinity for 10 hours I got this:

Trinity Phase 2: Assembling Clusters of Reads
*
* (jump several steps done in a previous run)
*
succeeded(18184), failed(71)   11.0479% completed

My question is: is this 11% from the beginning, or am I now at some point like 51% (40% from the previous run plus 11% from this run)?

Second, why did I only reach 11% this time (same script, same resources)?

Third, what is this failed count?

Finally, do you recommend erasing some directory or file and starting "Trinity Phase 2: Assembling Clusters of Reads" again? I ask because the first time it reached roughly half of the process without any error messages.

Thank you for your time.

Pablo GF

Brian Haas

Mar 5, 2017, 7:47:00 AM
to Pablo García Fernández, trinityrnaseq-users, தினேஷ் குமார் சு
Hi Pablo,

The percentage of jobs completed refers to the jobs that are left to compute after resuming from an earlier run. Trinity simply skips over the commands that completed successfully earlier.

It would be useful to see error messages from the commands that failed.  Also, let's look at your Trinity command to see if it needs to be adjusted.
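One way to collect those, assuming the layout shown in the logs quoted in this thread (a FailedCommands file and tmp.*.stderr capture files in the run directory):

```shell
# List how many commands failed, then dump any stderr files left behind.
[ -f FailedCommands ] && wc -l FailedCommands
for f in tmp.*.stderr; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    echo "== $f"
    cat "$f"
done
```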

~brian


Pablo García Fernández

Mar 5, 2017, 12:16:21 PM
to trinityrnaseq-users, pablo...@gmail.com, dinesh...@gmail.com
Thank you for your response (even in the middle of the weekend!)

This is my Trinity command:

export RNAseq_READS=/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/reads_STSM

Trinity --seqType fq --SS_lib_type RF \
        --normalize_max_read_cov 50 \
        --CPU 20 --max_memory 590G \
        --left $RNAseq_READS/CA1_R1.trimm.fq.gz,$RNAseq_READS/CA2_R1.trimm.fq.gz,$RNAseq_READS/CA3_R1.trimm.fq.gz,$RNAseq_READS/FA1_R1.trimm.fq.gz,$RNAseq_READS/FA2_R1.trimm.fq.gz,$RNAseq_READS/FA3_R1.trimm.fq.gz,$RNAseq_READS/CZ1_R1.trimm.fq.gz,$RNAseq_READS/CZ2_R1.trimm.fq.gz,$RNAseq_READS/CZ3_R1.trimm.fq.gz,$RNAseq_READS/FZ1_R1.trimm.fq.gz,$RNAseq_READS/FZ2_R1.trimm.fq.gz,$RNAseq_READS/FZ3_R1.trimm.fq.gz,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/81_16790_TTAGGC_read1.fastq.gz.PwU.qtrim.fq,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/101_16791_GATCAG_read1.fastq.gz.PwU.qtrim.fq,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/119_16792_CGTACG_read1.fastq.gz.PwU.qtrim.fq \
        --right $RNAseq_READS/CA1_R2.trimm.fq.gz,$RNAseq_READS/CA2_R2.trimm.fq.gz,$RNAseq_READS/CA3_R2.trimm.fq.gz,$RNAseq_READS/FA1_R2.trimm.fq.gz,$RNAseq_READS/FA2_R2.trimm.fq.gz,$RNAseq_READS/FA3_R2.trimm.fq.gz,$RNAseq_READS/CZ1_R2.trimm.fq.gz,$RNAseq_READS/CZ2_R2.trimm.fq.gz,$RNAseq_READS/CZ3_R2.trimm.fq.gz,$RNAseq_READS/FZ1_R2.trimm.fq.gz,$RNAseq_READS/FZ2_R2.trimm.fq.gz,$RNAseq_READS/FZ3_R2.trimm.fq.gz,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/81_16790_TTAGGC_read2.fastq.gz.PwU.qtrim.fq,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/101_16791_GATCAG_read2.fastq.gz.PwU.qtrim.fq,/mnt/lustre/scratch/home/csic/pma/pgf/RNAseq_Paralarvas/Pa_Trinity_completo/119_16792_CGTACG_read2.fastq.gz.PwU.qtrim.fq \
        --output /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/

Something wrong here?

And now the error message:

If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data s
et or run Trinity on a server with more memory available.

** The inchworm process failed.Error, no fasta file reported as: /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_F
INAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132632.trinity.reads.fa.out/chrysalis/Component_bins/Cbin0/c3
.graph.allProbPaths.fasta
Trinity run failed. Must investigate error above.
sh: tmp.57080.stderr: Cannot send after transport endpoint shutdown
Error, cmd: mv /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_13
26/c132672.trinity.reads.fa.out/inchworm.K25.L25.fa.tmp /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSE
MBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132672.trinity.reads.fa.out/inchworm.K25.L25.fa 2>tmp.57080.stderr die
d with ret 256 at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/PerlLib/Pipeliner.pm line 166
        Pipeliner::run('Pipeliner=HASH(0x1646b58)') called at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/g
cc/5.3.0/util/support_scripts/../../Trinity line 2288
        eval {...} called at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/util/support_scripts/../
../Trinity line 2283
        main::run_inchworm('/mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMB...', '/mnt/lustre/scratch
/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMB...', 'F', '') called at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.
4.0/gcc/5.3.0/util/support_scripts/../../Trinity line 1536
        main::run_Trinity() called at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/util/support_sc
ripts/../../Trinity line 1262
        eval {...} called at /mnt/EMC/Optcesga_FT2/opt/cesga/trinityrnaseq/2.4.0/gcc/5.3.0/util/support_scripts/../
../Trinity line 1261


If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data s
et or run Trinity on a server with more memory available.

** The inchworm process failed.touch: setting times of '/mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSE
MBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132666.trinity.reads.fa.out/chrysalis/inchworm.K25.L25.fa.min100.bowti
e2-build.ok': Cannot send after transport endpoint shutdown
Trinity run failed. Must investigate error above.
Error, no fasta file reported as: /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_part
itions/Fb_1/CBin_1326/c132653.trinity.reads.fa.out/chrysalis/Component_bins/Cbin0/c3.graph.allProbPaths.fasta
Trinity run failed. Must investigate error above.
Only read 0 bytes (out of 3367) from reference index file /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_AS
SEMBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132673.trinity.reads.fa.out/chrysalis/inchworm.K25.L25.fa.min100.4.b
t2
Error: Encountered internal Bowtie 2 exception (#1)
Command: /opt/cesga/bowtie2/2.2.9/gcc/5.3.0/bin/bowtie2-align-s --wrapper basic-0 --local -k 2 --threads 1 -f --sco
re-min G,46,0 -x /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_
1326/c132673.trinity.reads.fa.out/chrysalis/inchworm.K25.L25.fa.min100 /mnt/lustre/scratch/home/csic/pma/pgf/PA_Tri
nity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132673.trinity.reads.fa.out/single.fa


(ERR): bowtie2-align exited with value 1
Trinity run failed. Must investigate error above.


We are sorry, commands in file: [failed_butterfly_commands.53297.txt] failed.  :-(

Exception in thread "main" java.io.FileNotFoundException: /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132658.trinity.reads.fa.out/chrysalis/Component_bins/Cbin0/c2.graph.reads (Input/output error)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at java.io.FileReader.<init>(FileReader.java:58)
        at TransAssembly_allProbPaths.getReadStarts(TransAssembly_allProbPaths.java:11672)
        at TransAssembly_allProbPaths.main(TransAssembly_allProbPaths.java:912)
Graph is empty. Quitting.
Graph is empty. Quitting.
Trinity run failed. Must investigate error above.


We are sorry, commands in file: [failed_butterfly_commands.53937.txt] failed.  :-(

Exception in thread "main" java.io.FileNotFoundException: /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132660.trinity.reads.fa.out/chrysalis/Component_bins/Cbin0/c15.graph.allProbPaths.fasta (Cannot send after transport endpoint shutdown)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:101)
        at TransAssembly_allProbPaths.main(TransAssembly_allProbPaths.java:726)
Graph is empty. Quitting.
Graph is empty. Quitting.
Graph is empty. Quitting.
Graph is empty. Quitting.
Trinity run failed. Must investigate error above.

Is this memory again? I didn't find the bad_alloc notice anywhere in the error output.

This time I would prefer to repeat Phase 2 from a clean state. Which files should I remove? The Fb_* directories inside read_partitions, and recursive_trinity.cmds.completed?


Brian Haas

Mar 5, 2017, 12:34:58 PM
to Pablo García Fernández, trinityrnaseq-users, தினேஷ் குமார் சு


Hi Pablo,

It looks like there's some issue with the file system:


  sh: tmp.57080.stderr: Cannot send after transport endpoint shutdown
Error, cmd: mv /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/Fb_1/CBin_13
26/c132672.trinity.reads.fa.out/inchworm.K25.L25.fa.tmp /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSE
MBLY_fatnode/read_partitions/Fb_1/CBin_1326/c132672.trinity.reads.fa.out/inchworm.K25.L25.fa 2>tmp.57080.stderr


If you want to try to remove partially run outputs, you could run 'find' to capture the output directories and remove them:

find /mnt/lustre/scratch/home/csic/pma/pgf/PA_Trinity_FINAL_ASSEMBLY_fatnode/read_partitions/ -type d -regex ".*trinity.reads.fa.out"

If the above finds the output directories (and only those), you could tack on an -exec clause to remove them:

      -exec rm -rf {} \;

(but do this very carefully ...   as these operations can be dangerous if not done just right).

~b

Pablo García Fernández

Mar 5, 2017, 4:52:24 PM
to trinityrnaseq-users, pablo...@gmail.com, dinesh...@gmail.com
Oh, then I'll ask the technical support team of my HPC service about that. Thanks a lot.

Have a nice week

Pablo