Trinity error - killed at insilico_read_normalization.pl


Eirlys Tysall

Sep 8, 2022, 5:14:09 AM
to trinityrnaseq-users
Hello,

I am a bit stuck with my Trinity run - it keeps getting killed at the same stage and I am not sure why.

My command:

Trinity \
--seqType fq \
--left /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz \
--right /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz \
--SS_lib_type RF \
--CPU 32 \
--full_cleanup \
--output furcifer_3ind_trinity \
--max_memory 30G |& tee furcifer_3ind_trinity.log

And the output:

Trinity-v2.13.2



Left read files: $VAR1 = [
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz',
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz',
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz'
        ];
Right read files: $VAR1 = [
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz',
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz',
          '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz'
        ];
Trinity version: Trinity-v2.13.2
** NOTE: Latest version of Trinity is Trinity-v2.14.0, and can be obtained at:
    https://github.com/trinityrnaseq/trinityrnaseq/releases

Thursday, September 8, 2022: 10:03:29    CMD: java -Xmx64m -XX:ParallelGCThreads=2  -jar /usr/local/bin/util/support_scripts/ExitTester.jar 0
Thursday, September 8, 2022: 10:03:30    CMD: java -Xmx4g -XX:ParallelGCThreads=2  -jar /usr/local/bin/util/support_scripts/ExitTester.jar 1


----------------------------------------------------------------------------------
-------------- Trinity Phase 1: Clustering of RNA-Seq Reads  ---------------------
----------------------------------------------------------------------------------

---------------------------------------------------------------
------------ In silico Read Normalization ---------------------
-- (Removing Excess Reads Beyond 200 Coverage --
---------------------------------------------------------------

# running normalization on reads: $VAR1 = [
          [
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz',
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz',
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz'
          ],
          [
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz',
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz',
            '/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz'
          ]
        ];


Thursday, September 8, 2022: 10:03:30    CMD: /usr/local/bin/util/insilico_read_normalization.pl --seqType fq --JM 30G  --max_cov 200 --min_cov 1 --CPU 32 --output /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/insilico_read_normalization --max_CV 10000  --SS_lib_type RF  --left /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz --right /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz --pairs_together  --PARALLEL_STATS  
-prepping seqs
Converting input files. (both directions in parallel)
left file exists, nothing to do
right file exists, nothing to do
Done converting input files.
-kmer counting.
-generating stats files
-defining normalized reads
CMD: /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats
-opening left.fa.K25.stats.sort
-opening right.fa.K25.stats.sort
-done opening files.
bash: line 1: 3933623 Killed                  /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats
Error, cmd: /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats died with ret 35072 at /usr/local/bin/util/insilico_read_normalization.pl line 795.
Error, cmd: /usr/local/bin/util/insilico_read_normalization.pl --seqType fq --JM 30G  --max_cov 200 --min_cov 1 --CPU 32 --output /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/insilico_read_normalization --max_CV 10000  --SS_lib_type RF  --left /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz --right /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz --pairs_together  --PARALLEL_STATS   died with ret 512 at /usr/local/bin/Trinity line 2863.
    main::process_cmd("/usr/local/bin/util/insilico_read_normalization.pl --seqType "...) called at /usr/local/bin/Trinity line 3416
    main::normalize("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., 200, ARRAY(0x563b672e48a8), ARRAY(0x563b672e48d8)) called at /usr/local/bin/Trinity line 3356
    main::run_normalization(200, ARRAY(0x563b672e48a8), ARRAY(0x563b672e48d8)) called at /usr/local/bin/Trinity line 1394


I have tried messing around with different memory levels, as this was initially causing an error earlier in the run; setting --max_memory 30G seemed to solve that one, but I am a bit stumped by this error.

Thanks for any help!

Best wishes,
Eirlys

Tiago Hori

Sep 8, 2022, 6:18:30 AM
to Eirlys Tysall, trinityrnaseq-users
How odd. 

The Perl wrapper is passing an array to the function run_normalization instead of an array reference. Perl does not like that; you need to pass a reference to the subroutine (similar to C pointers). Why it is doing that, I can't quite figure out. Anyone?

T.

“If equal affection cannot be, let the more loving one be me” W.H Auden 


Brian Haas

Sep 8, 2022, 9:19:43 AM
to Tiago Hori, Eirlys Tysall, trinityrnaseq-users
This is a bit surprising.  It sounds like a memory issue.  You could try running this directly:


  /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats

Just be sure you're in the output directory that contains the stats.sort files. To find them, run

    find . | grep sort

in the Trinity output directory.

Then, watch the memory usage using 'top' to see if it's going out of control.

If you gzip these .sort files and make them available to me, I could give it a try myself and see if there's some weird issue there.

bhaas at broadinstitute dot org

best,

~brian




--
Brian J. Haas
The Broad Institute
http://broadinstitute.org/~bhaas

Eirlys Tysall

Sep 8, 2022, 4:50:45 PM
to trinityrnaseq-users
Hi Brian,

Thanks for your reply.

I thought it might be a memory issue, but I should have around 150GB available to me - though I've set --max_memory much lower.

I tried running that directly and the memory didn't do anything crazy - it didn't move at all really.

I have sent you the gzipped .sort files, thanks very much for your help.

Best,
Eirlys

eirlys tysall

Sep 8, 2022, 4:58:55 PM
to trinityrnaseq-users
Actually maybe it is the memory - when I check the stats after an attempted run:

Job ID: 2444615
State: FAILED (exit code 2)
Nodes: 1
Cores per node: 56
CPU Utilized: 01:02:31
CPU Efficiency: 9.47% of 10:59:52 core-walltime
Job Wall-clock time: 00:11:47
Memory Utilized: 177.26 GB
Memory Efficiency: 94.78% of 187.03 GB

Should the memory usage be that high?


Tiago Hori

Sep 8, 2022, 5:01:31 PM
to eirlys tysall, trinityrnaseq-users
Normalization is probably the most RAM intensive piece of the process. How many read pairs do you have?

T.

Eirlys Tysall

Sep 9, 2022, 5:57:04 AM
to trinityrnaseq-users
Hi,

I have around 115 million read pairs.

Thanks,
Eirlys

Eirlys Tysall

Sep 9, 2022, 6:00:14 AM
to trinityrnaseq-users
Even when I run the job on a different partition with more memory (374 GB) I seem to get the same problem:

Nodes: 1
Cores per node: 56
CPU Utilized: 06:18:54
CPU Efficiency: 41.05% of 15:23:04 core-walltime
Job Wall-clock time: 00:16:29
Memory Utilized: 363.38 GB
Memory Efficiency: 97.15% of 374.06 GB

Tiago Hori

Sep 9, 2022, 6:09:32 AM
to Eirlys Tysall, trinityrnaseq-users
How odd. I ran about 250 million with 256 GB. Have you tried lowering the number of CPUs? Sometimes too many threads combined with a high max memory will overwhelm the system, with each thread trying to claim more for itself.

T.

Eirlys Tysall

Sep 9, 2022, 6:11:40 AM
to trinityrnaseq-users
I will try that and let you know, thank you!

Eirlys Tysall

Sep 9, 2022, 6:40:21 AM
to trinityrnaseq-users
Hi,

I have tried with lower numbers of CPUs (6, 15, 32) and no change - it still dies in the same way.

Thanks,
Eirlys
 

Brian Haas

Sep 9, 2022, 7:35:47 AM
to Eirlys Tysall, trinityrnaseq-users
Thx for sending the files, Eirlys.  I'll take a look and get back to you shortly.

best,

~brian


Brian Haas

Sep 9, 2022, 8:08:55 AM
to Eirlys Tysall, trinityrnaseq-users
Hi Eirlys,

So the issue here is that the left.fa.K25.stats.sort file is corrupt - something bad happened during the unix sort step. Rare, but I've seen it happen before.

Solution (hopefully): delete those .sort files and the .sort.ok files, then rerun your original Trinity command. It'll redo the sorting and, assuming the problem wasn't further upstream in the process, it'll continue on just fine.
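A minimal sketch of that cleanup, with the filenames taken from the log above. The demo below sets up a scratch copy of the files so it can be run safely anywhere; on a real run you would instead `cd` into the `insilico_read_normalization/` output directory and run only the `rm` lines.

```shell
# Demo setup: a scratch stand-in for the insilico_read_normalization/
# output directory (replace with the real directory on an actual run).
workdir=$(mktemp -d)
cd "$workdir"
touch left.fa.K25.stats.sort left.fa.K25.stats.sort.ok
touch right.fa.K25.stats.sort right.fa.K25.stats.sort.ok

# The actual fix: delete the sorted stats files and their .ok checkpoint
# markers; without the .ok files, the next Trinity run redoes the sort step.
rm -f left.fa.K25.stats.sort left.fa.K25.stats.sort.ok
rm -f right.fa.K25.stats.sort right.fa.K25.stats.sort.ok
```

Trinity's resume logic keys off the `.ok` checkpoint files, so removing them alongside the data files is what triggers the redo.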

Let's see how that goes.

best,

~brian

Eirlys Tysall

Sep 9, 2022, 8:12:58 AM
to trinityrnaseq-users
Hi Brian,

Great thank you, I will re-try and let you know how it goes!

Best,
Eirlys

Eirlys Tysall

Sep 9, 2022, 10:48:50 AM
to trinityrnaseq-users
So I'm still running into an issue, but I have more information!

Friday, September 9, 2022: 15:14:26    CMD: /usr/local/bin/util/insilico_read_normalization.pl --seqType fq --JM 50G  --max_cov 200 --min_cov 1 --CPU 46 --output /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/insilico_read_normalization --max_CV 10000  --SS_lib_type RF  --left /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz --right /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz --pairs_together  --PARALLEL_STATS  

-prepping seqs
Converting input files. (both directions in parallel)
left file exists, nothing to do
right file exists, nothing to do
Done converting input files.
-kmer counting.
-generating stats files
-sorting each stats file by read name.
CMD: head -n1 left.fa.K25.stats > left.fa.K25.stats.sort && tail -n +2 left.fa.K25.stats | /usr/bin/sort --parallel=46 -k1,1 -T . -S 25G >> left.fa.K25.stats.sort
CMD: head -n1 right.fa.K25.stats > right.fa.K25.stats.sort && tail -n +2 right.fa.K25.stats | /usr/bin/sort --parallel=46 -k1,1 -T . -S 25G >> right.fa.K25.stats.sort
CMD finished (39 seconds)
CMD finished (41 seconds)
CMD: touch left.fa.K25.stats.sort.ok
CMD finished (0 seconds)
CMD: touch right.fa.K25.stats.sort.ok
CMD finished (0 seconds)

-defining normalized reads
CMD: /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats
-opening left.fa.K25.stats.sort
-opening right.fa.K25.stats.sort
-done opening files.
Error, line: [D00137:397:H5FHKBCXY:2:1104:13395:21848/1    7868    7439.12    1863.07    00137:397:H5FHKBCXY:2:1104:13232:21969/1    1772    1729.11    130.626    thread:4
] $VAR1 = [
          'D00137:397:H5FHKBCXY:2:1104:13395:21848/1',
          '7868',
          '7439.12',
          '1863.07',
          '00137:397:H5FHKBCXY:2:1104:13232:21969/1',
          '1772',
          '1729.11',
          '130.626',
          'thread:4'
        ];
 is lacking 5 fields: $VAR1 = [
          'acc',
          'median_cov',
          'mean_cov',
          'stdev',
          'tid'
        ];
 at /usr/local/bin/util/support_scripts/../../PerlLib/DelimParser.pm line 162, <$left_fh> line 31130215.
    DelimParser::Reader::get_row(DelimParser::Reader=HASH(0x555f3d3fb588)) called at /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl line 145
Error, cmd: /usr/local/bin/util/..//util/support_scripts//nbkc_merge_left_right_stats.pl --left left.fa.K25.stats.sort --right right.fa.K25.stats.sort --sorted > pairs.K25.stats died with ret 6400 at /usr/local/bin/util/insilico_read_normalization.pl line 795.
Error, cmd: /usr/local/bin/util/insilico_read_normalization.pl --seqType fq --JM 50G  --max_cov 200 --min_cov 1 --CPU 46 --output /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/insilico_read_normalization --max_CV 10000  --SS_lib_type RF  --left /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.left.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.left.fastq.gz --right /home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S17.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S18.right.fastq.gz,/home/eet35/rds/hpc-work/transcriptome/data/furcifer_4/reads.ALL.S46.right.fastq.gz --pairs_together  --PARALLEL_STATS   died with ret 512 at /usr/local/bin/Trinity line 2863.

    main::process_cmd("/usr/local/bin/util/insilico_read_normalization.pl --seqType "...) called at /usr/local/bin/Trinity line 3416
    main::normalize("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., 200, ARRAY(0x55e05b996890), ARRAY(0x55e05b9968a8)) called at /usr/local/bin/Trinity line 3356
    main::run_normalization(200, ARRAY(0x55e05b996890), ARRAY(0x55e05b9968a8)) called at /usr/local/bin/Trinity line 1394


I checked for read 'D00137:397:H5FHKBCXY:2:1104:13395:21848' in my input file and it looks completely normal - in the left.fa file it also looks normal:

>D00137:397:H5FHKBCXY:2:1104:13395:21848/1
ACTTTGGATCAGTTCACCCAAATCCTGAAGGAAGCTGGAGACAAGCTGGTGGTGGTGGACTTCACAGCCTCCTGGTGTGGCCCCTGTAAACAGATTGGCCCAGAATTTGAGAAACTGGCGGCCTTGGCTGAAAACAAGAACGTGGTTTTCC

In the left.fa.K25.stats.sort file it looks like this:

D00137:397:H5FHKBCXY:2:1104:13394:44429/1    1850    1981.49    257.075    thread:4
D00137:397:H5FHKBCXY:2:1104:13394:70187/1    60    54.9291    12.4328    thread:1
D00137:397:H5FHKBCXY:2:1104:13394:87430/1    14    13.8189    5.47783    thread:1
D00137:397:H5FHKBCXY:2:1104:13395:21848/1    7868    7439.12    1863.07    00137:397:H5FHKBCXY:2:1104:13232:21969/1    1772    1729.11    130.626    thread:4
D00137:397:H5FHKBCXY:2:1104:13395:21848/1    7868    7439.12    1863.07    thread:3

D00137:397:H5FHKBCXY:2:1104:13395:43697/1    135103    134098    23922.1    thread:3
D00137:397:H5FHKBCXY:2:1104:13395:46582/1    748143    717510    149638    thread:2
D00137:397:H5FHKBCXY:2:1104:13395:55510/1    106    105.748    5.75142    thread:4

Is the problem that two lines are merging?

Best,
Eirlys

Brian Haas

Sep 9, 2022, 11:07:37 AM
to Eirlys Tysall, trinityrnaseq-users
Yes, so it looks like a problem happened during the earlier generation of the stats files.

Try removing the .stats files themselves along with the .stats.ok files.

Then, rerun. It'll take longer this time because it's going to regenerate those stats files, and this is more computationally intensive.
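A sketch of this second cleanup, again using the filenames from the log above. As before, the demo builds a scratch copy so it can run anywhere; on a real run, only the `rm` line in the `insilico_read_normalization/` directory is needed.

```shell
# Demo setup: scratch stand-in for insilico_read_normalization/.
workdir=$(mktemp -d)
cd "$workdir"
touch left.fa.K25.stats left.fa.K25.stats.ok left.fa.K25.stats.sort
touch right.fa.K25.stats right.fa.K25.stats.ok right.fa.K25.stats.sort

# The fix: remove the kmer stats files and their .ok checkpoint markers so
# the next run regenerates them.  The glob also sweeps up the derived .sort
# files, which are rebuilt from the fresh stats anyway.
rm -f left.fa.K25.stats* right.fa.K25.stats*
```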

Let's see how it goes.  

best,

~b

Eirlys Tysall

Sep 10, 2022, 5:55:29 PM
to trinityrnaseq-users
So the good news is that seems to have worked - it gets past the problem stage now and all seems good. The bad news is that I'm running into another error further down the line:

Error encountered::  <!----
CMD: mv /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out/inchworm.fa.tmp /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out/inchworm.fa 2>tmp.8310.1662846075.stderr

Errmsg:
mv: cannot stat '/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out/inchworm.fa.tmp': No such file or directory

--->

Error, cmd: mv /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out/inchworm.fa.tmp /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out/inchworm.fa 2>tmp.8310.1662846075.stderr died with ret 256  at /usr/local/bin/PerlLib/Pipeliner.pm line 187.
        Pipeliner::run(Pipeliner=HASH(0x5624500a5330)) called at /usr/local/bin/util/support_scripts/../../Trinity line 2673
        eval {...} called at /usr/local/bin/util/support_scripts/../../Trinity line 2663
        main::run_inchworm("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "F", "", 25, 0) called at /usr/local/bin/util/support_scripts/../../Trinity line 1780
        main::run_Trinity() called at /usr/local/bin/util/support_scripts/../../Trinity line 1444
        eval {...} called at /usr/local/bin/util/support_scripts/../../Trinity line 1443



If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data set or run Trinity on a server with more memory available.

** The inchworm process failed.warning, cmd: /usr/local/bin/util/support_scripts/../../Trinity --single "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa" --output "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_599/c59970.trinity.reads.fa.out" --CPU 1 --max_memory 1G --run_as_paired --SS_lib_type F --seqType fa --trinity_complete --full_cleanup --no_salmon   failed with ret: 256, going to retry.
succeeded(5248)   48.6575% completed. 

Apologies for the continued problems!

Thanks for your help.

Best,
Eirlys

Brian Haas

Sep 10, 2022, 6:31:34 PM
to Eirlys Tysall, trinityrnaseq-users
I think it may have automatically recovered.  If it encounters failures during this stage, it tries a few more times to push it through.  If it's stressing the file system, then you can have sporadic 'hiccups' like this.   If it encounters any failures that it can't resolve on its own, then you'll see it added to a 'failure' counter along with the 'succeeded' counter and we can explore it later once the rest of the jobs complete.

best,

~b

Eirlys Tysall

Sep 11, 2022, 9:27:27 AM
to trinityrnaseq-users
Hi Brian,

Here's the final error message:

Error encountered::  <!----
CMD: mv /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out/inchworm.fa.tmp /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out/inchworm.fa 2>tmp.101515.1662902327.stderr

Errmsg:
mv: cannot stat '/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out/inchworm.fa.tmp': No such file or directory

--->

Error, cmd: mv /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out/inchworm.fa.tmp /rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out/inchworm.fa 2>tmp.101515.1662902327.stderr died with ret 256  at /usr/local/bin/PerlLib/Pipeliner.pm line 187.
        Pipeliner::run(Pipeliner=HASH(0x56332cecf6a8)) called at /usr/local/bin/util/support_scripts/../../Trinity line 2673

        eval {...} called at /usr/local/bin/util/support_scripts/../../Trinity line 2663
        main::run_inchworm("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "F", "", 25, 0) called at /usr/local/bin/util/support_scripts/../../Trinity line 1780
        main::run_Trinity() called at /usr/local/bin/util/support_scripts/../../Trinity line 1444
        eval {...} called at /usr/local/bin/util/support_scripts/../../Trinity line 1443



If it indicates bad_alloc(), then Inchworm ran out of memory.  You'll need to either reduce the size of your data set or run Trinity on a server with more memory available.

** The inchworm process failed.warning, cmd: /usr/local/bin/util/support_scripts/../../Trinity --single "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa" --output "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinity_3ind/furcifer_3ind_trinity/read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out" --CPU 1 --max_memory 1G --run_as_paired --SS_lib_type F --seqType fa --trinity_complete --full_cleanup --no_salmon   failed with ret: 256, going to retry.
succeeded(12602), failed(3)   100% completed.        

We are sorry, commands in file: [FailedCommands] failed.  :-(

Error, cmd: /usr/local/bin/trinity-plugins/BIN/ParaFly -c recursive_trinity.cmds -CPU 16 -v -shuffle  died with ret 256 at /usr/local/bin/Trinity line 2863.
        main::process_cmd("/usr/local/bin/trinity-plugins/BIN/ParaFly -c recursive_trini"...) called at /usr/local/bin/Trinity line 3597
        main::run_partitioned_cmds("recursive_trinity.cmds") called at /usr/local/bin/Trinity line 2472
        main::run_recursive_trinity("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"...) called at /usr/local/bin/Trinity line 2215
        main::run_chrysalis("/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., 200, 500, "RF", "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"..., "/rds/user/eet35/hpc-work/transcriptome/data/furcifer_4/trinit"...) called at /usr/local/bin/Trinity line 1839
        main::run_Trinity() called at /usr/local/bin/Trinity line 1444
        eval {...} called at /usr/local/bin/Trinity line 1443

Trinity run failed. Must investigate error above.

Best,
Eirlys

Brian Haas

Sep 11, 2022, 9:56:30 AM
to Eirlys Tysall, trinityrnaseq-users
OK, it looks like 3 failed.

The first thing to try is to just rerun the original command and see if they push through.

If they still fail, then try deleting the 'trinity.reads.fa.out/' directories that correspond to each of the failed jobs (they will be apparent in the FailedCommands file).  Then, rerun the original Trinity command.  This will make it retry those jobs from their initial state instead of trying to resume them from a failed state.
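One way to script that cleanup, sketched here against a mocked-up `read_partitions/` tree and `FailedCommands` file. The paths are invented for the demo, and the `grep` pattern is an assumption based on the command lines shown earlier in this thread; check it against your own FailedCommands before deleting anything.

```shell
# Demo setup: mock read_partitions/ layout and a FailedCommands file naming
# one failed job (all paths invented for the demo).
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out
printf '%s\n' 'Trinity --single "read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa" --output "read_partitions/Fb_0/CBin_203/c20376.trinity.reads.fa.out" --CPU 1' > FailedCommands

# Pull each failed job's .fa.out output directory out of FailedCommands and
# delete it, so rerunning the original Trinity command restarts those jobs
# from their initial state.
grep -o '[^" ]*\.trinity\.reads\.fa\.out' FailedCommands | sort -u |
while read -r dir; do
    rm -rf "$dir"
done
```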

Let's see how that goes.

best,

~b


Eirlys Tysall

Sep 11, 2022, 1:16:25 PM
to trinityrnaseq-users
Hi Brian,

Deleting those directories worked, it's finally done! Thanks so much for all your help, really appreciate it!

Best,
Eirlys

Brian Haas

Sep 11, 2022, 7:28:33 PM
to Eirlys Tysall, trinityrnaseq-users
Excellent!  Glad it finally worked out.

best,

~brian
