segmentation fault

Federico Ansaloni

Jun 11, 2018, 5:55:02 PM
to rna-star
Hi,

I'm using STAR to align human PE RNA-seq reads (101x2 bp) against a FASTA file made up of 185,825 sequences (total length 305,601,694 nt).

I'm building the index using the following command:
STAR --runThreadN 20 --runMode genomeGenerate --genomeDir $wd --genomeFastaFiles $fasta --genomeSAindexNbases 13 --genomeChrBinNbits 10
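As a sanity check on the two non-default parameters above, the STAR manual recommends --genomeSAindexNbases = min(14, log2(GenomeLength)/2 - 1) and, for genomes with many references, --genomeChrBinNbits = min(18, log2(GenomeLength/NumberOfReferences)). A quick awk sketch (the formulas are taken from the manual; the numbers are this genome's) reproduces the values used in the command:

```shell
# Recommended STAR index parameters for a 305,601,694 nt genome
# split across 185,825 sequences (formulas from the STAR manual).
awk 'BEGIN{
  L = 305601694; N = 185825;
  sa = int(log(L)/log(2)/2 - 1); if (sa > 14) sa = 14;   # --genomeSAindexNbases
  cb = int(log(L/N)/log(2));     if (cb > 18) cb = 18;   # --genomeChrBinNbits
  print sa, cb
}'
```

This prints `13 10`, matching the values in the genomeGenerate command, so the index parameters themselves look fine.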

Then I'm mapping the reads using:
STAR --outSAMunmapped None --outSAMprimaryFlag AllBestScore --outFilterScoreMin 196 --outMultimapperOrder Random --outStd SAM --runThreadN 20 --genomeDir $wd --readFilesIn $reads1 $reads2 > Aligned.sam

If I create the index and then analyze just one sample, everything works fine.

The error occurs when I try to analyze several RNA-seq samples at the same time on different computing nodes of the cluster, all using the same index.
I have 16 samples; if I run 16 jobs on 16 different nodes, each mapping against the same index, I get a segmentation fault ("Segmentation fault      (core dumped)").

In my opinion, each node should load its own copy of the index into memory and then start the mapping. I think the log files confirm this: each of the 16 log files contains the line "Finished loading the genome". Are my assumptions wrong?

What's happening? Could you please help me?


Thanks,
Federico

Federico Ansaloni

Jun 12, 2018, 5:11:04 AM
to rna-star
UPDATE
Using version 2.6.0c instead of 2.6.0a, it works fine with no errors.

Brett Vanderwerff

Aug 19, 2018, 2:03:18 PM
to rna-...@googlegroups.com
I saw something similar.

On 2.6.0a I was generating the (human) genome index with:

--runThreadN 6 \
--runMode genomeGenerate \
--genomeDir $genome_index_dir \
--genomeFastaFiles $genome_sequence_dir \
--sjdbGTFfile $genome_annotation_dir \
--sjdbOverhang 100 \
--genomeSAsparseD 2 \
--limitIObufferSize 80000000
and mapping paired-end reads with:

--genomeDir $genome_index_dir \
--runThreadN 1 \
--readFilesIn $fq1 $fq2 \
--readFilesCommand zcat \
--outSAMtype BAM Unsorted \
--outFileNamePrefix $bam_output_dir$base

My mapping speed was slow (about 14M reads/hr), and my unique-mapping rate was not the best (~70%). Sometimes a pair of files would run to completion, but often I ran into either this error:

EXITING because of fatal error: buffer size for SJ output is too small
Solution: increase input parameter --limitOutSJcollapsed

or this error, like yours:

("Segmentation fault      (core dumped)")

All I did was move from 2.6.0a to 2.6.0c as you suggested here, and both errors went away without changing any run settings. My speed almost doubled and my unique-mapping rate went up 20%.
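For anyone who hits the SJ buffer error above and can't upgrade, the error message's own suggestion is to raise --limitOutSJcollapsed (STAR's default is 1,000,000 collapsed junctions). A hypothetical rerun might look like the following; 5000000 is an illustrative value, not a tested recommendation, and the variable names match the command earlier in this post:

```shell
STAR --genomeDir $genome_index_dir \
     --runThreadN 1 \
     --readFilesIn $fq1 $fq2 \
     --readFilesCommand zcat \
     --outSAMtype BAM Unsorted \
     --outFileNamePrefix $bam_output_dir$base \
     --limitOutSJcollapsed 5000000
```

Raising the limit increases memory use, so it's worth increasing it in steps rather than jumping straight to a very large value.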