Hi,
STAR quit because /tmp filled up during BAM sorting of a dozen or so files.
Is there any way to avoid rerunning the whole job and instead pick up where BAM sorting left off?
I'll either use another /scratch on my network (which will be slower) or extend /tmp (which I've just cleaned up).
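For the first option, here's roughly what I have in mind: generate the STAR temp directory name under the scratch mount instead of /tmp (the /scratch path below is just a placeholder for my actual network scratch):

```shell
# Placeholder for the network scratch mount (not my real path).
SCRATCH=${SCRATCH:-/scratch}
# STAR wants the --outTmpDir target to not exist yet, so generate an
# unused name under the scratch parent with mktemp -u rather than
# letting mktemp default to /tmp.
STAR_TMP=$(mktemp -u -d -p "$SCRATCH" STARtmp.XXXXXX)
echo "--outTmpDir $STAR_TMP"
```

This just swaps the parent directory in the `mktemp` call I'm already using, so the rest of the command stays the same.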
Below are my full command and the full trace, in case they help you help me recover from this.
Thanks,
Malcolm
nohup STAR \
--runThreadN 16 \
--twopassMode Basic \
--twopass1readsN -1 \
--outFileNamePrefix germline/STAR/out/ \
--genomeDir germline/STAR/genome_${ReadLength} \
--genomeLoad NoSharedMemory \
--runMode alignReads \
--limitBAMsortRAM 20000000000 \
--readFilesIn ${paths} \
--readFilesCommand zcat \
--outReadsUnmapped Fastx \
--outSAMtype BAM SortedByCoordinate \
--outSAMreadID Number \
--outWigType bedGraph \
--outWigStrand Unstranded \
--outWigNorm RPM \
--outFilterMultimapNmax 1 \
--outFilterMultimapScoreRange 1 \
--outSAMprimaryFlag OneBestScore \
--limitSjdbInsertNsj 2000000 \
--limitOutSJcollapsed 2000000 \
--outSJfilterCountUniqueMin 5 2 2 2 \
--outSJfilterCountTotalMin 999999 999999 999999 999999 \
--alignSJoverhangMin 10 \
--alignSJDBoverhangMin \
--outTmpDir $(mktemp -d -u) &
Nov 30 11:44:06 ..... Started STAR run
Nov 30 11:44:06 ..... Loading genome
Nov 30 11:44:21 ..... Started 1st pass mapping
Nov 30 11:45:19 ..... Started STAR run
Nov 30 11:45:19 ..... Loading genome
Nov 30 11:45:34 ..... Started 1st pass mapping
Nov 30 13:44:57 ..... Finished 1st pass mapping
Nov 30 13:45:01 ..... Inserting junctions into the genome indices
Nov 30 13:49:05 ..... Started mapping
Nov 30 16:46:31 ..... Started sorting BAM
EXITING because of FATAL ERROR: number of bytes expected from the BAM bin does not agree with the actual size on disk:
3028299982 1154658304 14
Nov 30 16:46:32 ...... FATAL ERROR, exiting