Star process killed


Varun Gupta

Sep 30, 2013, 4:46:04 PM
to rna-...@googlegroups.com
Hi Alex,
Hope you are doing well.

I ran the script below, and I have been running it quite often:

#!/bin/bash

#$ -cwd -V -S /bin/bash -j y -b y -l mf=28G -l num_proc=8 -N mm9_M_GSM929774

set -o errexit -o nounset -o allexport -o pipefail

/apps1/star/2.3.1o/intel/bin/STAR --runThreadN 8 --genomeDir /cork/vgupta12/Boris_data/GENOMES/mm9_masked_STAR_GENOME/  --readFilesIn ../wgEncodeCaltechRnaSeqC2c12C3hFR2x75Th131Il200FastqRd1Rep1.fastq  ../wgEncodeCaltechRnaSeqC2c12C3hFR2x75Th131Il200FastqRd2Rep1.fastq --outSAMattributes All   --outFilterMultimapNmax 10 --outFilterMismatchNmax 6

## The STAR command above writes its output to Aligned.out.sam; the loop
## below converts each SAM file to a sorted, indexed BAM. It handles more
## than one file; comment it out if you don't need it.

for i in *.sam; do
    ## $i is [file].sam
    h=$(basename "$i" .sam)
    samtools view -bS "$i" > "${h}.bam"
    samtools sort "${h}.bam" "${h}_sorted"
    samtools index "${h}_sorted.bam" "${h}_sorted.bam.bai"
done

rm Aligned.out.sam Aligned.out.bam

exit 0



I have run this command many times, but this time I got the following error and my process was killed:

Sep 30 15:41:03 ..... Started STAR run
./script.sh: line 7: 32471 Killed                  /apps1/star/2.3.1o/intel/bin/STAR --runThreadN 8 --genomeDir /cork/vgupta12/Boris_data/GENOMES/mm9_masked_STAR_GENOME/ --readFilesIn ../wgEncodeCaltechRnaSeqC2c12C3hFR2x75Th131Il200FastqRd1Rep1.fastq ../wgEncodeCaltechRnaSeqC2c12C3hFR2x75Th131Il200FastqRd2Rep1.fastq --outSAMattributes All --outFilterMultimapNmax 10 --outFilterMismatchNmax 6



I checked all the paths on line 7 and there was no mistake in any of them.
Moreover, the Log.out file looked perfectly fine.

What went wrong here?


Regards
Varun

Alexander Dobin

Oct 1, 2013, 7:03:58 PM
to rna-...@googlegroups.com
Hi Varun,

did you run this under some kind of queuing system, like SGE?
My guess would be that the job was killed because it used more resources than you requested, for example it exceeded the RAM allocation.
The queuing system should produce a log explaining why the job was killed. If your nodes have enough RAM, I would try increasing the requested RAM by a few GB, just in case.
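A sketch of how you might check this under SGE and adjust the request (the `<jobid>` placeholder and the 40G figure are assumptions; use your actual job ID and a value your nodes can satisfy):

```shell
# After the job dies, ask SGE's accounting log why it failed and how
# much memory it actually used at peak:
qacct -j <jobid> | egrep 'failed|exit_status|maxvmem'

# If maxvmem approached or exceeded the 28G you requested, raise the
# memory directive in the script header, e.g.:
#$ -cwd -V -S /bin/bash -j y -b y -l mf=40G -l num_proc=8 -N mm9_M_GSM929774
```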

Cheers
Alex