Hello,
I am trying to run SMuFin on a cluster at an HPC center. The cluster has an old OpenMPI version installed (OpenMPI/1.4.3, older than the recommended 1.5.X). I would prefer not to install OpenMPI locally, as doing so might interfere with the rest of the cluster.
I am working on the chicken genome (genome size: ~1 GB) at a sequencing depth of around 18X. According to the formula, the run will use (1000 * 18 * 2.3) * 2 = 82,800 MB of memory. Since the job is assigned to nodes randomly, the exact memory available on each node cannot be determined in advance. I requested 40 nodes and 160 GB of memory (see my PBS file below), but my job was killed because memory utilization exceeded the requested capacity.
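To double-check the estimate above, here is a minimal sketch of the same arithmetic, using the formula as quoted (genome size in MB × depth × 2.3, doubled); the variable names are my own, not from SMuFin:

```python
# Memory estimate from the formula quoted above (values for this chicken-genome run).
genome_size_mb = 1000   # ~1 GB genome, expressed in MB
depth = 18              # sequencing depth (18X)
factor = 2.3            # constant from the SMuFin memory formula
mem_mb = genome_size_mb * depth * factor * 2

print(f"Estimated memory: {mem_mb:.0f} MB (~{mem_mb / 1024:.0f} GB)")
```

This gives 82,800 MB, i.e. roughly 81 GB in total, which is indeed below the 160 GB requested for the whole job.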
My questions are:
1. Is it OK to use an older version of OpenMPI? Should I adjust any parameters?
2. Should I request more nodes to solve the problem?
Thanks in advance,
Kind regards,
Hongen
########################### PBS file ###########################
#!/bin/bash -login
### define resources needed:
### walltime - how long you expect the job to run
#PBS -l walltime=7:00:00
### nodes:ppn - how many nodes & cores per node (ppn) that you require
#PBS -l nodes=40:ppn=1
### mem: amount of memory that the job will need
#PBS -l mem=160gb
### load necessary modules, e.g.
module load powertools
### change to the working directory where your code is located
cd /mnt/home/hongenxu/smufin
### call your executable
mpirun -np 40 ./SMuFin --ref ref_genome/genome.fa --normal_fastq_1 normal_fastqs_1.S3 --normal_fastq_2 normal_fastqs_2.S3 --tumor_fastq_1 tumor_fastqs_1.S3 --tumor_fastq_2 tumor_fastqs_2.S3 --patient_id S3 --cpus_per_node 1
qstat -f ${PBS_JOBID}