When I submit jobs to the cluster I'm using, I can use something like the following (or the corresponding directives inside the script itself):

qsub -l h_vmem=2G -l h_rt=00:01:00 -cwd -N foo script.sh

I think I saw something in the Nextflow docs about setting the job name for executors, although I can't find it now. My question, though, is about the -l h_vmem and -l h_rt options. Nextflow has the memory directive, but I'm not sure whether this directive requests memory per slot (as h_vmem does) or the total memory for the job.
Also, the time directive would be very handy combined with Nextflow's dynamic directive evaluation. Is time the equivalent of -l h_rt in the SGE executor?
Finally, how can I tell Nextflow where to run? (Something analogous to -cwd or -wd)
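For reference, a minimal nextflow.config sketch of how those qsub options map onto Nextflow directives (the h_vmem line is an assumption: since h_vmem is a per-slot limit on many SGE sites, it may need to be passed explicitly through clusterOptions rather than relying on the memory directive alone):

```groovy
// nextflow.config -- sketch for an SGE cluster
process {
    executor = 'sge'
    time     = '1m'      // translated to -l h_rt=00:01:00 by the SGE executor
    memory   = '2 GB'    // memory request handled by the executor
    // per-slot hard limit, if your site enforces h_vmem (assumption):
    clusterOptions = '-l h_vmem=2G'
}
```

As for -cwd: Nextflow stages and runs each task in its own unique directory under the pipeline work directory (workDir in the config, or the -w command-line option), so the submission working directory is managed for you.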
process my_process {
    memory 16.GB
    clusterOptions = "-l h_vmem=${memory.toString().replaceAll(/[\sB]/,'')}"
    ...
}

results in -l h_vmem=16G in my .command.run file.
It breaks when I try to set memory dynamically, like this:

process my_process {
    memory { 16.GB * task.attempt }
    clusterOptions = "-l h_vmem=${memory.toString().replaceAll(/[\sB]/,'')}"
    ...
}
This results in -l h_vmem=_nf_script_033fec75$_run_closure1$_closure8@7e3060d8 in my .command.run file.
It seems that once memory is set with a closure, the bare memory reference no longer resolves to a value: the string interpolation picks up the unevaluated closure object itself (hence the Closure@... default toString), because dynamic directives are only resolved per task, as task.memory.
I actually made an issue for this, but I'll close it if you'd prefer.
Thanks
Owen
process my_process {
    cpus 2
    memory { 16.GB * task.attempt }
    // h_vmem is per slot, so divide the total request by the slot count
    clusterOptions { "-l h_vmem=${task.memory.toMega().intdiv(task.cpus)}M" }
    errorStrategy { task.exitStatus == 140 ? 'retry' : 'terminate' }
    maxRetries 3
    maxErrors -1
    ...
}
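If the same behaviour is wanted pipeline-wide, the directives can be moved into nextflow.config (a sketch; note that exit status 140 as the over-limit kill code is specific to how a given site's SGE is configured, so check what your cluster actually returns):

```groovy
// nextflow.config -- retry-and-escalate pattern applied to every process (sketch)
process {
    executor = 'sge'
    cpus     = 2
    memory   = { 16.GB * task.attempt }   // grows on each retry: 16 GB, 32 GB, 48 GB, ...
    // h_vmem is per slot, so divide the total request by the slot count:
    clusterOptions = { "-l h_vmem=${task.memory.toMega().intdiv(task.cpus)}M" }
    errorStrategy  = { task.exitStatus == 140 ? 'retry' : 'terminate' }
    maxRetries     = 3
}
```

Each retry re-evaluates the dynamic directives with the new task.attempt, so both the memory directive and the derived h_vmem request escalate together.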
--
You received this message because you are subscribed to the Google Groups "Nextflow" group.
Visit this group at https://groups.google.com/group/nextflow.