submit_batch_job() equivalent to sbatch --cpus 4

Alan Hoyle

Jul 6, 2022, 7:50:44 PM
to pys...@googlegroups.com
How does one specify the number of CPU cores to request when using pyslurm.job().submit_batch_job() ?

I'm trying something like this:

mem = 32000
cpus = 4
partition = 'mypartition'
job_name = "sweet_job_name"

awesome_job_opts = {
    'script': sweet_script_name,
    'realmem': mem,
    'cpus-per-task': cpus,
    'partition': partition,
    'job_name': job_name,
    'output': os.path.join(job_dir, 'logs', job_name + '.o%j'),
    'error': os.path.join(job_dir, 'logs', job_name + '.e%j'),
}

pyslurm.job().submit_batch_job(awesome_job_opts)

and that seems to do most of what I want when I run squeue:

$ squeue -o "jobid: %A name: %j cpus: %C ram:%m %P %t %M %o" |grep demux
jobid: 12345 name: sweet_job_name cpus: 1 ram:32000M mypartition R 1-02:51:10 (null)

I'm getting everything I want except for the number of CPUs. What do I need to change to get that to work?

-- 
  -  Alan Hoyle  -  al...@alanhoyle.com  -  http://www.alanhoyle.com/  -

Alan Hoyle

Jul 11, 2022, 4:03:39 PM
to pyslurm
Answering my own question:

The problem is that it should be:

'cpus_per_task': cpus,

with underscores (_), not the dashes I had previously.
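For completeness, here is the full options dict from my first post with that key corrected. This is a sketch: the values for sweet_script_name and job_dir are placeholders, and the submit call is commented out since it only works on a host with Slurm and pyslurm installed.

```python
import os

# Hypothetical placeholder values standing in for those in the original post.
sweet_script_name = "sweet_script.sh"
job_dir = "/path/to/job"

mem = 32000
cpus = 4
partition = 'mypartition'
job_name = "sweet_job_name"

# pyslurm expects underscore-separated option keys ('cpus_per_task'),
# unlike sbatch's dash-separated long options ('--cpus-per-task').
awesome_job_opts = {
    'script': sweet_script_name,
    'realmem': mem,
    'cpus_per_task': cpus,  # underscores, not 'cpus-per-task'
    'partition': partition,
    'job_name': job_name,
    'output': os.path.join(job_dir, 'logs', job_name + '.o%j'),
    'error': os.path.join(job_dir, 'logs', job_name + '.e%j'),
}

# On a Slurm host, submit as before:
# import pyslurm
# pyslurm.job().submit_batch_job(awesome_job_opts)
```

With this key, squeue reports cpus: 4 for the job as expected.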