[slurm-users] Constraint multiple counts not working


Jeffrey T Frey

Dec 16, 2020, 12:38:52 PM
to Slurm User Community List
On a cluster running Slurm 17.11.8 (cons_res) I can submit a job that requests e.g. 2 nodes with unique features on each:


$ sbatch --nodes=2 --ntasks-per-node=1 --constraint="[256GB*1&192GB*1]" …


The job is submitted and runs as expected:  on 1 node with feature "256GB" and 1 node with feature "192GB."  A similar job on a cluster running 20.11.1 (cons_res OR cons_tres, tested with both) fails to submit:


sbatch: error: Batch job submission failed: Requested node configuration is not available


I enabled debug5 output with NodeFeatures:


[2020-12-16T08:53:19.024] debug:  JobId=118 feature list: [512GB*1&768GB*1]
[2020-12-16T08:53:19.025] NODE_FEATURES: _log_feature_nodes: FEAT:512GB COUNT:1 PAREN:0 OP:XAND ACTIVE:r1n[00-47] AVAIL:r1n[00-47]
[2020-12-16T08:53:19.025] NODE_FEATURES: _log_feature_nodes: FEAT:768GB COUNT:1 PAREN:0 OP:END ACTIVE:r2l[00-31] AVAIL:r2l[00-31]
[2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: feature:512GB feature_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] work_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] tmp_bitmap:r1n[00-47] count:1
[2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: feature:768GB feature_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] work_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] tmp_bitmap:r2l[00-31] count:1
[2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: NODES:r1n[00-47],r2l[00-31],r2x[00-10] HAS_XOR:T status:No error
[2020-12-16T08:53:19.025] select/cons_tres: _job_test: SELECT_TYPE: test 0 pass: test_only
[2020-12-16T08:53:19.026] debug2: job_allocate: setting JobId=118_* to "BadConstraints" due to a flaw in the job request (Requested node configuration is not available)
[2020-12-16T08:53:19.026] _slurm_rpc_submit_batch_job: Requested node configuration is not available
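
For reference, the memory features here are ordinary node features in slurm.conf; a trimmed sketch (other node parameters elided) looks roughly like


NodeName=r1n[00-47] … Feature=512GB
NodeName=r2l[00-31] … Feature=768GB


so the 512GB and 768GB features sit on disjoint sets of nodes.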


My syntax agrees with the 20.11.1 documentation (online and man pages), so it seems correct, and it works fine in 17.11.8.  Any ideas?



::::::::::::::::::::::::::::::::::::::::::::::::::::::
 Jeffrey T. Frey, Ph.D.
 Systems Programmer V / Cluster Management
 Network & Systems Services / College of Engineering
 University of Delaware, Newark DE  19716
 Office: (302) 831-6034  Mobile: (302) 419-4976
::::::::::::::::::::::::::::::::::::::::::::::::::::::

Weijun Gao

Dec 16, 2020, 2:03:49 PM
to Slurm User Community List
Hi,

Say I have a Slurm node with 1 x GPU and 112 x CPU cores, and:

    1) there is a job running on the node using the GPU and 20 x CPU cores

    2) there is a job waiting in the queue asking for 1 x GPU and 20 x
CPU cores

Is it possible to a) let a new job asking for 0 x GPU and 20 x CPU cores
(safe for the queued GPU job) start immediately, and b) make a new job
asking for 0 x GPU and 100 x CPU cores (not safe for the queued GPU job)
wait in the queue? Or c) is it feasible to put the node into two Slurm
partitions, e.g. 56 CPU cores in a "cpu" partition and 56 CPU cores in a
"gpu" partition?

Thank you in advance for any suggestions / tips.

Best,

Weijun

===========
Weijun Gao
Computational Research Support Specialist
Department of Psychology, University of Toronto Scarborough
1265 Military Trail, Room SW416
Toronto, ON M1C 1M2
E-mail: weiju...@utoronto.ca


Renfro, Michael

Dec 16, 2020, 2:55:08 PM
to Slurm User Community List

We have overlapping partitions for GPU work and some kinds of non-GPU work (both large-memory and regular-memory jobs).

 

For 28-core nodes with 2 GPUs, we have:

 

PartitionName=gpu MaxCPUsPerNode=16 … Nodes=gpunode[001-004]

PartitionName=any-interactive MaxCPUsPerNode=12 … Nodes=node[001-040],gpunode[001-004]

PartitionName=bigmem MaxCPUsPerNode=12 … Nodes=gpunode[001-003]

PartitionName=hugemem MaxCPUsPerNode=12 … Nodes=gpunode004

 

Worst case, non-GPU jobs could reserve up to 24 of the 28 cores on a GPU node, but only for a limited time (our any-interactive partition has a 2-hour time limit). In practice, this has let us use a lot of otherwise idle CPU capacity on the GPU nodes for short test runs.
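
As a usage sketch (the core count and walltime are just examples within those limits), a short interactive non-GPU test job in this setup would be submitted along the lines of:

srun -p any-interactive -c 12 -t 2:00:00 --pty bash -i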

 

From: slurm-users <slurm-use...@lists.schedmd.com>
Date: Wednesday, December 16, 2020 at 1:04 PM
To: Slurm User Community List <slurm...@lists.schedmd.com>
Subject: [slurm-users] using resources effectively?


Weijun Gao

Dec 16, 2020, 7:27:39 PM
to Slurm User Community List, Renfro, Michael

Thank you, Michael!

I've tried the following example:

    NodeName=gpunode01 Gres=gpu:1 Sockets=2 CoresPerSocket=28 ThreadsPerCore=2 State=UNKNOWN RealMemory=380000
    PartitionName=gpu MaxCPUsPerNode=56 MaxMemPerNode=190000 Nodes=gpunode01 Default=NO MaxTime=1-0 State=UP
    PartitionName=cpu MaxCPUsPerNode=56 MaxMemPerNode=190000 Nodes=gpunode01 Default=YES MaxTime=1-0 State=UP

1) When the system is idle, the following "gpu" job will start immediately ("gpu" partition, 1 GPU, 20 CPUs):

    srun -p gpu --gpus=1 -c 20 --pty bash -i

2) If I run the same command again, it will be queued ... this is normal ("gpu" partition, 1 GPU, 20 CPUs):

    srun -p gpu --gpus=1 -c 20 --pty bash -i

3) Then the following "cpu" job will be queued too ("cpu" partition, 20 x CPUs):

    srun -p cpu --gpus=0 -c 20 --pty bash -i

Is there a way to let the "cpu" job run instead of waiting?

Any suggestions?

Thanks again,

Weijun


    

Loris Bennett

Dec 17, 2020, 2:09:10 AM12/17/20
to Slurm User Community List
Hi Jeffrey,

Jeffrey T Frey <fr...@udel.edu> writes:

> On a cluster running Slurm 17.11.8 (cons_res) I can submit a job that requests e.g. 2 nodes with unique features on each:
>
> $ sbatch --nodes=2 --ntasks-per-node=1 --constraint="[256GB*1&192GB*1]" …
>
> The job is submitted and runs as expected: on 1 node with feature
> "256GB" and 1 node with feature "192GB." A similar job on a cluster
> running 20.11.1 (cons_res OR cons_tres, tested with both) fails to
> submit:
>
> sbatch: error: Batch job submission failed: Requested node configuration is not available
>
> I enabled debug5 output with NodeFeatures:
>
> [2020-12-16T08:53:19.024] debug: JobId=118 feature list: [512GB*1&768GB*1]
> [2020-12-16T08:53:19.025] NODE_FEATURES: _log_feature_nodes: FEAT:512GB COUNT:1 PAREN:0 OP:XAND ACTIVE:r1n[00-47] AVAIL:r1n[00-47]
> [2020-12-16T08:53:19.025] NODE_FEATURES: _log_feature_nodes: FEAT:768GB COUNT:1 PAREN:0 OP:END ACTIVE:r2l[00-31] AVAIL:r2l[00-31]
> [2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: feature:512GB feature_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] work_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] tmp_bitmap:r1n[00-47] count:1
> [2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: feature:768GB feature_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] work_bitmap:r1n[00-47],r2l[00-31],r2x[00-10] tmp_bitmap:r2l[00-31] count:1
> [2020-12-16T08:53:19.025] NODE_FEATURES: valid_feature_counts: NODES:r1n[00-47],r2l[00-31],r2x[00-10] HAS_XOR:T status:No error
> [2020-12-16T08:53:19.025] select/cons_tres: _job_test: SELECT_TYPE: test 0 pass: test_only
> [2020-12-16T08:53:19.026] debug2: job_allocate: setting JobId=118_* to "BadConstraints" due to a flaw in the job request (Requested node configuration is not available)
> [2020-12-16T08:53:19.026] _slurm_rpc_submit_batch_job: Requested node configuration is not available
>
> My syntax agrees with the 20.11.1 documentation (online and man pages) so it seems correct — and it works fine in 17.11.8. Any ideas?

We don't use features, so I am only guessing that you have defined the
features incorrectly. Do you have something like

Feature=512GB,768GB

?

However, you might want to consider treating memory as a consumable
resource and letting Slurm find suitable nodes, rather than defining the
maximum memory as a feature.
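
A minimal sketch of that approach (assuming cons_tres; the node names and
memory values below are just placeholders):

SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory
NodeName=mem512g[01-48] … RealMemory=512000
NodeName=mem768g[01-32] … RealMemory=768000

Jobs would then request memory directly, e.g. "sbatch --mem=180G …",
instead of a feature constraint.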

Regards

Loris

--
Dr. Loris Bennett (Hr./Mr.)
ZEDAT, Freie Universität Berlin
Email: loris....@fu-berlin.de
