[slurm-users] Question about having 2 partitions that are mutually exclusive, but have unexpected interactions


David Henkemeyer

May 12, 2022, 10:35:03 AM5/12/22
to Slurm User Community List
Question for the braintrust:

I have 3 partitions:
  • Partition A_highpri: 80 nodes
  • Partition A_lowpri: same 80 nodes
  • Partition B_lowpri: 10 different nodes

There is no overlap between A and B partitions.
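
For concreteness, a minimal slurm.conf sketch of that layout might look like this (node names, CPU counts, and PriorityJobFactor values are hypothetical, just to show the shape of the config):

    # The two A partitions share the same 80 nodes; B_lowpri sits on 10
    # separate nodes. PriorityJobFactor values are made up to illustrate
    # the high- vs. low-priority split.
    NodeName=nodeA[01-80] CPUs=32 State=UNKNOWN
    NodeName=nodeB[01-10] CPUs=32 State=UNKNOWN
    PartitionName=A_highpri Nodes=nodeA[01-80] PriorityJobFactor=100 State=UP
    PartitionName=A_lowpri  Nodes=nodeA[01-80] PriorityJobFactor=10  State=UP
    PartitionName=B_lowpri  Nodes=nodeB[01-10] PriorityJobFactor=10  State=UP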

Here is what I'm observing.  If I fill the queue with ~20-30k jobs for partition A_highpri and several thousand for A_lowpri, then a bit later submit jobs to partition B_lowpri, the partition B jobs are queued rather than started right away, with a pending reason of "Priority".  That doesn't seem right to me.  Yes, there are higher priority jobs pending in the queue (the jobs bound for A_highpri), but there aren't any higher priority jobs pending for the same partition as the partition B jobs, so in theory the partition B jobs should not be held up.  Eventually the scheduler gets around to scheduling them, but it seems to take a while for the scheduler (which is probably pretty busy dealing with job starts, job stops, etc.) to figure this out.
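
For anyone trying to reproduce this, a quick way to see the pending reason is squeue's format string (%i, %p, and %r are the standard specifiers for job id, priority, and reason):

    # list pending jobs in B_lowpri together with their priority and pending reason
    squeue -p B_lowpri -t PD -o "%i %p %r"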

If I submit fewer jobs to the A partitions (~3k), then the scheduler starts the partition B jobs much faster, as expected.  As I increase beyond 3k, the partition B jobs get held up longer and longer.

I can raise the priority on partition B, and that does solve the problem, but I don't want those jobs to impact the partition A_lowpri jobs.  In fact, I don't want any cross-partition influence.

I'm hoping there is a Slurm parameter I can tweak to make Slurm recognize that these partition B jobs shouldn't ever have a pending reason of "Priority".  Or to treat these as 2 separate queues.  Or something like that.  Spinning up a 2nd Slurm controller is not ideal for us (unless there is a lightweight way to do it).

Thanks
David


Brian Andrus

May 12, 2022, 12:11:48 PM5/12/22
to slurm...@lists.schedmd.com

I suspect you have too low a setting for "MaxJobCount":

MaxJobCount
              The maximum number of jobs SLURM can have in its active database
              at one time. Set the values of MaxJobCount and MinJobAge to
              insure the slurmctld daemon does not exhaust its memory or
              other resources. Once this limit is reached, requests to submit
              additional jobs will fail. The default value is 5000 jobs. This
              value may not be reset via "scontrol reconfig". It only takes
              effect upon restart of the slurmctld daemon. May not exceed
              65533.


So if you already have (by default) 5000 jobs being considered, the remaining aren't even looked at.
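
For example (values purely illustrative, not a recommendation), in slurm.conf:

    # Raise the active-job limit. This only takes effect when slurmctld
    # is restarted; it cannot be changed via "scontrol reconfig".
    MaxJobCount=50000
    MinJobAge=300

(The 65533 cap in that man page excerpt is from an older release; as the reply below shows, newer versions accept much larger values.)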

Brian Andrus

David Henkemeyer

May 12, 2022, 2:32:21 PM5/12/22
to Slurm User Community List
Thanks Brian.  We have it set to 100k, which has really improved our performance on the A partitions.  We queue up 50k+ jobs nightly and see really good node utilization, so jobs deep in the queue are being considered.

It could be that we have the scheduler so busy doing certain things that it takes a while for it to figure out that the B jobs, despite being lower priority, can run on partition B, since nothing of higher priority is targeted at that partition.
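
One way to check how busy the main scheduler is (assuming admin rights, since resetting the counters is privileged) would be sdiag, which reports scheduler cycle times and depths:

    # reset the statistics, let the scheduler run for a while, then look at
    # the "Main schedule statistics" section of the output
    sdiag --reset
    sleep 300
    sdiag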

My wish here would be to be able to tell the controller to spawn a separate thread, and have one thread focus only on the B partition, while the other focuses on the rest.  Or something similar.  

David

Michael Robbert

May 12, 2022, 3:10:44 PM5/12/22
to Slurm User Community List

Have you looked at the High Throughput Computing Administration Guide?  https://slurm.schedmd.com/high_throughput.html

In particular, the answer to this problem may be to look at the SchedulerParameters.  I believe the scheduler defaults are very conservative and it will stop looking for jobs to run pretty quickly.
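
For example (these option names are real SchedulerParameters, but the values are only illustrative and would need tuning for your site), default_queue_depth controls how many jobs the main scheduler examines per cycle, and partition_job_depth caps how many jobs it tests per partition, which is aimed at exactly this kind of one-deep-partition-starves-another situation:

    # slurm.conf (illustrative values): look deeper into the queue overall,
    # but limit how many jobs are tested per partition so a deep A_highpri
    # queue can't crowd B_lowpri out of each scheduling cycle
    SchedulerParameters=default_queue_depth=1000,partition_job_depth=500,bf_continue,bf_max_job_test=5000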

 

Mike Robbert

Cyberinfrastructure Specialist, Cyberinfrastructure and Advanced Research Computing

Information and Technology Solutions (ITS)

303-273-3786 | mrob...@mines.edu


Our values: Trust | Integrity | Respect | Responsibility

 

From: slurm-users <slurm-use...@lists.schedmd.com> on behalf of David Henkemeyer <david.he...@gmail.com>
Date: Thursday, May 12, 2022 at 12:34
To: Slurm User Community List <slurm...@lists.schedmd.com>
Subject: [External] Re: [slurm-users] Question about having 2 partitions that are mutually exclusive, but have unexpected interactions

