Re: [slurm-users] [EXTERNAL] slurm-users Digest, Vol 55, Issue 5


Jim Kavitsky

May 3, 2022, 8:57:37 PM
to slurm...@lists.schedmd.com

Ahhhh, that was it. The failure state was persisting after the problem was fixed. This loop put all my nodes back into idle state.

for H in {01..08}; do scontrol update NodeName=sjc01enadsapp$H State=UNDRAIN; done

 

Thanks, David.

 

-jimk

 

From: slurm-users <slurm-use...@lists.schedmd.com> on behalf of slurm-use...@lists.schedmd.com <slurm-use...@lists.schedmd.com>
Date: Tuesday, May 3, 2022 at 5:20 PM
To: slurm...@lists.schedmd.com <slurm...@lists.schedmd.com>
Subject: [EXTERNAL] slurm-users Digest, Vol 55, Issue 5


Send slurm-users mailing list submissions to
slurm...@lists.schedmd.com

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.schedmd.com/cgi-bin/mailman/listinfo/slurm-users
or, via email, send a message with subject or body 'help' to
slurm-use...@lists.schedmd.com

You can reach the person managing the list at
slurm-us...@lists.schedmd.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of slurm-users digest..."


Today's Topics:

1. Re: gres/gpu count lower than reported (David Henkemeyer)


----------------------------------------------------------------------

Message: 1
Date: Tue, 3 May 2022 14:05:45 -0700
From: David Henkemeyer <david.he...@gmail.com>
To: Slurm User Community List <slurm...@lists.schedmd.com>
Subject: Re: [slurm-users] gres/gpu count lower than reported
Message-ID:
<CABjsmAH+z=xA9_QxvRyg-1uSj_YPWz...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I have found that the "reason" field doesn't get updated after you correct
the issue. For me, its only when I move the node back to the idle state,
that the reason field is then reset. So, assuming /dev/nvidia[0-3] is
correct (I've never seen otherwise with nvidia GPUs), then try taking them
back into the idle state. Also, keep an eye on the slurmctld and slurmd
logs. They usually are quite helpful to highlight what the issue is.
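For example, a minimal sketch of both steps (the log paths below are the
common packaged defaults and may differ depending on SlurmctldLogFile and
SlurmdLogFile on your cluster):

scontrol update NodeName=sjc01enadsapp01 State=UNDRAIN   # or State=RESUME; repeat per node
tail -f /var/log/slurmctld.log    # on the controller
tail -f /var/log/slurmd.log       # on the affected worker node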

David

On Tue, May 3, 2022 at 11:50 AM Jim Kavitsky <JimKa...@lucidmotors.com>
wrote:

> Hello Fellow Slurm Admins,
>
>
>
> I have a new Slurm installation that was working and running basic test
> jobs until I added gpu support. My worker nodes are now all in drain state,
> with gres/gpu count reported lower than configured (0 < 4)
>
>
>
> This is in spite of the fact that nvidia-smi reports all four A100s as
> active on each node. I have spent a good chunk of a week googling around
> for the solution to this, and trying variants of the gpu config
> lines/restarting daemons without any luck.
>
>
>
> The relevant lines from my current config files are below. The head node
> and all workers have the same gres.conf and slurm.conf files. Can anyone
> suggest anything else I should be looking at or adding? I'm guessing that
> this is a problem that many have faced, and any guidance would be greatly
> appreciated.
>
>
>
> root@sjc01enadsapp00:/etc/slurm-llnl# grep gpu slurm.conf
>
> GresTypes=gpu
>
> NodeName=sjc01enadsapp0[1-8] RealMemory=2063731 Sockets=2 CoresPerSocket=16 ThreadsPerCore=2 Gres=gpu:4 State=UNKNOWN
>
>
>
> root@sjc01enadsapp00:/etc/slurm-llnl# cat gres.conf
>
> NodeName=sjc01enadsapp0[1-8] Name=gpu File=/dev/nvidia[0-3]
>
>
>
>
>
>
>
> root@sjc01enadsapp00:~# sinfo -N -o "%.20N %.15C %.10t %.10m %.15P %.15G %.75E"
>
>        NODELIST   CPUS(A/I/O/T)      STATE     MEMORY       PARTITION    GRES   REASON
> sjc01enadsapp01       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp02       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp03       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp04       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp05       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp06       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp07       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
> sjc01enadsapp08       0/0/64/64      drain    2063731        Primary*   gpu:4   gres/gpu count reported lower than configured (0 < 4)
>
>
>
>
>
> root@sjc01enadsapp07:~# nvidia-smi
> Tue May  3 18:41:34 2022
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 470.103.01   Driver Version: 470.103.01   CUDA Version: 11.4     |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |                               |                      |               MIG M. |
> |===============================+======================+======================|
> |   0  NVIDIA A100-PCI...   On  | 00000000:17:00.0 Off |                    0 |
> | N/A   42C    P0    49W / 250W |      4MiB / 40536MiB |      0%      Default |
> |                               |                      |             Disabled |
> +-------------------------------+----------------------+----------------------+
> |   1  NVIDIA A100-PCI...   On  | 00000000:65:00.0 Off |                    0 |
> | N/A   41C    P0    48W / 250W |      4MiB / 40536MiB |      0%      Default |
> |                               |                      |             Disabled |
> +-------------------------------+----------------------+----------------------+
> |   2  NVIDIA A100-PCI...   On  | 00000000:CA:00.0 Off |                    0 |
> | N/A   35C    P0    44W / 250W |      4MiB / 40536MiB |      0%      Default |
> |                               |                      |             Disabled |
> +-------------------------------+----------------------+----------------------+
> |   3  NVIDIA A100-PCI...   On  | 00000000:E3:00.0 Off |                    0 |
> | N/A   38C    P0    45W / 250W |      4MiB / 40536MiB |      0%      Default |
> |                               |                      |             Disabled |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                                  |
> |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
> |        ID   ID                                                   Usage      |
> |=============================================================================|
> |    0   N/A  N/A      2179      G   /usr/lib/xorg/Xorg                  4MiB |
> |    1   N/A  N/A      2179      G   /usr/lib/xorg/Xorg                  4MiB |
> |    2   N/A  N/A      2179      G   /usr/lib/xorg/Xorg                  4MiB |
> |    3   N/A  N/A      2179      G   /usr/lib/xorg/Xorg                  4MiB |
> +-----------------------------------------------------------------------------+
>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.schedmd.com/pipermail/slurm-users/attachments/20220503/935f4bf0/attachment.htm>

End of slurm-users Digest, Vol 55, Issue 5
******************************************




Michael Robbert

May 4, 2022, 10:50:48 AM
to Slurm User Community List

Jim,

I’m glad you got your problem solved. Here is an additional tip that will make it easier to fix in the future: you don’t need to put scontrol into a loop, because the NodeName parameter will take a node range expression, so you can use NodeName=sjc01enadsapp[01-08]. A sysadmin in training saw me do that the other day and it blew his mind, so I thought I’d share it with you.
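For example, instead of the loop, a single command like this should clear all eight nodes at once:

scontrol update NodeName=sjc01enadsapp[01-08] State=UNDRAIN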

 

Mike Robbert

Cyberinfrastructure Specialist, Cyberinfrastructure and Advanced Research Computing

Information and Technology Solutions (ITS)

303-273-3786
mrob...@mines.edu


Our values: Trust | Integrity | Respect | Responsibility

 

From: slurm-users <slurm-use...@lists.schedmd.com> on behalf of Jim Kavitsky <JimKa...@lucidmotors.com>
Date: Tuesday, May 3, 2022 at 18:59
To: slurm...@lists.schedmd.com <slurm...@lists.schedmd.com>
Subject: Re: [slurm-users] [EXTERNAL] slurm-users Digest, Vol 55, Issue 5

