[slurm-users] gpu utilization of a reserved node


Purvesh Parmar

Apr 30, 2022, 6:28:08 AM
to slurm...@lists.schedmd.com
Hello,

We have a node given to a group that has 2 GPUs in dedicated mode, by setting a reservation on it for 6 months. We want to find out the weekly GPU-hours utilization of that particular reserved node. The node is not in a separate partition.
The command below does not show the allocated GPU hours, and also does not report over a one-week duration.
sreport reservation utilization name=rizwan_res start=2022-03-28T10:00:00 end=2022-04-03T10:00:00

Please help.

Regards,
Purvesh


Greg Wickham

May 7, 2022, 5:43:43 AM
to Slurm User Community List

Hi Purvesh,

With some caveats, you can do:

$ sacct -N <nodename> -X -S <start date> -E <end date> -P --format=jobid,alloctres

And then post process the results with a scripting language.
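As a rough illustration of that post-processing step, here is a minimal Python sketch that sums GPU-hours from sacct's pipe-delimited output. It assumes you also request the elapsedraw field (i.e. `--format=jobid,alloctres,elapsedraw --noheader`) so each job's runtime is available; the sample input lines are hypothetical.

```python
import re

def gpu_hours(sacct_lines):
    """Sum GPU-hours from sacct -P output with fields JobID|AllocTRES|ElapsedRaw.

    AllocTRES looks like "cpu=8,gres/gpu=2,mem=32G,node=1"; ElapsedRaw is
    the job's elapsed time in seconds.
    """
    total = 0.0
    for line in sacct_lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue  # skip malformed lines
        _jobid, tres, elapsed = parts
        m = re.search(r"gres/gpu=(\d+)", tres)
        if m and elapsed.isdigit():
            total += int(m.group(1)) * int(elapsed) / 3600.0
    return total

# Hypothetical sample: job 1234 used 2 GPUs for 7200 s; job 1235 used none.
sample = [
    "1234|billing=8,cpu=8,gres/gpu=2,mem=32G,node=1|7200",
    "1235|cpu=4,mem=16G,node=1|3600",
]
print(gpu_hours(sample))  # 2 GPUs x 2 hours = 4.0 GPU-hours
```

Feeding it a week's worth of sacct output (one -S/-E window per week) gives the weekly figure you are after.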

The caveats? The -X above returns only the job allocation record, which in your case appears to be everything you need. However, for a job or step that spans multiple nodes, Slurm does not record in the database which specific resources were allocated on each node.

"scontrol -d show job <jobid>" does display the node-specific resource allocations, but this information is discarded during summarisation to slurmdbd.

   -Greg
