Hello,
I'm running Slurm 25.05 on a heterogeneous cluster (several kinds of GPUs
in the same node) with AutoDetect=nvml and shared mode. When submitting
a job with `#SBATCH --gres=gpu:1`, CUDA_VISIBLE_DEVICES is correctly set
to a single, valid, free GPU index, but `scontrol show job <jobid>` does
not report any detail about the GPU allocation.
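For reference, here is a minimal job script along the lines of what I submit
(nothing site-specific beyond the GRES request):

#!/bin/bash
#SBATCH --gres=gpu:1

# Inside the job, Slurm sets the index of the allocated GPU:
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"

# From the outside, the job record gives no per-GPU detail:
#   scontrol show job <jobid>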
How can I retrieve the GPU indices assigned to running jobs (i.e. the ones
reflected in CUDA_VISIBLE_DEVICES) in shared mode? Is there a Slurm command
or configuration option that enables tracking of these indices on a
heterogeneous cluster?
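For example, I tried a small wrapper along these lines (just a sketch; it
assumes the detailed job view would expose an IDX field in its GRES lines,
which does not seem to be the case here with shared GPUs):

for jobid in $(squeue -h -t R -o "%i"); do
    # -d/--details normally adds per-node lines like "GRES=gpu:1(IDX:0)"
    scontrol show job -d "$jobid" | grep -q "IDX" \
        && echo "job $jobid: GPU index reported" \
        || echo "job $jobid: no GPU index reported"
done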
The goal is to automate the choice of a free and appropriately sized GPU
according to each job's needs, so that the bigger GPUs are kept for the
bigger jobs at submission time.
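To make the intent concrete, here is a sketch of the submission-side logic
I'm aiming for (the GPU type names t4/a100 and the job.sh script are
hypothetical examples; it still depends on knowing current GPU usage per
type, which brings me back to the question above):

#!/bin/bash
# Pick a GRES request based on how much GPU memory the job needs.
required_mem_gb=$1                  # e.g. 20
if [ "$required_mem_gb" -le 16 ]; then
    gres="gpu:t4:1"                 # hypothetical small-GPU type
else
    gres="gpu:a100:1"               # hypothetical large-GPU type
fi
sbatch --gres="$gres" job.sh        # job.sh is a placeholder script name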
Thanks