So the node definition is separate from the partition definition.
You would need to define all the GPUs as part of the node.
Partitions do not have physical characteristics, but they do have
QOS capabilities that you may be able to use. You could also use a
job_submit lua script to reject jobs that request resources you do
not want used in a particular queue.
Both would take some research to find the best approach, but I think those are the two options that may do what you are looking for.
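To make the job_submit idea concrete, below is a rough, untested sketch of a job_submit.lua that rejects GPU requests in a hypothetical partition named "cpu_only". The partition name is a placeholder, and the field carrying the GRES request (job_desc.gres on older releases, job_desc.tres_per_node on newer ones) depends on your Slurm version, so check the job_submit plugin documentation for your release.

-- job_submit.lua sketch: reject GPU requests in a hypothetical "cpu_only" partition
function slurm_job_submit(job_desc, part_list, submit_uid)
   -- Older Slurm exposes the request in job_desc.gres, newer releases in job_desc.tres_per_node
   local gres = job_desc.gres or job_desc.tres_per_node
   if job_desc.partition == "cpu_only" and gres ~= nil and string.find(gres, "gpu") then
      slurm.log_user("GPU requests are not allowed in the cpu_only partition")
      return slurm.ERROR
   end
   return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, submit_uid)
   -- No changes on job modification in this sketch
   return slurm.SUCCESS
end

return slurm.SUCCESS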
Brian Andrus
I think when you define the nodes in your slurm.conf, you can specify the different GPU types you have and the count in each node. Then when the user submits the job, they can specify the number and type they want, and that would all work in one partition. I have never done it myself because each of our nodes has only one type in it.
For example, we have V100 and P100 GPUs and decided on the type names of volta and tesla:
GresTypes=gpu
NodeName=compute-0-[36-43] Gres=gpu:tesla:2 Feature=gen9
NodeName=compute-4-[0-3] Gres=gpu:volta:8 Feature=gen9
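A single partition spanning both node sets could then look something like this (the partition name and options here are just placeholders):

PartitionName=gpu Nodes=compute-0-[36-43],compute-4-[0-3] MaxTime=INFINITE State=UP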
The user then just uses the #SBATCH directive --gpus=tesla:1 to request one P100 GPU.
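For example, a minimal batch script might look like the following on a reasonably recent Slurm release (the partition name, time limit, and application are placeholders):

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gpus=tesla:1
#SBATCH --time=00:30:00
srun ./my_gpu_app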
This is an example from https://slurm.schedmd.com/slurm.conf.html
(e.g. "Gres=gpu:tesla:1,gpu:kepler:1,bandwidth:lustre:no_consume:4G")