I have 3 masters, and I'm currently having problems with the Terraform binary when scaling my workers back up from 1 to 3. As a workaround, I'm using the masters' taint to add a toleration to the worker pods.
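Roughly like this (a sketch, assuming the default `node-role.kubernetes.io/master:NoSchedule` taint on the masters):

```yaml
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"    # matches the taint by key alone, so no value is needed
  effect: "NoSchedule"
```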
1) Is this correct? To my understanding I don't need `Equal`, a `value`, or any other key for this to work.
2) Indeed, after replacing the deployment I can see that my pod has been evicted and is now running on the master node.
3) How would you write a toleration / affinity so that the pod is evicted if memory or CPU becomes scarce on the worker node?
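The closest mechanism I've found is the kubelet's node-pressure eviction, which, as far as I understand, evicts for memory (not CPU, since CPU is compressible) and picks pods that exceed their requests first. So maybe resource requests are the lever here rather than a toleration? A hypothetical sketch (name, image, and sizes are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pressure-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        memory: "128Mi"    # under memory pressure, pods using more than
        cpu: "250m"        # their request are among the first evicted
      limits:
        memory: "256Mi"
        cpu: "500m"
```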
As it stands now, the master will always be disturbed by the tolerating pod. If I gave all pods a toleration for NoSchedule on the master key, would my worker end up empty? That can't be the correct scenario, can it? I'd appreciate some insight into how this works under the hood to achieve balanced master utilisation.
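The only lever I can think of for keeping the masters quiet while workers have capacity is to combine the toleration above with a preferred anti-affinity on the master label. A hypothetical, untested sketch:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: node-role.kubernetes.io/master   # prefer nodes without the master label
          operator: DoesNotExist
```

My mental model is that the toleration only *permits* scheduling onto the master, while this preference makes the scheduler favour workers whenever they are available. Is that the right way to think about it?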
Thanks as usual.
-steve