--
You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-dev+unsubscribe@googlegroups.com.
To post to this group, send email to kubernetes-dev@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-dev/e65f7fb6-397a-4894-80b7-d216bf582db2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Thanks David. The documentation at https://kubernetes.io/docs/user-guide/node-selection/ says the following:

"The rules are of the form 'this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y.' Y is expressed as a LabelSelector with an associated list of namespaces (or 'all' namespaces)."

So I interpreted the pod anti-affinity rules as "this pod should not run in X (failure zone) if that zone is already running one or more pods that meet rule Y (the label selector for the pods)", which is why I thought it doesn't allow running more than one pod of the same type per node. I guess the way you describe it above is more generic than the description in that documentation, which should be updated :-).
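For concreteness, here is a minimal sketch of the kind of spec that documentation describes (the `app: my-app` label, pod name, and image are placeholders, not from the docs): a hard pod anti-affinity rule with topologyKey set to the zone label, so the scheduler refuses to place the pod in a zone that already runs a matching pod.

```yaml
# Hypothetical pod spec fragment: hard anti-affinity across failure zones.
# With topologyKey set to the zone label, this pod will not be scheduled
# into a zone that already runs a pod matching the label selector.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod          # placeholder name
  labels:
    app: my-app             # placeholder label
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - name: my-app
    image: nginx            # placeholder image
```

With `topologyKey: kubernetes.io/hostname` instead, the same rule would spread matching pods across nodes rather than zones.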
Yes, I will definitely make a PR once I understand it well enough.
More questions :-)
- Thanks for the PR. That's awesome that we have a PR that will allow specifying a number with requiredDuringSchedulingIgnoredDuringExecution anti-affinity, rather than it always being 1. One thing not clear to me: are you saying that the PR is specifically for requiredDuringSchedulingIgnoredDuringExecution and not for preferredDuringSchedulingIgnoredDuringExecution? Why can't we specify a number per zone for pod anti-affinity with preferredDuringSchedulingIgnoredDuringExecution?
- Is the default out-of-the-box scheduling using the failure-domain.beta.kubernetes.io labels supposed to work as expected in 1.5.3 as well?
In my test on a GCE cluster with 5 nodes and one master, I made the master schedulable as well.
Then I added 3 fault zones with two nodes each. When I scale a Deployment from 6 to 9, I see that out of the 3 new replicas, 2 go to the same zone.
When I do the same test without the master, but only with minions, each set of 3 replicas goes to a different zone.
Is there anything special about the master that changes the logic? I am looking at the code in selector_spreading.go.
-Mayank