--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-scheduling" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-scheduling+unsub...@googlegroups.com.
To post to this group, send email to kubernetes-sig-scheduling@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-scheduling/CAHSsm9w5dm%3DmAb6rdRtoyWicbNvr17P8u1Yh3WQonzB94VsRdA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
We discussed this at the sig-scheduling meeting and decided this made sense as a priority function. Vish pointed out that it does not help when pods do not set limits, but it still seems useful.

It just occurred to me that we should probably only use this criterion to break ties among nodes with the same LeastRequestedPriority score. In other words, spreading should always be based primarily on request, and only secondarily on limit. Do people agree?
And people will want to disable this new priority function when they use MostRequestedPriority (best-fit) rather than LeastRequestedPriority.
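The tie-breaking idea above can be sketched as a lexicographic ranking: compare nodes on a request-based score first, and consult a limit-based score only when the request scores are equal. This is a minimal illustrative sketch, not the scheduler's actual API; `nodeScore`, `rankNodes`, and the node names are all hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// nodeScore is a hypothetical pair of per-node scores: a primary
// request-based score (as LeastRequestedPriority would produce) and a
// secondary limit-based score used only to break ties.
type nodeScore struct {
	name         string
	requestScore int // primary: higher = less requested capacity in use
	limitScore   int // secondary: higher = better fit for the pod's limits
}

// rankNodes orders nodes best-first: primarily by requestScore, and by
// limitScore only among nodes whose requestScore ties.
func rankNodes(nodes []nodeScore) {
	sort.SliceStable(nodes, func(i, j int) bool {
		if nodes[i].requestScore != nodes[j].requestScore {
			return nodes[i].requestScore > nodes[j].requestScore
		}
		return nodes[i].limitScore > nodes[j].limitScore
	})
}

func main() {
	nodes := []nodeScore{
		{"node-a", 7, 3},
		{"node-b", 7, 9}, // ties with node-a on request, wins on limit
		{"node-c", 9, 0}, // best request score wins outright
	}
	rankNodes(nodes)
	for _, n := range nodes {
		fmt.Println(n.name)
	}
}
```

Note that node-c ranks first despite the worst limit score: spreading stays primarily request-based, and the limit criterion never overrides it.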
On Thu, Oct 12, 2017 at 11:02 AM, Avesh Agarwal <avesh...@gmail.com> wrote:
Hello,

I would like to start a discussion about taking resource limits into account in the kube-scheduler. One example use case:

Assume a cluster of N nodes, where some subset N1 have 4 cores and another subset N2 have 16 cores, and a pod with a request of 1 core and a limit of 8 cores is being scheduled. In the current implementation, the scheduler does not distinguish among these N nodes for this pod (all other factors being equal), even though it seems clearly better to select a node from N2.

A similar example: N nodes where subset N1 have 4 cores and subset N2 have 8 cores, and a pod with a request of 1 core and a limit of 16 cores is being scheduled. Even though no node could ever satisfy the pod's limit, it still seems better to select a node from N2, though this case is more debatable.

The point is that choosing a node that is a better fit with respect to the pod's resource limits (i.e., has a higher chance of being able to fulfill them) might be preferable. So I would like to discuss whether a priority function (or some other mechanism) that considers resource limits would help.

Thanks
Avesh
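One way such a priority function could work, sketched below under stated assumptions: award a node one point per resource dimension (CPU, memory) whose allocatable capacity covers the pod's limit, scaled to the scheduler's 0-10 priority range. The function name `limitsFitScore`, the `resources` type, and the scoring formula are all illustrative, not an existing scheduler API.

```go
package main

import "fmt"

// resources is a hypothetical, simplified view of a pod's limits or a
// node's allocatable capacity.
type resources struct {
	milliCPU int64 // CPU in millicores
	memory   int64 // memory in bytes
}

// limitsFitScore is a hypothetical priority function: it awards one point
// for each resource dimension whose pod limit fits within the node's
// allocatable capacity, then scales the result to the 0-10 range used by
// scheduler priority functions.
func limitsFitScore(podLimits, nodeAllocatable resources) int {
	fits := 0
	const total = 2 // dimensions considered: CPU and memory
	if podLimits.milliCPU <= nodeAllocatable.milliCPU {
		fits++
	}
	if podLimits.memory <= nodeAllocatable.memory {
		fits++
	}
	return fits * 10 / total
}

func main() {
	// Pod from the first example: request 1 core, limit 8 cores (4 GiB memory
	// assumed for illustration).
	pod := resources{milliCPU: 8000, memory: 4 << 30}
	small := resources{milliCPU: 4000, memory: 16 << 30}  // 4-core node (N1)
	large := resources{milliCPU: 16000, memory: 16 << 30} // 16-core node (N2)
	fmt.Println(limitsFitScore(pod, small), limitsFitScore(pod, large))
}
```

With this scoring, the 16-core nodes outrank the 4-core ones for the 8-core-limit pod, which matches the intuition in the first example; in the second example (limit 16 cores), no node would earn the CPU point, so the function alone would not distinguish N1 from N2 on CPU.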