Hello Abdullah,

Thanks for your response. Our plan is to support multi-tenancy and have Kubernetes handle diverse workloads, rather than running one cluster manager for stateless workloads and a separate one for batch workloads. We are currently reviewing our options for tackling this, and our latest POCs suggest having multiple schedulers handle the different workload types. Your suggestion of having different subsets of nodes managed by different schedulers sounds promising, but I was wondering how else we could handle resource conflicts if we don't implement that kind of node segregation.
--
Best,
Goutham
On Sunday, 21 June 2020 20:18:36 UTC-5, Abdullah Gharaibeh wrote:

Hi, there is no built-in mechanism to prevent such race conditions when more than one scheduler process is running in the cluster. The kubelet, however, will not admit a pod unless the node has enough resources to run it.

When running multiple scheduler processes in a cluster, the general recommendation is to have each scheduler manage a different subset of the nodes; you can do that using taints and tolerations.

Note that we introduced scheduling profiles in 1.18, which allow a single default-scheduler process (or a custom one running your own framework plugins) to offer different configurations chosen via Pod.Spec.SchedulerName. The different profiles are run by a single process and share the same in-memory cluster state, so you will not be faced with race conditions. However, I am not sure whether the Spark support you are looking for can be implemented as framework plugins.

On Sun, Jun 21, 2020 at 7:43 PM Goutham Reddy Kotapalle <goutam...@gmail.com> wrote:

Hello Everyone,
I am currently conducting research into adding support for Spark on Kubernetes in my production-ready k8s cluster, and I came across projects such as kube-batch and Volcano. I have a couple of questions as part of this: could someone please help me understand how race conditions are handled when multiple schedulers are running and both are competing for resources while trying to schedule their respective incoming pods? Does Kubernetes have a mechanism in place to handle such scenarios, or do we need to deploy our own conflict-resolution strategy?

Any advice is much appreciated. Thanks!
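A rough illustration of the node-segregation approach Abdullah describes (this sketch is not from the thread; the node name, taint key/value, label, and scheduler name are hypothetical): taint the nodes you want reserved for the second scheduler, then have that scheduler's pods tolerate the taint and name the scheduler explicitly.

# Hypothetical sketch: reserve a subset of nodes for a batch scheduler.
# Taint the nodes so pods placed by the default scheduler stay off them:
#   kubectl taint nodes batch-node-1 scheduler=batch:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: spark-executor-example
spec:
  schedulerName: batch-scheduler      # picked up by the second scheduler process
  tolerations:
  - key: "scheduler"                  # tolerate the taint on the batch node pool
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
  nodeSelector:
    node-pool: batch                  # assumed label that keeps batch pods on the tainted nodes
  containers:
  - name: executor
    image: example.com/spark-executor:latest

With this split, the two schedulers never consider the same nodes, so they cannot race each other for the same capacity.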
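And a minimal sketch of the scheduling-profiles alternative mentioned for 1.18, assuming the v1alpha2 component config: a single kube-scheduler process serves both profiles, and pods choose one via spec.schedulerName. The profile names and the disabled score plugin are illustrative assumptions, not taken from the thread.

apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler    # regular stateless workloads keep the defaults
- schedulerName: batch-scheduler      # batch pods set spec.schedulerName: batch-scheduler
  plugins:
    score:
      disabled:
      - name: NodeResourcesBalancedAllocation   # example tweak for batch-style placement

Because both profiles run inside one process against the same in-memory cluster state, the conflicting placement decisions of the multi-process setup do not arise.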