--
You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-dev+unsubscribe@googlegroups.com.
To post to this group, send email to kubernetes-dev@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-dev/2e9a9bc4-cd9a-4b2f-bc22-d827d9814095%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
What, if anything, would cause a pod to get rescheduled on another node? I can think of at least three potential candidates:
* container repeatedly exits with non-zero exit code
* container repeatedly fails readiness health check
* container repeatedly fails liveness health check
I could have sworn Kubernetes rescheduled on the first condition, but I have a pod with over 1,000 restarts on the same node, so that's clearly not the case. I also could have sworn I read about this somewhere, but I can't find it in the documentation anymore. I should mention that I'm using Deployments to schedule my pods.
In my case, my pod binds to a port in the unreserved range. On one VM I was unlucky enough to have another process bind that port first, so the pod repeatedly failed to bind, causing it to restart.
There are other remedies for this, but why not try rescheduling the pod on another node? There are lots of things that can happen to a single node that might cause a pod to fail: disk space, network connectivity, etc. The same pod would likely work fine on another node in the cluster.
--
The things that move pods off nodes:

1. A user or admin manually deletes the pod.
2. An admin drains a node (which just deletes the pods on it).

In the future there may be:

3. A rescheduler - a component that detects poorly scheduled pods and corrects them by deleting pods.

If this happens in the context of a Deployment, the readiness check and the liveness check are part of what would detect that the pod is bad and start it somewhere else. If you set a readiness check, your "bound to a port" case would result in the pod failing readiness and the Deployment leaving the old pods around (until you resolved the issue).
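As a rough sketch, a pod template with both probes for the "bound to a port" case might look like the fragment below. The container name, image, port number, and health-check path are illustrative assumptions, not details from this thread:

```yaml
# Hypothetical pod template fragment for a Deployment.
# Container name, image, port, and probe paths are assumptions.
spec:
  containers:
  - name: app
    image: example/app:1.0
    ports:
    - containerPort: 8080
    # Readiness: while this fails (e.g. the port could not be bound),
    # the pod is kept out of Service endpoints and a rolling update
    # will not tear down the old, still-ready pods.
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    # Liveness: repeated failures cause the kubelet to restart the
    # container in place - on the same node, not on a new one.
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```

Note the in-place behavior of the liveness probe: a restart does not move the pod, which matches the "1k restarts on the same node" symptom described above.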
On Tue, Aug 30, 2016 at 12:42 PM, Matt Hughes <hughe...@gmail.com> wrote:
Thanks