How to force Kubernetes to update a deployment with a pod on every node

mder...@gmail.com

Dec 4, 2017, 6:09:23 AM
to Kubernetes user discussion and Q&A
Hi all!

I would like to know if there is a way to force Kubernetes, during a deployment, to use every node in the cluster.
The question comes from some attempts I have made, where I noticed a situation like this:

- a cluster of 3 nodes
- I update a deployment with a command like: kubectl set image deployment/deployment_name container_name=my_repo:v2.1.2
- Kubernetes updates the cluster

At the end I execute kubectl get pod and I notice that 2 pods have been deployed on the same node.
So after the update, the cluster has this configuration:

- one node with 2 pods
- one node with 1 pod
- one node without any pod (totally without any workload)


Thanks for any suggestion

Itamar O

Dec 4, 2017, 6:24:36 AM
to kubernet...@googlegroups.com

I think that if you create a service that matches the pods of the deployment, K8s will attempt to spread out the pods by default.
Also, check out "pod anti-affinity" (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature).
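
A rough sketch of what the anti-affinity could look like under the deployment's pod template spec (untested; the "app: webpod" label and the topology key are just placeholders for whatever labels your pods actually have):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - webpod
      topologyKey: "kubernetes.io/hostname"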



mder...@gmail.com

Dec 4, 2017, 6:41:31 AM
to Kubernetes user discussion and Q&A
I'm reading the documentation and it's just what I was looking for.
Many thanks!
But is there a way to create a single deployment YAML file to ensure that each pod will be deployed on a different node?
So a single file to be applied, and not 2 different YAML files as in the example.

Itamar O

Dec 4, 2017, 7:09:16 AM
to kubernet...@googlegroups.com
Not sure where you have 2 YAMLs (since you specify the affinity under the deployment template spec), but if you end up needing multiple YAMLs, you can always concatenate them into a single file, separating them with "---".
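
Something like this in one file (abridged; the names are just placeholders):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: first-deployment
spec:
  ....
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: second-deployment
spec:
  ....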

mder...@gmail.com

Dec 4, 2017, 8:18:47 AM
to Kubernetes user discussion and Q&A
At https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity there is an example (perfect for my case) where 2 YAML files are used: one for redis-cache and the other for web-store.
Anyway I'll try to concatenate them.
Thanks

mder...@gmail.com

Dec 4, 2017, 8:57:48 AM
to Kubernetes user discussion and Q&A
Sorry but now I'm facing another problem :-(
The deployment with the podAntiAffinity/podAffinity options is working, but when I try to update the deployment with the command:

kubectl set image deployment/apache-deployment apache-container=xxxxxx:v2.1.2

then I get this error:

apache-deployment-7c774d67f5-l69lb No nodes are available that match all of the predicates: MatchInterPodAffinity (3).


I don't know how to fix it. Maybe the podAntiAffinity option needs another kind of setting to update a deployment?

Itamar O

Dec 4, 2017, 10:28:37 AM
to kubernet...@googlegroups.com
I'm guessing you have as many replicas as you have nodes, and you used the "required" affinity policy over the "preferred" one.
If this is the case, then when you try to update the deployment (with the default upgrade strategy), the controller tries to schedule a *4th pod* (with the new image) before taking down any of the 3 running pods, and fails to do so because the anti-affinity policy would be violated.
Try using "preferred" instead of "required".

Tim Hockin

Dec 4, 2017, 11:02:11 AM
to Kubernetes user discussion and Q&A
Would you prefer a DaemonSet instead?


mder...@gmail.com

Dec 4, 2017, 11:30:15 AM
to Kubernetes user discussion and Q&A
I tried some solutions, and one that is working at the moment is simply based on changing my deployment.yaml every time.
I mean:

1) I deploy my application for the first time. It is based on a pod with some containers, and these pods should be deployed on every cluster node (I have 3 nodes). I did the deployment by setting the replicas option to 3 in the YAML file:

apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: webpod
spec:
  replicas: 3
  ....

The image version used for one of my containers is 2.1, for example.


2) I execute the deployment with the command: kubectl apply -f my-deployment.yaml


3) I get one pod for every node without problem


4) Now I want to update the deployment, changing the version of the image that I use for one of my containers. So I simply edit the YAML file, changing 2.1 to 2.2. Then I re-launch the command: kubectl apply -f my-deployment.yaml


5) Again, I obtain one pod for every node without problem


The behavior is very different if instead I use the command:
kubectl set image deployment/my-deployment my-container=xxxx:v2.2

In this case I end up with one node with 2 pods, one node with 1 pod, and the last node without any pod...

Anyway, considering that changing the YAML file every time I have to deploy a new version is an acceptable constraint, is this a reliable solution, or can it have negative repercussions on system stability?



Rodrigo Campos

Dec 4, 2017, 11:34:17 AM
to kubernet...@googlegroups.com
The scheduler makes the decision trying to spread the pods across nodes, as you say. But that is just a "signal"; other things are taken into account too (the pods' availability zone, in the case of AWS for example, to spread across AZs as well; the nodes' resources; etc.).

So, the default will try to do that, taking into account other variables too. But it is not a hard requirement to not have 2 pods on a single node, so it can (and will) happen.

You can force that requirement in several ways, using the hostPort option for example. This is, I think, the simplest. You can also use other functionality the default scheduler provides (like the pod affinity/anti-affinity that has already been mentioned), and you can even write your own scheduler for that deployment.
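
Just to sketch the hostPort idea (the port number, container name and image are only placeholders): since a given hostPort can be bound only once per node, two replicas using the same hostPort can never land on the same node.

spec:
  template:
    spec:
      containers:
      - name: apache-container
        image: xxxxxx:v2.1.2
        ports:
        - containerPort: 80
          hostPort: 80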

But scheduling is hard, and unless you have a hard requirement that this can never happen, I think you probably just want the default.

The default is quite reasonable; with more resources it's more likely that what you want will happen (or by tuning the deployment options to first kill the old pod and then create the new one). But the default, in my experience, works just fine.

Also, take into account that if you add the hard requirement, some unpleasant side effects might happen. For example, if two pods can never run on the same node, then if some node crashes you had better still have enough nodes to run all the pods on different nodes, or some pods won't be scheduled. This, of course, is not a problem if you really never want them to run on the same node.

mder...@gmail.com

Dec 4, 2017, 11:45:16 AM
to Kubernetes user discussion and Q&A
So re-applying the deployment.yaml is an acceptable solution, considering that my only requirement is to have one pod per node?

Unfortunately I have a very close due date, so I would like to find the fastest, simplest, and reasonably stable solution to do a code upgrade :)

Itamar O

Dec 4, 2017, 12:31:26 PM
to kubernet...@googlegroups.com

If your only requirement is to have one pod per node, then I think the best solution, as Tim suggested, is a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).
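
A minimal sketch of what it could look like, reusing the names from your deployment as placeholders (untested):

apiVersion: apps/v1beta2 # on 1.8; older clusters used extensions/v1beta1
kind: DaemonSet
metadata:
  name: my-daemonset
  labels:
    app: webpod
spec:
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      containers:
      - name: apache-container
        image: xxxxxx:v2.1.2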

And yes, it's perfectly reasonable to edit and reapply YAMLs.





mder...@gmail.com

Dec 4, 2017, 5:28:22 PM
to Kubernetes user discussion and Q&A
thank you all for the support ;-)

Rodrigo Campos

Dec 4, 2017, 6:09:44 PM
to kubernet...@googlegroups.com
Is it working as you need? :)



mder...@gmail.com

Dec 5, 2017, 3:51:13 AM
to Kubernetes user discussion and Q&A
As I said before, using the command "kubectl apply -f my-deployment.yaml" multiple times (changing the image version inside the YAML from time to time), I noticed that Kubernetes never deploys 2 pods on the same node.
I tested this behavior many times, so yes, it's working as I need :)
If I had problems, I would use the DaemonSet (as an emergency plan), as you advised.

Rodrigo Campos

Dec 5, 2017, 8:55:16 AM
to kubernet...@googlegroups.com
Cool. Take into account that a DaemonSet is created to guarantee having exactly one pod per node. For example, if you had more nodes, more pods would be added for the DaemonSet. And the same if some nodes crash or you scale down.

If that fits what you want better (sorry, I didn't understand before), then don't hesitate to use it. It should be really similar to a deployment (the pod spec is the same, etc.).

mder...@gmail.com

Dec 7, 2017, 5:55:33 AM
to Kubernetes user discussion and Q&A
Today during a deploy I got a node with 2 pods -.-
I can confirm that the best solution to make sure you have only one pod per node is using the DaemonSet.
Unfortunately, the approach of reapplying the deployment YAML does not guarantee that after the deployment each node has only a single pod.
Anyway, now everything is working properly.
Bye ;-)

Rodrigo Campos

Dec 7, 2017, 9:34:42 AM
to kubernet...@googlegroups.com
Oh, I thought you wanted the pods on different hosts, but not necessarily as many pods as hosts. If that is what you want, a DaemonSet guarantees it (even if more nodes are created later, etc.).

And what Kubernetes version are you using? There is some kind of support for upgrading DaemonSets in recent versions IIRC (I haven't used it more than once, I think). Maybe a newer version will solve everything :)
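
If it helps, what I was thinking of is the DaemonSet update strategy; roughly something like this (just a sketch, I haven't double-checked it on your exact version, but apps/v1beta2 on 1.8 should support it):

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  ....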

mder...@gmail.com

Dec 7, 2017, 12:05:34 PM
to Kubernetes user discussion and Q&A
With the DaemonSet, everything is now working properly.
Anyway, I'm using Kubernetes 1.8.1-gke.1.