I would like to know if there is a way to force Kubernetes, during a deploy, to use every node in the cluster.
The question comes from some tests I have done, where I noticed a situation like this:
- a cluster of 3 nodes
- I update a deployment with a command like: kubectl set image deployment/deployment_name my-container=my_repo:v2.1.2
- Kubernetes updates the cluster
At the end I run kubectl get pods and notice that 2 pods have been deployed on the same node.
So after the update, the cluster has this configuration:
- one node with 2 pods
- one node with 1 pod
- one node without any pods (no workload at all)
Thanks for any suggestion
I think that if you create a service that matches the pods of the deployment, K8s will attempt to spread out the pods by default.
Also, check out "pod anti-affinity" (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature).
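A minimal sketch of what that could look like, assuming a hypothetical app: webpod label and placeholder names (adjust to your actual manifest):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never place two pods carrying this label
          # on the same node. "Preferred" (soft) is the alternative.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-container  # placeholder name/image
        image: my_repo:v2.1.2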
I'm guessing you have as many replicas as you have nodes, and that you used the "required" affinity policy rather than the "preferred" one. If that is the case, then when you try to update the deployment (with the default upgrade strategy), the controller tries to schedule a *4th pod* (with the new image) before taking down any of the 3 running pods, and fails to do so because the anti-affinity policy would be violated. Try using "preferred" instead of "required".
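For example, the soft version of the rule, sketched with the same hypothetical app: webpod label as above (adjust the selector to your real labels), goes inside the pod template at spec.template.spec:

affinity:
  podAntiAffinity:
    # Soft rule: the scheduler prefers to spread these pods across
    # nodes but may co-locate them if no other node fits, so the
    # extra surge pod created during a rolling update can still start.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - webpod
        topologyKey: kubernetes.io/hostname

Alternatively, if you want to keep the "required" rule, you can stop the rolling update from creating a surge pod at all, so an old pod is removed before its replacement is scheduled and the hard anti-affinity rule is never violated:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 1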
On Mon, Dec 4, 2017 at 3:57 PM <mder...@gmail.com> wrote:
Sorry but now I'm facing another problem :-(
The deployment with the podAntiAffinity/podAffinity options is working, but when I try to update the deployment with the command:
kubectl set image deployment/apache-deployment apache-container=xxxxxx:v2.1.2
then I get this error:
apache-deployment-7c774d67f5-l69lb No nodes are available that match all of the predicates: MatchInterPodAffinity (3).
I don't know how to fix it. Maybe the podAntiAffinity option needs another kind of setting to update a deployment?
1) I have to deploy my application for the first time; it is based on a pod with several containers. These pods should be deployed on every cluster node (I have 3 nodes). I did the deployment by setting the replicas option to 3 in the YAML file:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: webpod
spec:
  replicas: 3
  ....
The image version used for one of my containers is, for example, 2.1.
2) I execute the deployment with the command: kubectl apply -f my-deployment.yaml
3) I get one pod on every node, without problems.
4) Now I want to update the deployment, changing the version of the image that I use for one of my containers. So I simply edit the YAML file, replacing 2.1 with 2.2, and re-run the command: kubectl apply -f my-deployment.yaml
5) Again, I get one pod on every node without problems.
The behavior is very different if I instead use the command:
kubectl set image deployment/my-deployment my-container=xxxx:v2.2
In this case I end up with one node running 2 pods, one node running 1 pod, and the last node without any pod...
Anyway, considering that editing the YAML file every time I deploy a new version is an acceptable constraint, is this a reliable solution, or can it have negative repercussions on system stability?
If your only requirement is to have one pod per node, then I think the best solution, as Tim suggested, is a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).
And yes, it's perfectly reasonable to edit and re-apply YAMLs.
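For what it's worth, a minimal DaemonSet sketch (names, labels, and image are placeholders based on your earlier messages, not a tested manifest):

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      containers:
      - name: my-container  # placeholder name/image
        image: my_repo:v2.2

Note there is no replicas field: the controller runs exactly one pod on every schedulable node, and with the apps/v1beta2 API the default update strategy is RollingUpdate, so kubectl set image (or a re-applied YAML) rolls the new image out node by node.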
So re-applying the deployment.yaml is an acceptable solution, considering that my only requirement is to have one pod per node?
Unfortunately I have a very tight deadline, so I would like to find the fastest, simplest, and reasonably stable way to do a code upgrade :)
thank you all for the support ;-)