Kubernetes Deployment with maxSurge as 1 and maxUnavailable as 25%


krma...@gmail.com

Jan 9, 2017, 6:07:55 PM1/9/17
to Kubernetes developer/contributor discussion
I have a Deployment with a replica count of 5 and hostNetwork: true, running on 5 nodes. I am unable to make use of all the nodes: when I update the Deployment, it always brings up a new replica first (thus exceeding the number of nodes) before bringing one down under the 25% maxUnavailable setting. This leads to port conflicts, and because of hostNetwork the new replica never gets scheduled.

Is it guaranteed that the Kubernetes Deployment controller will always apply maxSurge before maxUnavailable when updating a Deployment? Can we assume this?


Are there other ways to solve this? I am thinking I can try setting maxSurge to 0.


-Mayank

Clayton Coleman

Jan 9, 2017, 6:27:25 PM1/9/17
to krma...@gmail.com, Kubernetes developer/contributor discussion
maxSurge: 0 is better when you have physical constraints (machines, labels, etc.). maxUnavailable will protect you from taking downtime.
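For reference, a Deployment strategy along the lines Clayton describes might look like the sketch below. The name, labels, image, and the use of the modern apps/v1 API are illustrative placeholders, not details from this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: host-net-app            # hypothetical name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0               # never create more pods than nodes
      maxUnavailable: 1         # replace at most one pod at a time
  selector:
    matchLabels:
      app: host-net-app
  template:
    metadata:
      labels:
        app: host-net-app
    spec:
      hostNetwork: true         # pod binds its ports directly on the node
      containers:
      - name: app
        image: example/app:latest   # placeholder image
```

With maxSurge: 0, the controller must scale the old ReplicaSet down before it can scale the new one up, so the host ports are freed before a replacement pod tries to bind them.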
--
You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-de...@googlegroups.com.
To post to this group, send email to kuberne...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-dev/3c137910-35ef-417a-a405-1abb8191041d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Michail Kargakis

Jan 9, 2017, 6:34:36 PM1/9/17
to krma...@gmail.com, Kubernetes developer/contributor discussion
The deployment should proceed with replacing the rest of the pods, since you allow 1 unavailable out of 5. Doesn't it? If not, mind opening an issue? The controller loop always tries to scale up first before scaling down, and it cannot know beforehand that the new pod will get stuck :/ . maxSurge set to 0 is better in your case, as Clayton already mentioned. You might also consider switching to DaemonSets (rolling-upgrade support is in progress and expected to land in 1.6).
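A DaemonSet of the kind Michail suggests could be sketched as follows. Again the name, labels, and image are hypothetical, and the manifest uses the modern apps/v1 API with the rolling-update support that landed after this thread:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-net-app            # hypothetical name
spec:
  selector:
    matchLabels:
      app: host-net-app
  template:
    metadata:
      labels:
        app: host-net-app
    spec:
      hostNetwork: true         # exactly one pod per node, so replicas never
                                # compete for the same host ports
      containers:
      - name: app
        image: example/app:latest   # placeholder image
  updateStrategy:
    type: RollingUpdate         # DaemonSet rolling updates shipped in 1.6
    rollingUpdate:
      maxUnavailable: 1         # DaemonSets update by delete-then-create,
                                # so there is no surge to configure
```

Because a DaemonSet schedules one pod per node and updates by deleting the old pod before creating its replacement, the port-conflict problem does not arise.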


krma...@gmail.com

Jan 12, 2017, 12:12:51 AM1/12/17
to Kubernetes developer/contributor discussion, krma...@gmail.com
Thanks Clayton.
Michail, yes: with maxSurge as 1, the first extra pod becomes Pending due to the port conflict. maxUnavailable should then cause one pod to go down, but since one is already Pending, I am not sure whether this could lead to a deadlock. I will try to reproduce this and report back, or open an issue with my findings.
Yes, we are waiting for DaemonSet upgrades.

For maxSurge as 1 and maxUnavailable as 1 out of 5: if scaling up always happens first and the first scaled-up pod becomes Pending, would scaling down still happen, or would it block because of that?
-Mayank



Michail Kargakis

Jan 12, 2017, 3:50:11 AM1/12/17
to krma...@gmail.com, Kubernetes developer/contributor discussion
> For maxSurge as 1 and maxUnavailable as 1 out of 5: if scaling up always happens first and the first scaled-up pod becomes Pending, would scaling down still happen, or would it block because of that?

Scaling down should happen since you allow one unavailable.
