Kubernetes Upgrade from 1.3.6 to 1.5.2 Without Bringing an HA-Enabled Cluster Down


krma...@gmail.com

Mar 2, 2017, 4:34:44 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com
Hi Kubernetes Users and Dev


I have an HA cluster running 1.3.6: three masters, each running the apiserver, kube-scheduler, controller-manager, and etcd. I want to upgrade to 1.5.2 without bringing the cluster down.

I was thinking of a couple of approaches:

Method 1:
  1. Bring one master down and upgrade it, while maintaining quorum for etcd
  2. The upgraded master comes back up running 1.5.2 while the remaining masters continue to run 1.3.6
  3. In this scenario:
    1. If kubelets running 1.3.6 try to talk to the new 1.5.2 master, will that work?
    2. Is that a supported scenario?
    3. What about kube-proxy?
    4. Will the apiserver running 1.5.2 be able to talk to the kubelets?
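The quorum constraint in step 1 can be sanity-checked with a quick calculation. A minimal sketch -- `quorum_maintained` is a hypothetical helper illustrating etcd's majority rule, not an etcd or Kubernetes API:

```python
# etcd needs a strict majority of members to stay writable.
# With 3 masters, taking 1 down for the upgrade leaves 2 of 3 -- still quorate.

def quorum_maintained(members: int, down: int) -> bool:
    """Return True if the remaining etcd members still form a majority."""
    remaining = members - down
    return remaining > members // 2

print(quorum_maintained(3, 1))  # True: one master down during upgrade is fine
print(quorum_maintained(3, 2))  # False: two down at once would lose quorum
```

This is why the rolling approach must upgrade masters one at a time: with three members, losing a second one drops below the majority threshold.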
Method 2:

The other way would be to partition the cluster and move some nodes (minions) and a single master to a new 1.5.2 cluster. That is not preferable, IMO.

Are there other ways people can think of to make this work?

-Mayank



Filip Grzadkowski

Mar 3, 2017, 3:24:47 AM
to krma...@gmail.com, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com
On Thu, Mar 2, 2017 at 10:34 PM, <krma...@gmail.com> wrote:
Method 1:
  1. Bring one master down and upgrade it, while maintaining quorum for etcd
  2. The upgraded master comes back up running 1.5.2 while the remaining masters continue to run 1.3.6
  3. In this scenario:
    1. If kubelets running 1.3.6 try to talk to the new 1.5.2 master, will that work?
Yes, we support up to two minor versions of skew (e.g. a 1.5 master can work with 1.3+ kubelets, and a 1.6 master can work with 1.4+ kubelets).
    2. Is that a supported scenario?
Yes.
    3. What about kube-proxy?
It will work.
    4. Will the apiserver running 1.5.2 be able to talk to the kubelets?
Yes, though the communication is always initiated by the kubelet.
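The "up to two minor versions" rule above can be written down as a small check. A sketch only -- `skew_supported` is a hypothetical helper mirroring the rule as stated, not an official Kubernetes API:

```python
def minor(version: str) -> tuple:
    """Parse (major, minor) out of a version string like '1.5.2'."""
    parts = version.lstrip("v").split(".")
    return int(parts[0]), int(parts[1])

def skew_supported(master: str, kubelet: str, max_skew: int = 2) -> bool:
    """Kubelets may lag the master by up to `max_skew` minor versions,
    but must not be newer than the master."""
    m_maj, m_min = minor(master)
    k_maj, k_min = minor(kubelet)
    if m_maj != k_maj:
        return False
    return 0 <= m_min - k_min <= max_skew

print(skew_supported("1.5.2", "1.3.6"))  # True: 1.5 master, 1.3 kubelet
print(skew_supported("1.6.0", "1.3.6"))  # False: kubelet three minors behind
```

So the mixed state in Method 1 -- a 1.5.2 master serving 1.3.6 kubelets -- sits exactly at the edge of the supported window.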
 



--
You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-dev+unsubscribe@googlegroups.com.
To post to this group, send email to kubernetes-dev@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-dev/6e55f2a7-34d4-40b1-8963-ef8b19eb0b31%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

krma...@gmail.com

Mar 3, 2017, 3:49:53 AM
to Kubernetes developer/contributor discussion, krma...@gmail.com, kubernet...@googlegroups.com
Thanks, Filip, for answering those. It's great information.
-- Is this documented somewhere?
-- Are there test results published somewhere for each release?

There might be some other considerations as well: for example, flags in 1.3.6 that have become defaults in 1.5.2, which could cause the upgrade not to go as expected. Is this documented somewhere, so that when we do the upgrade we know which flags to enable in 1.5.2 for everything to work seamlessly?
What about protobuf -- has its default behavior changed?
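One practical way to surface flag-default changes like this is to diff the flags exposed by the old and new binaries' `--help` output. A sketch -- the excerpts below are fabricated for illustration, not actual kube-apiserver help text:

```python
import re

def flag_names(help_text: str) -> set:
    """Extract long flag names (--foo-bar) from a binary's --help output."""
    return set(re.findall(r"--[a-z][a-z0-9-]*", help_text))

# Fabricated excerpts standing in for `kube-apiserver --help` on each version.
help_136 = """
  --admission-control=AlwaysAdmit
  --runtime-config=
  --basic-auth-file=
"""
help_152 = """
  --admission-control=AlwaysAdmit
  --runtime-config=
  --anonymous-auth=true
"""

removed = flag_names(help_136) - flag_names(help_152)
added = flag_names(help_152) - flag_names(help_136)
print(sorted(removed))  # prints ['--basic-auth-file']: flags to migrate off first
print(sorted(added))    # prints ['--anonymous-auth']: new flags whose defaults deserve a look
```

Running the real binaries through a diff like this, alongside the release notes, would catch flags that disappeared or changed defaults between 1.3.6 and 1.5.2.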

Sorry about all the questions :-)
Thanks for your help
Mayank


krma...@gmail.com

Mar 6, 2017, 4:32:47 PM
to Kubernetes developer/contributor discussion, krma...@gmail.com, kubernet...@googlegroups.com
Ping?
Any pointers to where this is documented would be useful to show the team, or if it doesn't exist I can open an issue.

-Mayank

Filip Grzadkowski

Mar 7, 2017, 8:46:15 AM
to krma...@gmail.com, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com
* I don't remember if version-skew support is documented somewhere; it probably is...
* Every release includes release notes that list all the changes a cluster admin has to perform before or during the upgrade to the new version.

--
Filip
