Upgrade experience


Adam Brockett

Jun 8, 2018, 2:42:18 PM
to kubevirt-dev
I know kfox11... mentioned this in passing in an earlier thread, but I was looking to get some feedback from people on their experience upgrading their test clusters.

I've just been re-applying the kubevirt.yaml (rather than deleting the old one and then applying the new one) and letting the Deployments take care of upgrading the containers.  One thing I've noticed is that VM instances stay up through this process, still running on the virt-launcher they were originally started with.  This is quite cool, but has anyone experienced any weirdness as a result of running a newer virt-handler on nodes with older virt-launcher containers?
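A rough sketch of what I mean (the file name and namespace are just whatever your install uses):

    # re-apply the manifest for the new release; the Deployments roll their own pods
    kubectl apply -f kubevirt.yaml
    # watch the virt-* pods cycle while existing VM pods keep running
    kubectl get pods --all-namespaces | grep virt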

In general, has anyone kept a cluster running kubevirt alive over the course of a couple of updates?  Any feedback or gotchas on the experience?

I do assume that renaming the CRDs, as is being discussed, will mean completely deleting kubevirt and re-adding it.
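If so, I'd guess the procedure ends up looking something like this (file names made up):

    # deleting the manifest removes the CRDs, which cascades to every VM object
    kubectl delete -f kubevirt-old.yaml
    # re-add the release that ships the renamed CRDs
    kubectl apply -f kubevirt-new.yaml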

~Adam

Fabian Deutsch

Jun 8, 2018, 6:14:11 PM
to Adam Brockett, kubevirt-dev
Hey Adam,

a good topic :)

So, to be honest, upgrading kubevirt has not been one of our top items, but we need to get to it.
Thus all the updates these days work by ... sheer luck.

In order to do the right thing for updates (i.e. decide how we handle launcher pods: do we keep them, restart them, or live-migrate them to new versions?), we need some freedom to write update logic.

We were eyeing Helm, Ansible, and others ... all only half-baked. Now, operators seem to be an approach that might give us the features we need.

Sorry, nothing else I can say.

Greetings
- fabian




Roman Mohr

Jun 13, 2018, 6:32:54 AM
to kubevirt-dev
In this case, yes, because we renamed the workload itself. Right now we are not using a versioned API between the pods where the VMs run (virt-launcher) and the host daemon (virt-handler). Independent of how we do the cluster-level component updates in the future (e.g. with an operator, like Fabian wrote), on the host you will not have to stop virt-launcher pods for updates once we are stable: virt-handler will then be able to talk to the different virt-launcher instances via a versioned API.
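To make the idea concrete, here is a purely hypothetical sketch in Go (none of these names exist in virt-handler; it only illustrates picking a client by the version a launcher reports):

    package main

    import "fmt"

    // LauncherClient is a version-agnostic view of one virt-launcher instance.
    type LauncherClient interface {
        SyncVirtualMachine(name string) error
    }

    // launcherV1 speaks a hypothetical v1 protocol on the launcher's socket.
    type launcherV1 struct{ socket string }

    func (l launcherV1) SyncVirtualMachine(name string) error {
        fmt.Printf("v1 sync of %s via %s\n", name, l.socket)
        return nil
    }

    // newClient picks an implementation based on the version the launcher
    // reports, instead of assuming handler and launcher were built together.
    func newClient(socket, version string) (LauncherClient, error) {
        switch version {
        case "v1":
            return launcherV1{socket: socket}, nil
        default:
            return nil, fmt.Errorf("unsupported launcher API version %q", version)
        }
    }

    func main() {
        client, err := newClient("/var/run/kubevirt/vm1.sock", "v1")
        if err != nil {
            panic(err)
        }
        _ = client.SyncVirtualMachine("vm1")
    }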

Best Regards,

Roman
 