Live Migration for virtctl's features


Victor Toso

Mar 4, 2021, 6:40:35 AM3/4/21
to kubevirt-dev
Hi,

I'm interested in implementing support for seamless live migration
in the USB redirection feature (WIP PR here [0]), but I think this
could also be possible or beneficial for other virtctl features
like console and vnc.

This email is a small brain-dump from someone still fairly new to
Kubernetes and KubeVirt's code base. Please feel free to correct me
or recommend something else where you see fit.

The design is a bit tricky, IMHO, because the client does not have
a communication channel to receive, act and reply during the stages
of live migration. The tricky part is how to handle the timing of
switching to the new host/node while being sure that the read/write
buffers in virt-handler stay in sync when reconnecting.

I'm currently thinking of using the websocket's disconnection
error as a way to trigger the client's reconnection, for example a
301 'Moved Permanently' status [1]. At that point, each of virtctl's
features compatible with migration could connect again on the new
node. This is the simplest approach for a POC that I could think of.
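
Just to sketch what I mean on the client side (this is only a
sketch, not KubeVirt code; the backoff and error handling are made
up for illustration):

```go
// Sketch only, not KubeVirt code: keep a feature's stream alive by
// dialing again whenever the websocket drops (e.g. when the source
// node tears it down during live migration).
package client

import (
	"log"
	"time"

	"github.com/gorilla/websocket" // any websocket library would do
)

func streamWithReconnect(url string, handle func([]byte)) {
	for {
		conn, _, err := websocket.DefaultDialer.Dial(url, nil)
		if err != nil {
			// The handshake can fail while the VMI is switching
			// nodes; back off and retry instead of giving up.
			time.Sleep(time.Second)
			continue
		}
		for {
			_, msg, err := conn.ReadMessage()
			if err != nil {
				log.Printf("stream closed (%v), reconnecting", err)
				break
			}
			handle(msg) // hand the bytes to remote-viewer, usbredirect, ...
		}
		conn.Close()
	}
}
```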

Another possibility is to have a new (websocket) connection for
events like migration-start and migration-end. Knowing the events
as soon as they happen, plus connecting to the new node while the
VMI on the new node is still paused, might make the transition
smoother. That's more or less how we do it in SPICE.
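
A rough idea of what such an event channel could carry; the type
and field names below are made up, nothing like this exists in
KubeVirt today:

```go
// Made-up sketch of migration events a client could subscribe to.
package events

type migrationEvent struct {
	Type       string `json:"type"`       // "migration-start" or "migration-end"
	TargetNode string `json:"targetNode"` // node where the VMI will resume
}

// On "migration-start" the client could already dial virt-handler on
// the target node (while the VMI is still paused there) and switch
// the active stream over only when "migration-end" arrives, similar
// to SPICE's seamless migration.
```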

Thanks for reading,
Victor

[0] https://github.com/kubevirt/kubevirt/pull/4089#
[1] https://tools.ietf.org/html/rfc7231#page-56

dvo...@redhat.com

Mar 4, 2021, 11:40:02 AM3/4/21
to kubevirt-dev
On Thursday, March 4, 2021 at 6:40:35 AM UTC-5 Victor Toso wrote:
> [...]
>
> I'm currently thinking of using the websocket's disconnection
> error as a way to trigger the client's reconnection, for example a
> 301 'Moved Permanently' status [1]. At that point, each of virtctl's
> features compatible with migration could connect again on the new
> node. This is the simplest approach for a POC that I could think of.

For the PoC, I'd just cause the connection to fail during the
migration and leave it up to the client using virtctl to figure out
what to do as far as reconnecting. This is a very complex issue, and
it seems to me like something you'd want to tackle as a follow-up if
you can get the USB streaming merged.

Victor Toso

Mar 4, 2021, 12:16:36 PM3/4/21
to dvo...@redhat.com, kubevirt-dev
Hi,
Yes, this is definitely a follow-up; otherwise the original PR
would get too big/complex.



John Snow

Mar 8, 2021, 12:19:20 PM3/8/21
to Victor Toso, dvo...@redhat.com, kubevirt-dev
Can you outline for me (briefly) the way the architecture works?

You've got a client on a user's workstation that connects to ... a web
socket hosted by ... which process, in which context?

Some questions about that:

(1) How do you launch the client on the workstation? Is it a CLI
invocation? What parameters does it need?

(2) How does VM configuration work? In what order do various host and
client processes need to launch for it to work?

(3) On live migration (initiated by whom?), the server goes away at some
point (I assume), and you want to have the client be able to pivot to
the new destination, yes?

Victor Toso

Mar 9, 2021, 11:16:44 AM3/9/21
to John Snow, kubevirt-dev
Hey,

(grab a coffee)
Sure! I'll reply in a different order than you asked, but let me
know if something is not clear.

> You've got a client on a user's workstation that connects to ... a web
> socket hosted by ... which process, in which context?

First, the overall architecture is that KubeVirt has a CLI tool
called virtctl. You can use it as is (./virtctl) or as a krew
plugin. The basic information is here [0].

virtctl can be used for a few features, like connecting to the
remote VM with VNC. It makes this fairly easy, as you only need to
know the VM's name, e.g.: virtctl vnc $vmi_name

(Note that virtctl does not implement a VNC client; instead it
launches one, like remote-viewer or tigervnc, if you have it in
your $PATH.)

The websocket part starts with a simple HTTP GET at the
subresource (vnc) of the given VM (vmi-alpine-efi):

/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/vmi-alpine-efi/vnc

This ends up being handled by the virt-handler component, which
runs on the node where our VM is running. virt-handler is the one
that connects to QEMU's socket for that given subresource [1] (for
vnc, in that Pod: /var/run/kubevirt-private/<unique-hash>/virt-vnc).

virt-handler is also the one to confirm/proceed with the upgrade to
websocket.

I'm not 100% sure if there are more components proxying data
around, but I'm positive that the request is being routed by
virt-api [2], so in short:

QEMU <-> .../virt-vnc <-> virt-handler <-> virt-api <-> virtctl
<-> remote-viewer

[0] https://kubevirt.io/user-guide/operations/virtctl_client_tool/
[1] https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-handler/rest/console.go#L61
[2] https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-api/api.go#L263
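
For completeness, this is roughly what virtctl does under the hood
through KubeVirt's client-go; just a sketch, please double check the
kubecli signatures:

```go
// Rough sketch of opening the vnc subresource through client-go;
// double check the kubecli API before relying on the exact calls.
package main

import (
	"log"
	"os"

	"kubevirt.io/client-go/kubecli"
)

func main() {
	client, err := kubecli.GetKubevirtClientFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	// virt-api routes this to virt-handler on the node running the VMI,
	// which upgrades to a websocket and attaches to QEMU's vnc socket.
	stream, err := client.VirtualMachineInstance("default").VNC("vmi-alpine-efi")
	if err != nil {
		log.Fatal(err)
	}
	// virtctl pipes this stream to a local VNC client; here stdin/stdout.
	if err := stream.Stream(kubecli.StreamOptions{In: os.Stdin, Out: os.Stdout}); err != nil {
		log.Fatal(err)
	}
}
```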

I think this more or less answers your first question. I'm using
VNC here as it is already in the stable branch. The USB redirection
feature is quite similar: instead of launching remote-viewer, we
launch the usbredirect binary [3] (which should be part of the
usbredir 0.9.0 [next] release).

[3] https://gitlab.freedesktop.org/spice/usbredir/-/merge_requests/2

> (2) How does VM configuration work? In what order do various
> host and client processes need to launch for it to work?

Based on the last iteration of the WIP PR [4], the VMI yaml file
needs to declare the USB slots, and each slot should be associated
with a name (suggested by Fabian [5]). So, under devices, you can
add up to 4 slots like so:

```yaml
spec:
  domain:
    devices:
      clientDevices:
        - usb:
            name: smartcard
        - usb:
            name: webcam
```

Let's say the above is vmi-alpine-efi; you can proceed to start it
as you would normally do (k create -f vmi-alpine-efi.yaml).

Config-wise, nothing else is needed.

[4] https://github.com/kubevirt/kubevirt/pull/4089#
[5] https://github.com/kubevirt/kubevirt/pull/4089#issuecomment-725363192


> (1) How do you launch the client on the workstation? Is it a
> CLI invocation? What parameters does it need?

First, you need to know which USB device on your client machine
you want to redirect. lsusb gives the following info for my webcam:

Bus 001 Device 003: ID 04f2:b596 Chicony Electronics Co., Ltd Integrated Camera

The usbredirect binary can recognize either bus-device (1-3) or
vendor:product (04f2:b596).

Now you can choose one of the two slots declared earlier, either
"smartcard" or "webcam":

sudo ./cluster-up/virtctl.sh usbredir 04f2:b596 webcam vmi-alpine-efi

> (3) On live migration (initiated by whom?)

There are a few use cases for live migration. It can be started by
a user/admin, but even the k8s cluster itself might have to evict
pods [6] to manage a node's resources. With the right configuration,
VMIs might be able to live migrate in those scenarios. There is some
more information on live migration in [7].

[6] https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/
[7] https://kubevirt.io/2020/Live-migration.html
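
For example, a user/admin can trigger one by posting a
VirtualMachineInstanceMigration object, roughly like the one shown
in [7] (the apiVersion may differ depending on the KubeVirt
version):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-alpine-efi
```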

> the server goes away at some point (I assume), and you want to
> have the client be able to pivot to the new destination, yes?

Yeah, I want that ;).

As mentioned earlier, we are connected to virt-handler on the
source node, and eventually it gets disconnected due to live
migration. We need to connect to the target node in a way that is
transparent to the feature (fast, without losing data, etc.).

The migration itself is proxied [8], but I believe that
virt-handler on the target node can do the job of connecting to the
migrated VM while it is in the Paused state and buffering data while
virtctl reconnects. We can use libvirt hooks [9] if more precision
is needed... A bit of a challenge here and there. Fun, nonetheless.

[8] https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Live-Migration.md
[9] https://libvirt.org/hooks.html#qemu_migration
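
To make the buffering idea a bit more concrete, this is roughly how
I picture it; nothing like this exists in KubeVirt today:

```go
// Sketch only: virt-handler on the target node attaches to the
// migrated (still paused) VMI's socket, buffers guest->client data,
// and flushes it once virtctl reconnects.
package targetnode

import (
	"bytes"
	"io"
	"net"
	"sync"
)

type bufferedStream struct {
	mu     sync.Mutex
	buf    bytes.Buffer // guest->client data collected before the client is back
	client net.Conn     // nil until virtctl reconnects
}

// Write receives data read from the VMI's socket.
func (b *bufferedStream) Write(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.client == nil {
		return b.buf.Write(p)
	}
	return b.client.Write(p)
}

// attach is called by the (hypothetical) reconnect handler: flush
// what was buffered during the switch-over, then write straight
// through.
func (b *bufferedStream) attach(c net.Conn) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.client = c
	_, err := io.Copy(c, &b.buf)
	return err
}
```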

Two other points that I'm postponing thinking about until later:

(1) security-wise: being sure that the reconnection is being done
    by the same user who was connected before
(2) unlike SPICE, we can have multiple clients connecting to the
    VM, e.g. user1 connects to the "webcam" slot while user2
    connects to the "flashdisk" slot

Thanks for asking; sorry for the long response.

Cheers,
Victor