Hey,
(grab a coffee)
Sure! I'll reply in a different order than you asked, but let
me know if something is not clear.
> You've got a client on a user's workstation that connects to ... a web
> socket hosted by ... which process, in which context?
First, the overall architecture: KubeVirt has a CLI tool
called virtctl. You can use it as is (./virtctl) or as a Krew
plugin. The basic information is here [0].
virtctl can be used for a few features, like connecting to the
remote VM with VNC. It makes it fairly easy, as you only need to
know the VM's name, e.g.: virtctl vnc $vmi_name
(Note that virtctl does not implement a VNC client; instead it
launches one, like remote-viewer or tigervnc, if you have it
in your $PATH.)
The websocket part starts with a simple HTTP GET on the vnc
subresource of the given VM (vmi-alpine-efi):
/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/vmi-alpine-efi/vnc
This ends up being handled by the virt-handler component, which
runs on the node where our VM is running. virt-handler is the one
that connects to QEMU's socket for the given subresource [1] (for
vnc, in that Pod: /var/run/kubevirt-private/<unique-hash>/virt-vnc).
virt-handler is also the one that confirms and proceeds with the
upgrade to websocket.
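To make the handshake concrete, here is a minimal client-side
sketch, assuming gorilla/websocket and placeholder values for the
API server address and the bearer token (this is not virtctl's
actual code):
```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"

	"github.com/gorilla/websocket"
)

func main() {
	// Assumptions for this sketch: API server address, token, and
	// skipping TLS verification (never do that outside a demo).
	url := "wss://my-apiserver:6443/apis/subresources.kubevirt.io/v1alpha3" +
		"/namespaces/default/virtualmachineinstances/vmi-alpine-efi/vnc"
	header := http.Header{}
	header.Set("Authorization", "Bearer "+"<token>")

	dialer := websocket.Dialer{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}

	// Dial sends the HTTP GET with the Upgrade headers; virt-api
	// routes it and virt-handler completes the websocket upgrade.
	conn, resp, err := dialer.Dial(url, header)
	if err != nil {
		fmt.Println("dial failed:", err, resp)
		return
	}
	defer conn.Close()

	// From here on, the connection carries the raw VNC (RFB) stream
	// coming from QEMU's unix socket on the node.
	_, data, err := conn.ReadMessage()
	if err == nil {
		fmt.Printf("read %d bytes of VNC traffic\n", len(data))
	}
}
```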
I'm not 100% sure if there are more components proxying data
around, but I'm positive that the request is routed by virt-api
[2], so in short:
QEMU <-> .../virt-vnc <-> virt-handler <-> virt-api <-> virtctl
<-> remote-viewer
[0]
https://kubevirt.io/user-guide/operations/virtctl_client_tool/
[1]
https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-handler/rest/console.go#L61
[2]
https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-api/api.go#L263
I think this more or less answers your first question. I'm
using VNC here as this is already in the stable branch. The USB
redirection feature is quite similar; instead of launching
remote-viewer we launch the usbredirect binary [3] (which should
be part of the usbredir 0.9.0 [next] release).
[3]
https://gitlab.freedesktop.org/spice/usbredir/-/merge_requests/2
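To make the "launch a local client" part concrete, here is a
minimal sketch of the pattern (not KubeVirt's actual code): listen
on a local port, point the client binary at it, and pump bytes
between that TCP connection and the websocket. The usbredirect
flags below are assumptions for the sketch:
```go
// Package bridge: a sketch of bridging the cluster websocket to a
// locally spawned client binary; not KubeVirt's actual code.
package bridge

import (
	"net"
	"os/exec"

	"github.com/gorilla/websocket"
)

// bridge copies bytes both ways between the websocket (cluster side)
// and the local TCP connection used by the spawned client binary.
func bridge(ws *websocket.Conn, local net.Conn) {
	go func() {
		for {
			_, data, err := ws.ReadMessage() // cluster -> local
			if err != nil {
				local.Close()
				return
			}
			if _, err := local.Write(data); err != nil {
				return
			}
		}
	}()
	buf := make([]byte, 32*1024)
	for {
		n, err := local.Read(buf) // local -> cluster
		if err != nil {
			ws.Close()
			return
		}
		if err := ws.WriteMessage(websocket.BinaryMessage, buf[:n]); err != nil {
			return
		}
	}
}

// LaunchAndBridge starts the local client and wires it up. For VNC
// this would exec remote-viewer instead of usbredirect.
func LaunchAndBridge(ws *websocket.Conn) error {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return err
	}
	defer ln.Close()

	// The exact usbredirect flags are assumptions in this sketch.
	cmd := exec.Command("usbredirect",
		"--device", "04f2:b596", "--to", ln.Addr().String())
	if err := cmd.Start(); err != nil {
		return err
	}

	local, err := ln.Accept()
	if err != nil {
		return err
	}
	bridge(ws, local)
	return cmd.Wait()
}
```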
> (2) How does VM configuration work? In what order do various
> host and client processes need to launch for it to work?
Based on the last iteration of the WIP PR [4], the VMI yaml file
needs to declare the USB slots, and each slot should be associated
with a name (suggested by Fabian [5]). So, under devices, you can
add up to 4 slots like so:
```yaml
spec:
  domain:
    devices:
      clientDevices:
      - usb:
          name: smartcard
      - usb:
          name: webcam
```
Let's say the above is vmi-alpine-efi; you can proceed to start
it as you normally would (k create -f vmi-alpine-efi.yaml).
Config-wise, nothing else is needed.
[4]
https://github.com/kubevirt/kubevirt/pull/4089#
[5]
https://github.com/kubevirt/kubevirt/pull/4089#issuecomment-725363192
> (1) How do you launch the client on the workstation? Is it a
> CLI invocation? What parameters does it need?
First you need to know which USB device, in your client machine,
you want to redirect. lsusb gives the following info for my
webcam:
Bus 001 Device 003: ID 04f2:b596 Chicony Electronics Co., Ltd Integrated Camera
The usbredirect binary can recognize either bus-device (1-3) or
vendor:product (04f2:b596).
Now, you can choose one of the two slots declared earlier, either
"smartcard" or "webcam":
sudo ./cluster-up/virtctl.sh usbredir 04f2:b596 webcam vmi-alpine-efi
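In case it's useful, here is a tiny, purely hypothetical helper
illustrating how those two identifier forms differ (bus-device is
decimal, vendor:product is hex):
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUSBAddress is a hypothetical helper, not usbredirect's code:
// "1-3" is decimal bus-device, "04f2:b596" is hex vendor:product.
func parseUSBAddress(s string) (string, error) {
	switch {
	case strings.Contains(s, ":"):
		parts := strings.SplitN(s, ":", 2)
		vendor, err1 := strconv.ParseUint(parts[0], 16, 16)
		product, err2 := strconv.ParseUint(parts[1], 16, 16)
		if err1 != nil || err2 != nil {
			return "", fmt.Errorf("bad vendor:product %q", s)
		}
		return fmt.Sprintf("vendor=0x%04x product=0x%04x", vendor, product), nil
	case strings.Contains(s, "-"):
		parts := strings.SplitN(s, "-", 2)
		bus, err1 := strconv.ParseUint(parts[0], 10, 8)
		dev, err2 := strconv.ParseUint(parts[1], 10, 8)
		if err1 != nil || err2 != nil {
			return "", fmt.Errorf("bad bus-device %q", s)
		}
		return fmt.Sprintf("bus=%d device=%d", bus, dev), nil
	}
	return "", fmt.Errorf("unrecognized format %q", s)
}

func main() {
	for _, s := range []string{"04f2:b596", "1-3"} {
		out, _ := parseUSBAddress(s)
		fmt.Println(s, "->", out)
	}
}
```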
> (3) On live migration (initiated by whom?)
There are a few use cases for Live Migration: it can be started by
a user/admin, but even the k8s cluster might have to evict pods [6]
to handle a node's resources. With the right configuration, VMIs
might be able to Live Migrate in those scenarios. Some more
information on Live Migration is at [7].
[6]
https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/
[7]
https://kubevirt.io/2020/Live-migration.html
> the server goes away at some point (I assume), and you want to
> have the client be able to pivot to the new destination, yes?
Yeah, I want that ;).
As mentioned earlier, we are connected to virt-handler on the
source node, and eventually it gets disconnected due to Live
Migration. We need to connect to the target node in a way that is
transparent to the feature (fast, without losing data, etc).
The migration itself is proxied [8], but I believe that
virt-handler on the target node can do the job of connecting to
the migrated VM while it is in the Paused state and buffering data
while virtctl reconnects. We can use libvirt hooks [9] if more
precision is needed... A bit of a challenge here and there. Fun,
nonetheless.
[8]
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Live-Migration.md
[9]
https://libvirt.org/hooks.html#qemu_migration
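On virtctl's side, the reconnect could be as simple as re-dialing
the same subresource URL with a bit of backoff. A rough sketch,
assuming the URL ends up routed to the target node's virt-handler
after the migration (the address below is a placeholder):
```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

// dialWithRetry retries the websocket dial with exponential backoff.
// Assumption: after migration, the same subresource URL resolves to
// virt-handler on the target node, so a plain re-dial is enough.
func dialWithRetry(url string, header http.Header) (*websocket.Conn, error) {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 8; attempt++ {
		conn, _, err := websocket.DefaultDialer.Dial(url, header)
		if err == nil {
			return conn, nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n",
			attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("gave up reconnecting to %s", url)
}

func main() {
	url := "wss://my-apiserver:6443/apis/subresources.kubevirt.io/v1alpha3" +
		"/namespaces/default/virtualmachineinstances/vmi-alpine-efi/vnc"
	if conn, err := dialWithRetry(url, nil); err == nil {
		conn.Close()
	}
}
```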
Two other points that I'm postponing to think about later are:
(1) security-wise, being sure that the reconnection is done by
the same user that was connected before;
(2) unlike SPICE, we can have multiple clients connecting to the
VM, e.g. user1 connects to the "webcam" USB slot while user2
connects to "flashdisk".
Thanks for asking, sorry for the long response.
Cheers,
Victor