Supporting 'Launch Security' in Kubevirt


Vasiliy Ulyanov

Mar 11, 2021, 2:13:47 AM
to kubevirt-dev
Hello everyone.

Recently I've been looking into the topic of 'Launch Security' [1] in libvirt and qemu. It is a feature backed by the AMD Secure Encrypted Virtualization (SEV) extension [2]. It allows running encrypted VMs under KVM on hosts supporting SEV. The encryption of the guest RAM is done on the fly by dedicated cryptographic hardware.

AFAIK, SEV extension support has been added to all the components of the KVM stack: libvirt >= 4.5.0 (>= 5.1.0 recommended), QEMU >= 2.12.0, Linux kernel >= 4.16 (for both the host and the guest). I think it might be a useful feature to introduce in KubeVirt as well. Roughly, the steps will include introducing a new VMI spec API (plus conversion to domain XML and validation) and sharing /dev/sev with the virt-launcher pod (similar to the /dev/kvm case, I suppose). Additionally, there are some prerequisites/restrictions which surely need to be checked/validated [3]:
- SEV support on the node (also in the guest but that is probably out of scope)
- Q35 machine type
- OVMF (UEFI)
- locked VM memory to prevent swapping (alt.: use hugepages)
- iommu=on for all virtio devices
- no migration, pause/resume, PCI passthrough
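To make the first step more concrete, the new API could surface as an opt-in field on the VMI spec, roughly along these lines. This is only a sketch: the field names are hypothetical and would be settled during API review, while the firmware, machine-type, and hugepages settings reflect the prerequisites listed above.

```yaml
# Hypothetical VMI snippet enabling SEV (field names illustrative)
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: sev-vm
spec:
  domain:
    launchSecurity:
      sev: {}          # request AMD SEV memory encryption
    firmware:
      bootloader:
        efi: {}        # OVMF (UEFI) is required for SEV
    machine:
      type: q35        # SEV requires the Q35 machine type
    memory:
      hugepages:
        pageSize: 2Mi  # avoid swapping of encrypted guest RAM
```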

There seem to be no major blockers from the implementation perspective. So I would like to get some initial feedback from the community on whether it is something that may fit in Kubevirt. Also any comments about possible issues which I haven't considered or suggestions are welcome. Any thoughts on that?


Thanks,
Vasiliy

fdeu...@redhat.com

Mar 16, 2021, 4:28:04 AM
to kubevirt-dev
Vasily,

The topic seems well suited for KubeVirt. I'm not saying it's easy, but it's a new yet increasingly common virtualization feature to encrypt memory.
Because it's quite a complex feature, it would be appreciated if you could start with an overall design to outline your plans and give potential reviewers better context when reviewing your work in the future.

Can you come up with a design? Would you want to brainstorm it in a community call? Thoughts?

Greetings
- fabian

James Cadden

Mar 22, 2021, 4:57:57 PM
to kubevirt-dev
Hi Vasily,
I have a basic prototype of Launch Security support in KubeVirt pushed to a branch here: https://github.com/jmcadden/kubevirt-sev/tree/sev

To share `/dev/sev` with the virt-launcher pod, I've modified virt-controller to add the `--privileged` flag to the virt-launcher container (which is a clear violation of the trust model). I suspect this can be handled better using a Kubernetes Device Plugin.

Best,
Jim

Vasiliy Ulyanov

Mar 23, 2021, 9:27:49 AM
to James Cadden, kubevirt-dev
Hi Jim,

Great, looks like a good start :) If there is something I can help with, just let me know. I would be glad to contribute to the feature (by any means: patches, reviews, etc.). Regarding --privileged, I think you are right, it can be avoided. Have you already looked at how e.g. /dev/kvm is exposed to virt-launcher? There is a device-manager module [1] which handles device plugins. Likely /dev/sev can be exposed in the same way. Apart from that, there is a need to set up the permissions and ownership of the device file. That can probably be done similarly to [2].
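For illustration, if /dev/sev were exposed through the device-manager the same way as /dev/kvm, the generated virt-launcher pod would simply request it as an extended resource instead of running privileged. The SEV resource name below is hypothetical; only the kvm one exists today.

```yaml
# Hypothetical virt-launcher container resources with a /dev/sev device plugin
resources:
  limits:
    devices.kubevirt.io/kvm: "1"   # already exposed by the device-manager today
    devices.kubevirt.io/sev: "1"   # would be added by a new SEV device plugin
```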


Thanks,
Vasiliy

On Mon, Mar 22, 2021 at 21:57, James Cadden <jcadd...@gmail.com> wrote:

Jed Lejosne

Mar 23, 2021, 10:45:49 AM
to Vasiliy Ulyanov, James Cadden, kubevirt-dev
Just a thought, but if /dev/sev is going to be exposed to every virt-launcher pod through a device plugin, we should ensure it's safe to do so.
We want to make sure that doesn't enable cross-VM access.
For example, we should probably ensure guest policies do not enable debugging, so that KVM_SEV_DBG_DECRYPT is disabled...
In general, sharing sensitive dev nodes with untrusted containers should only be done after careful review of the scope of every supported ioctl.
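One concrete piece of such a review could be a webhook-side check of the requested guest policy bits. The bit layout below follows the AMD SEV API specification; the helper functions themselves are just an illustrative sketch, not KubeVirt code.

```python
# AMD SEV guest policy bits (per the AMD SEV API specification)
SEV_POLICY_NODBG  = 1 << 0  # when set, debugging of the guest is disallowed
SEV_POLICY_NOKS   = 1 << 1  # when set, key sharing with other guests is disallowed
SEV_POLICY_ES     = 1 << 2  # when set, SEV-ES is required
SEV_POLICY_NOSEND = 1 << 3  # when set, the guest cannot be sent to another platform

def policy_permits_debug(policy: int) -> bool:
    """Return True if the policy would allow debug ops such as KVM_SEV_DBG_DECRYPT."""
    return not (policy & SEV_POLICY_NODBG)

def validate_policy(policy: int) -> list:
    """Collect validation errors for policies KubeVirt might want to reject."""
    errors = []
    if policy_permits_debug(policy):
        errors.append("guest policy must set NODBG to disable debug decryption")
    return errors
```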

Jed

Vasiliy Ulyanov

Mar 24, 2021, 9:11:43 AM
to Jed Lejosne, James Cadden, kubevirt-dev
Agreed, valid point. A review of the ioctls is definitely needed. Also, I think it is possible to do some additional validation of the guest policies and prevent setting ones which may potentially be dangerous.

Regarding KVM_SEV_DBG_DECRYPT: is it enough just to have access to the device node /dev/sev in order to access the guest memory? I would assume that the ioctl can be applied only to the specific FD the VM is associated with (and that one is not shared between virt-launcher instances).

Thanks,
Vasiliy

On Tue, Mar 23, 2021 at 15:45, Jed Lejosne <jlej...@redhat.com> wrote:

Nathaniel McCallum

Mar 29, 2021, 12:08:42 PM
to kubevirt-dev
On Tuesday, March 23, 2021 at 10:45:49 AM UTC-4 Jed Lejosne wrote:
Just a thought, but if /dev/sev is going to be exposed to every virt-launcher pod through a device plugin, we should ensure it's safe to do so.
We want to make sure that doesn't enable cross-VM access.
For example, we should probably ensure guest policies do not enable debugging, so that KVM_SEV_DBG_DECRYPT is disabled...
In general, sharing sensitive dev nodes with untrusted containers should only be done after careful review of the scope of every supported ioctl.

As someone directly involved in this effort upstream, let me offer the following:

1. Ensure debugging is disabled before sharing the device node.
2. Ensure that read-only access is given to the device node.

/dev/sev works on permissions. Write access allows you to manage system-wide certificate state. Read access allows you to fetch the certificates and launch a guest.

One other problem remains: certificate chain caching.

In order to complete an attestation, (1) the lower half of the certificate chain must be fetched from the firmware and (2) the upper half must be fetched from the AMD certificate service. The two halves are then distributed to the tenant as part of attestation. (1) is slow and (2) is both slow and strictly rate limited. Therefore, it is likely that you will also want the host to pre-assemble the certificate chain and mount it into the container as read-only. sevctl[0] can manage the certificate caching for you.

[0]: https://github.com/enarx/sevctl
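The caching described above could be as simple as assembling the chain once and serving it from disk afterwards. A minimal sketch follows; the fetcher callables stand in for the firmware query and the rate-limited AMD service call, and the path handling is illustrative.

```python
import os
import time

def get_cert_chain(fetch_lower, fetch_upper, cache_path, ttl=24 * 3600):
    """Assemble the full SEV certificate chain, caching it on disk.

    fetch_lower: callable returning the firmware half of the chain (slow)
    fetch_upper: callable hitting the rate-limited AMD certificate service
    """
    # Serve from cache while it is fresh to avoid hammering the AMD service.
    if os.path.exists(cache_path) and time.time() - os.path.getmtime(cache_path) < ttl:
        with open(cache_path, "rb") as f:
            return f.read()
    chain = fetch_lower() + fetch_upper()
    # Write atomically so a concurrent reader never sees a partial chain.
    tmp = cache_path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(chain)
    os.replace(tmp, cache_path)
    return chain
```

The host would pre-assemble the chain this way (or with sevctl) and mount the resulting file read-only into the virt-launcher container.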

Vasiliy Ulyanov

Jul 30, 2021, 3:17:24 AM
to kubevirt-dev
Hello here!

I would like to get back to this topic of guest memory encryption using AMD SEV. As far as I can see, there has been no activity around it recently. I still think this is quite an interesting and useful feature for KubeVirt. Therefore I've created a pull request [1] that adds basic SEV support. I tested it on my local setup and would now like to get feedback from the community. Let's discuss all the concerns in the context of this PR and address them one by one. I hope that will help move this topic forward. So please take a look :)

BTW, there is a similar technology from Intel called TDX (Trust Domain Extensions). It would probably be interesting to enable it in KubeVirt as well, as soon as it lands in qemu and libvirt.

Thanks.

On Monday, March 29, 2021 at 18:08:42 UTC+2, Nathaniel McCallum wrote:

Vladik Romanovsky

Aug 16, 2021, 9:36:04 AM
to Vasiliy Ulyanov, kubevirt-dev
Hi Vasiliy,

Thank you for this PR.
I've made some comments on the PR. However, what I am mainly missing is how we are going to address attestation.
Does it make sense to enable SEV right now without addressing it? I wonder what the benefit to users would be in this case.
Perhaps you could address the comments made by Nathaniel McCallum on this thread?

My main concern is that we would now need to commit to an API while we don't fully understand how it is going to change in the future.

Thanks,
Vladik

Vasiliy Ulyanov

Aug 27, 2021, 8:35:11 AM
to Vladik Romanovsky, kubevirt-dev
Hi Vladik,

Sorry for being inactive here. I was on leave for some time. I think your concerns are valid. I will probably need to elaborate more on the topic and extend the PR, and will work on that further. Thank you for your feedback and review. I will surely address the comments on GitHub.

Thanks,
Vasiliy


On Mon, Aug 16, 2021 at 15:36, Vladik Romanovsky <vrom...@redhat.com> wrote:

Vasiliy Ulyanov

Oct 14, 2021, 4:00:03 AM
to kubevirt-dev, Vladik Romanovsky
Hey, just a small update here. In addition to the PR [1], I also sketched an initial doc [2] about the topic. Might be helpful for a higher-level discussion.

Any feedback is welcome :)

On Fri, Aug 27, 2021 at 14:34, Vasiliy Ulyanov <vasil...@gmail.com> wrote:

Roman Mohr

Oct 14, 2021, 4:40:31 AM
to Vasiliy Ulyanov, kubevirt-dev, Vladik Romanovsky
On Thu, Oct 14, 2021 at 10:00 AM Vasiliy Ulyanov <vasil...@gmail.com> wrote:
Hey, just a small update here. In addition to the PR [1], I also sketched an initial doc [2] about the topic. Might be helpful for a higher-level discussion.

Any feedback is welcome :)


I went over the proposal and left a few comments. :)

Best regards,
Roman
 

Vasiliy Ulyanov

Mar 3, 2022, 2:08:21 AM
to kubevirt-dev, Vladik Romanovsky, Roman Mohr
Hey, hello here!

With regard to SEV attestation, there is currently an interesting proposal on the libvirt mailing list [1][2]. It aims to automate the attestation process so that KubeVirt and similar apps can run SEV VMs as usual guests, without the need to do all that 'dancing' with certificate fetching, launch blob preparation, secret injection, etc. That will be handled by libvirt by talking to an 'attestation service' that just needs to implement a specific protocol (the current proposal uses REST).

From KubeVirt's perspective, that means it will not need to expose all the libvirt SEV APIs and introduce new virtctl commands as was initially proposed in [3]. Instead, KubeVirt can simply provide an implementation of the attestation service (maybe as part of the virtctl functionality?) and libvirt will talk to it internally. This should simplify the implementation.

IMHO the proposed approach looks like a good way forward. I just wanted to highlight it and bring it to the attention of the community. Does anyone have thoughts or concerns about it?

Thanks.


On Thu, Oct 14, 2021 at 10:40, Roman Mohr <rm...@redhat.com> wrote:

Vasiliy Ulyanov

Apr 7, 2022, 6:15:38 AM
to kubevirt-dev
Hello community,

Recently the PR [1] updating libvirt to 8.0.0 was merged. Apart from many other improvements, the new version introduces the API required to perform pre-attestation of SEV guests. Some time ago I created a WIP PoC adding the relevant bits to KubeVirt (following the initial SEV design proposal [2]). Now I have rebased the changes and the code works fine with the latest master.

I would like to highlight this PR [3] here and bring it to the attention of the community. It introduces new KubeVirt API endpoints specific to SEV pre-attestation. There are currently discussions going on (on the libvirt mailing list and here as well) about a common protocol for an attestation service. The API may potentially serve as a baseline for implementing an attestation controller in KubeVirt. If anyone is interested in the topic, please take a look at the PR. Any feedback and review would be appreciated.
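To give an idea of what pre-attestation involves on the tenant side: after LAUNCH_MEASURE the platform returns an HMAC over the launch parameters, which the tenant recomputes with the transport integrity key (TIK) before releasing any secret to the guest. A simplified sketch of that check follows; the message layout is my reading of the AMD SEV API specification, so treat it as illustrative rather than normative.

```python
import hashlib
import hmac

def verify_launch_measurement(tik, api_major, api_minor, build_id,
                              policy, launch_digest, mnonce, measurement):
    """Recompute the SEV launch measurement and compare it in constant time.

    tik: transport integrity key derived during the session handshake
    launch_digest: SHA-256 digest of the launched firmware (OVMF) image
    mnonce: nonce returned by the platform alongside the measurement
    """
    msg = (bytes([0x04, api_major, api_minor, build_id])   # 0x04 = LAUNCH_MEASURE
           + policy.to_bytes(4, "little")                  # guest policy
           + launch_digest
           + mnonce)
    expected = hmac.new(tik, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, measurement)
```

Only if this check passes would the attestation controller proceed to inject the disk-decryption secret into the paused guest.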

Thanks.



On Thu, Mar 3, 2022 at 08:08, Vasiliy Ulyanov <vasil...@gmail.com> wrote: