Discussion: Custom Libvirt and QEMU RPMs

Aidan Wallace

Mar 30, 2026, 4:11:49 PM
to kubevi...@googlegroups.com
Hi all,

I currently have a PR up for an issue that has sparked some debate, and would like to get a consensus from the community on a path forward. My goal is to streamline the process of building qemu and/or libvirt from source (to RPMs) and consume those in a local kubevirt build. I think this is useful to developers debugging tricky issues and testers performing pre-integration testing. 

Additionally, I think this aligns with the sig-buildsystem charter (in other words, contributors shouldn't have the feeling that they are wasting their time). As a new contributor who is an expert in neither RPM packaging nor Bazel, it was non-trivial to get the built RPMs into kubevirt the first time, which I think would be a shared experience for any newcomer looking to experiment with how kubevirt interacts with the lower portions of the virtualization stack.

My general approach (solution 1) to the problem was to:
  • Create a Docker container with qemu & libvirt build dependencies that aligns with the kubevirt builder version
  • Script the standard RPM build instructions for each project (libvirt includes build commands to produce RPMs; qemu does not, and its packaging is maintained by the CentOS qemu-kvm project)
  • Document how to change the version numbers for each project as a low-effort check to ensure your custom RPMs are present in the final build
  • Automatically discover built version numbers to force inclusion in the kubevirt build
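To make the shape of this concrete, the steps above could be sketched roughly as follows. This is a hypothetical outline only: the container image name, source paths, and release-tag convention are placeholders of mine, not the actual contents of the PR.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the solution 1 workflow. The image name, paths,
# and ".custom" release tag are illustrative placeholders, not the PR.
set -euo pipefail

LIBVIRT_SRC="$HOME/src/libvirt"   # assumed local checkout
RPM_OUT="$HOME/rpmbuild/RPMS"
mkdir -p "$RPM_OUT"

# 1. Run the build inside a container whose dependencies match the
#    kubevirt builder image (placeholder image name).
podman run --rm \
  -v "$LIBVIRT_SRC:/src:Z" -v "$RPM_OUT:/out:Z" \
  quay.io/example/virt-rpm-builder:latest \
  bash -c '
    cd /src
    # libvirt ships its own dist/RPM tooling; qemu has no equivalent and
    # would need the CentOS qemu-kvm spec instead.
    meson setup build && ninja -C build dist
    rpmbuild -ta build/meson-dist/libvirt-*.tar.xz
    cp -r ~/rpmbuild/RPMS/* /out/
  '

# 2. Give the packages a recognizable release tag (e.g. ".custom") so a
#    quick "rpm -q libvirt" inside the final image confirms they landed.

# 3. Feed the discovered versions into the kubevirt build.
ls "$RPM_OUT"
```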
The main question is: is this too entangled in the details of other projects to be introduced into kubevirt? An alternative solution (solution 2) was suggested: allow kubevirt builds to pick up locally available RPM packages by version number. It would then be up to the user how to source these packages (downloading a prerelease, building from source, etc.). This reduces the complexity in kubevirt, but adds friction to the user experience.

I can also totally understand if this is not a common enough use case to warrant inclusion, which would bias towards the simpler solution 2. If not many people are "wasting their time" setting up local RPM builds for qemu/libvirt from scratch, it may not be important to include.

Looking for any comments on use cases, or design input.

Thanks,
--
Aidan Wallace

Jed Lejosne

Mar 30, 2026, 5:21:33 PM
to kubevirt-dev
Hi Aidan,

I understand the need for using custom libvirt/QEMU builds, and I appreciate you bringing this to the list. I am not familiar with the PR you're referring to, could you please add a link to it?
Also, solution 2 sounds a lot like https://github.com/kubevirt/kubevirt/pull/6673, are you familiar with it?

That said, I don't think solution 1 aligns well with KubeVirt's architecture. KubeVirt doesn't build its own Linux distribution from source and instead relies on an existing one. The upside is that we get compatibility, stability, and updates for free. The downside is the reduced flexibility you're pointing to. Solution 1 would create a hybrid where most RPMs come from the distro but some are built by us. Even with just two source-built packages, this could send us into dependency hell and reduce the support we get from upstream projects. It also sets a precedent that could be hard to contain over time.

Solution 2, on the other hand, seems like a great fit. Letting users supply their own RPMs by version keeps the boundary clean while still solving the problem. You could probably reduce the friction significantly by developing a small set of helper scripts (or documentation) that walk through sourcing and injecting custom packages. That way the workflow stays approachable for newcomers without pulling build logic for external projects into KubeVirt itself.

Thanks,
Jed

Harshit Gupta

Mar 30, 2026, 5:46:08 PM
to kubevirt-dev
Hi Aidan and Jed,

Thanks for raising this topic. I am also in the process of building custom QEMU RPMs with MSHV accelerator support because the ones in the CentOS Stream 9 repo don't have it.
I am following Solution 2, and using the Custom RPMs Setup infrastructure (https://github.com/kubevirt/kubevirt/pull/6673) to build KubeVirt with custom RPMs.

I can 100% relate to the fact that the RPM packaging process took a lot of effort to get right. However, I was able to reuse the bulk of the QEMU RPM spec from CentOS's qemu-kvm project. So I agree with Jed that if we document the steps to build RPMs from source, one can use that together with the documentation in https://github.com/kubevirt/kubevirt/blob/main/docs/custom-rpms.md to build KubeVirt images with the desired version of different packages.

I would be happy to share the scripts/automation code I have written for building the custom QEMU RPM. I think this is a good step toward helping new users and contributors to KubeVirt debug and test more effectively.

Yours sincerely,
Harshit Gupta

Lee Yarwood

Mar 31, 2026, 8:30:09 AM
to Harshit Gupta, Aidan Wallace, kubevirt-dev
Thanks Harshit and Aidan,

Comments in line below.

On Mon, 30 Mar 2026 at 22:46, Harshit Gupta <harshitg...@gmail.com> wrote:
>
> Hi Aidan and Jed,
>
> Thanks for raising this topic. I am also in the process of building custom QEMU RPMs with MSHV accelerator support because the ones in the CentOS Stream 9 repo don't have it.
> I am following Solution 2, and using the Custom RPMs Setup infrastructure (https://github.com/kubevirt/kubevirt/pull/6673) to build KubeVirt with custom RPMs.
>
> I can 100% relate to the fact that the RPM packaging process took a lot of effort to get right. However, I was able to reuse the bulk of the QEMU RPM spec from CentOS's qemu-kvm project. So I agree with Jed that if we document the steps to build RPMs from source, one can use that together with the documentation in https://github.com/kubevirt/kubevirt/blob/main/docs/custom-rpms.md to build KubeVirt images with the desired version of different packages.
>
> I would be happy to share the scripts/automation code I have written for building the custom QEMU RPM. I think this is a good step toward helping new users and contributors to KubeVirt debug and test more effectively.

Harshit, I believe virt-preview [1] should provide new enough versions
for testing MSHV in your case. I've had issues consuming packages from
this repo with the currently documented custom rpms flow because
virt-preview uses Fedora epochs lower than CentOS Stream packages.
I've posted the following generated hack script that can pull these in
manually for test builds:

https://github.com/kubevirt/kubevirt/pull/17362

Can you review it and let me know if that addresses your use case?

[1] https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/
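For context on why the epoch mismatch bites: RPM compares the epoch field before the version, so a package with a higher epoch always wins even if its version is older. One quick way to see this locally (the epoch values here are made up for illustration, not the real CentOS Stream or virt-preview epochs):

```shell
# rpmdev-vercmp (from the rpmdevtools package) compares two EVR strings
# the same way rpm/dnf do: epoch first, then version, then release.
# Epoch values below are illustrative, not the actual package epochs.
rpmdev-vercmp 17:9.0.0-1.el9 0:10.2.1-1.fc42
# The first EVR outranks the second: its epoch (17) beats the newer
# upstream version carrying epoch 0, which is why a naive repo-priority
# setup won't pull in the virt-preview build on its own.
```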

> On Monday, March 30, 2026 at 5:21:33 PM UTC-4 Jed Lejosne wrote:
>
> Hi Aidan,
>
> I understand the need for using custom libvirt/QEMU builds, and I appreciate you bringing this to the list. I am not familiar with the PR you're referring to, could you please add a link to it?
> Also, solution 2 sounds a lot like https://github.com/kubevirt/kubevirt/pull/6673, are you familiar with it?
>
> That said, I don't think solution 1 aligns well with KubeVirt's architecture. KubeVirt doesn't build its own Linux distribution from source and instead relies on an existing one. The upside is that we get compatibility, stability, and updates for free. The downside is the reduced flexibility you're pointing to. Solution 1 would create a hybrid where most RPMs come from the distro but some are built by us. Even with just two source-built packages, this could send us into dependency hell and reduce the support we get from upstream projects. It also sets a precedent that could be hard to contain over time.
>
> Solution 2, on the other hand, seems like a great fit. Letting users supply their own RPMs by version keeps the boundary clean while still solving the problem. You could probably reduce the friction significantly by developing a small set of helper scripts (or documentation) that walk through sourcing and injecting custom packages. That way the workflow stays approachable for newcomers without pulling build logic for external projects into KubeVirt itself.
>
> Thanks,
> Jed
>
> On Monday, March 30, 2026 at 4:11:49 PM UTC-4 Aidan Wallace wrote:
>
> Hi all,
>
> I currently have a PR up for an issue that has sparked some debate, and would like to get a consensus from the community on a path forward. My goal is to streamline the process of building qemu and/or libvirt from source (to RPMs) and consume those in a local kubevirt build. I think this is useful to developers debugging tricky issues and testers performing pre-integration testing.
>
> Additionally, I think this aligns with the sig-buildsystem charter (in other words, contributors shouldn't have the feeling that they are wasting their time). As a new contributor who is an expert in neither RPM packaging nor Bazel, it was non-trivial to get the built RPMs into kubevirt the first time, which I think would be a shared experience for any newcomer looking to experiment with how kubevirt interacts with the lower portions of the virtualization stack.

As above would an easier way to pull in virt-preview packages help you
at all here?

> My general approach (solution 1) to the problem was to:
>
> Create a Docker container with qemu & libvirt build dependencies that aligns with the kubevirt builder version
> Script the standard RPM build instructions for each project (libvirt includes build commands to produce RPMs; qemu does not, and its packaging is maintained by the CentOS qemu-kvm project)

As I've said elsewhere I do not think we should provide the above.
QEMU and libvirt are ultimately responsible for this IMHO.

> Document how to change the version numbers for each project as a low-effort check to ensure your custom RPMs are present in the final build
> Automatically discover built version numbers to force inclusion in the kubevirt build

As I've suggested above with my virt-preview suggestion I think we can
and should improve this.

> The main question is: is this too entangled in the details of other projects to be introduced into kubevirt? An alternative solution (solution 2) was suggested: allow kubevirt builds to pick up locally available RPM packages by version number. It would then be up to the user how to source these packages (downloading a prerelease, building from source, etc.). This reduces the complexity in kubevirt, but adds friction to the user experience.

I think you mean developer experience, and even then, only a limited
subset of developers want to test custom or unreleased code changes in
QEMU and libvirt that we can't get from virt-preview or other means.

Harshit Gupta

Mar 31, 2026, 9:32:41 AM
to kubevirt-dev
Thanks Lee. Although using the virt-preview repo is the right approach for installing more recent versions of the QEMU and libvirt packages, I don't think it is presently sufficient for the MSHV use case, for two reasons:
  1. If the virt-preview QEMU RPM is built using the same spec as the RPM in CentOS Stream 9, then it is missing the "--enable-mshv" option. To use virt-preview, we would need to publish a new release of the RPM with MSHV enabled.
  2. QEMU v10.2.1 is missing a key commit for enabling VM creation on the MSHV backend. That commit is merged in the master branch though. This also necessitates publishing a new release with the missing commit.
So at the moment, we need to build the QEMU RPM from source with MSHV enabled and all the needed commits included. The RPM in the virt-preview repo isn't sufficient presently.
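For anyone chasing the same use case, the from-source step described here might look roughly like the sketch below. The target list and prefix are arbitrary choices of mine, not a vetted recipe; packaging the result as an RPM would additionally need the CentOS qemu-kvm spec adapted to pass the extra configure flag.

```shell
# Rough sketch: build QEMU from the master branch with the MSHV
# accelerator enabled. Target list and install prefix are arbitrary
# illustrative choices.
git clone https://gitlab.com/qemu-project/qemu.git
cd qemu
./configure --target-list=x86_64-softmmu --enable-mshv --prefix=/usr
make -j"$(nproc)"
```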

Aidan Wallace

Mar 31, 2026, 9:51:50 AM
to Harshit Gupta, kubevirt-dev
Hi,

Sorry for not originally including the PR link, I think that may have added some confusion to what I was suggesting: https://github.com/kubevirt/kubevirt/pull/16893

On Tue, Mar 31, 2026 at 8:33 AM Harshit Gupta <harshitg...@gmail.com> wrote:
Thanks Lee. Although using the virt-preview repo is the right approach for installing more recent versions of the QEMU and libvirt packages, I don't think it is presently sufficient for the MSHV use case, for two reasons:
  1. If the virt-preview QEMU RPM is built using the same spec as the RPM in CentOS Stream 9, then it is missing the "--enable-mshv" option. To use virt-preview, we would need to publish a new release of the RPM with MSHV enabled.
  2. QEMU v10.2.1 is missing a key commit for enabling VM creation on the MSHV backend. That commit is merged in the master branch though. This also necessitates publishing a new release with the missing commit.
So at the moment, we need to build the QEMU RPM from source with MSHV enabled and all the needed commits included. The RPM in the virt-preview repo isn't sufficient presently.

On Tuesday, March 31, 2026 at 8:30:09 AM UTC-4 Lee Yarwood wrote:
Thanks Harshit and Aidan,

Comments in line below.

On Mon, 30 Mar 2026 at 22:46, Harshit Gupta <harshitg...@gmail.com> wrote:
>
> Hi Aidan and Jed,
>
> Thanks for raising this topic. I am also in the process of building custom QEMU RPMs with MSHV accelerator support because the ones in the CentOS Stream 9 repo don't have it.
> I am following Solution 2, and using the Custom RPMs Setup infrastructure (https://github.com/kubevirt/kubevirt/pull/6673) to build KubeVirt with custom RPMs.
>
> I can 100% relate to the fact that the RPM packaging process took a lot of effort to get right. However, I was able to reuse the bulk of the QEMU RPM spec from CentOS's qemu-kvm project. So I agree with Jed that if we document the steps to build RPMs from source, one can use that together with the documentation in https://github.com/kubevirt/kubevirt/blob/main/docs/custom-rpms.md to build KubeVirt images with the desired version of different packages.
>
> I would be happy to share the scripts/automation code I have written for building the custom QEMU RPM. I think this is a good step toward helping new users and contributors to KubeVirt debug and test more effectively.


Harshit, I think this is actually exactly what I did as well; our scripts are probably very similar. I think it would be good to merge them somewhere, regardless of whether that is in kubevirt proper or not.
 
Harshit, I believe virt-preview [1] should provide new enough versions
for testing MSHV in your case. I've had issues consuming packages from
this repo with the currently documented custom rpms flow because
virt-preview uses Fedora epochs lower than CentOS Stream packages.
I've posted the following generated hack script that can pull these in
manually for test builds:

https://github.com/kubevirt/kubevirt/pull/17362

Can you review it and let me know if that addresses your use case?

[1] https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/

> On Monday, March 30, 2026 at 5:21:33 PM UTC-4 Jed Lejosne wrote:
>
> Hi Aidan,
>
> I understand the need for using custom libvirt/QEMU builds, and I appreciate you bringing this to the list. I am not familiar with the PR you're referring to, could you please add a link to it?  
> Also, solution 2 sounds a lot like https://github.com/kubevirt/kubevirt/pull/6673, are you familiar with it?
 
 Yes! This was the starting point for my work. Unfortunately, that document only covered libvirt (the easier one to build), not qemu. In my PR I expanded the document to include qemu documentation, but I also attempted to script the build process described in that document. IMO scripting the documented process is an improvement: it increases work for regular contributors in the form of maintenance, but decreases the effort for a newcomer. Plus, as evidenced by this thread, Harshit and I have already duplicated work on this, which is a good reason to upstream a complete solution.
 

>
> That said, I don't think solution 1 aligns well with KubeVirt's architecture. KubeVirt doesn't build its own Linux distribution from source and instead relies on an existing one. The upside is that we get compatibility, stability, and updates for free. The downside is the reduced flexibility you're pointing to. Solution 1 would create a hybrid where most RPMs come from the distro but some are built by us. Even with just two source-built packages, this could send us into dependency hell and reduce the support we get from upstream projects. It also sets a precedent that could be hard to contain over time.
>
> Solution 2, on the other hand, seems like a great fit. Letting users supply their own RPMs by version keeps the boundary clean while still solving the problem. You could probably reduce the friction significantly by developing a small set of helper scripts (or documentation) that walk through sourcing and injecting custom packages. That way the workflow stays approachable for newcomers without pulling build logic for external projects into KubeVirt itself.

I didn't mean to suggest this would replace the process for building a release of kubevirt; my approach was intended exactly to be the "small set of helper scripts (or documentation)" to improve the workflow. I have no desire to require source builds in CI or in regular use, but to provide them for niche use cases.
 
>
> Thanks,
> Jed
>
> On Monday, March 30, 2026 at 4:11:49 PM UTC-4 Aidan Wallace wrote:
>
> Hi all,
>
> I currently have a PR up for an issue that has sparked some debate, and would like to get a consensus from the community on a path forward. My goal is to streamline the process of building qemu and/or libvirt from source (to RPMs) and consume those in a local kubevirt build. I think this is useful to developers debugging tricky issues and testers performing pre-integration testing.
>
> Additionally, I think this aligns with the sig-buildsystem charter (in other words, contributors shouldn't have the feeling that they are wasting their time). As a new contributor who is an expert in neither RPM packaging nor Bazel, it was non-trivial to get the built RPMs into kubevirt the first time, which I think would be a shared experience for any newcomer looking to experiment with how kubevirt interacts with the lower portions of the virtualization stack.

As above would an easier way to pull in virt-preview packages help you
at all here?

> My general approach (solution 1) to the problem was to:
>
> Create a Docker container with qemu & libvirt build dependencies that aligns with the kubevirt builder version
> Script the standard RPM build instructions for each project (libvirt includes build commands to produce RPMs; qemu does not, and its packaging is maintained by the CentOS qemu-kvm project)

As I've said elsewhere I do not think we should provide the above.
QEMU and libvirt are ultimately responsible for this IMHO.

> Document how to change the version numbers for each project as a low-effort check to ensure your custom RPMs are present in the final build
> Automatically discover built version numbers to force inclusion in the kubevirt build

As I've suggested above with my virt-preview suggestion I think we can
and should improve this.

> The main question is: is this too entangled in the details of other projects to be introduced into kubevirt? An alternative solution (solution 2) was suggested: allow kubevirt builds to pick up locally available RPM packages by version number. It would then be up to the user how to source these packages (downloading a prerelease, building from source, etc.). This reduces the complexity in kubevirt, but adds friction to the user experience.

I think you mean developer experience, and even then, only a limited
subset of developers want to test custom or unreleased code changes in
QEMU and libvirt that we can't get from virt-preview or other means.

Yes, developer experience. I agree that this is a very limited use case, but I think Harshit and I have already duplicated effort here.
 
> I can also totally understand if this is not a common enough use case to warrant inclusion, which would bias towards the simpler solution 2. If not many people are "wasting their time" setting up local RPM builds for qemu/libvirt from scratch, it may not be important to include.
>
> Looking for any comments on use cases, or design input.
>
> Thanks,
> --
> Aidan Wallace



--
Aidan Wallace
Senior Software Engineer
Core Platforms