[kubevirt-dev] Proposal how to integrate libguestfs tools in Kubevirt

Alice Frosi

Apr 7, 2021, 11:04:12 AM4/7/21
to kubevirt-dev
Hi Kubevirt community,

I'm working on a PR that introduces libguestfs tools [1] in Kubevirt. The PR is split into two parts:
- the first part introduces a new container image with the libguestfs tools
- the second part adds a new command to virtctl that uses this container image to access disk images on PVCs.

Building the libguestfs container image is slightly more complicated with bazel than with docker, because the installation post-scripts are not run. As a result, the kernel, initrd, and appliance root are not generated during the installation, as they would be with the standard dnf tooling.
A possible workaround is to build the appliance in a previous step with libguestfs-make-fixed-appliance [2], and copy the generated files into the final image with bazel. This approach mixes docker (or podman) and bazel.
An alternative could be to do the entire build with docker, running dnf inside the build container so that the appliance is created correctly, and perhaps keep the setup in a separate (new) repository. However, this approach doesn't use bazel at all and diverges from how the other images are built.
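As a rough sketch of the mixed approach (base image, package names, and paths are illustrative assumptions, not taken from the PR), the docker step could look like this, with bazel then copying the generated files into the final image:

```dockerfile
# Build step with docker/podman: install libguestfs via dnf so the
# RPM post-install scripts run, then freeze the generated appliance
# (kernel, initrd, root) into a fixed directory.
FROM fedora:34
RUN dnf install -y libguestfs libguestfs-tools-c && \
    libguestfs-make-fixed-appliance /output/appliance
# The contents of /output/appliance would then be copied into the
# final image by the bazel build, with LIBGUESTFS_PATH pointing at
# the copied directory so the tools pick up the fixed appliance.
```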

I'm not sure if the approach is acceptable for Kubevirt community, and I'd like to ask if you have any hints on this. Of course, any other suggestions are also welcome :)


Many thanks,

Alice

Alice Frosi

Apr 14, 2021, 2:36:17 AM4/14/21
to kubevirt-dev, Fabian Deutsch, Roman Mohr
Hi,

after some more investigation and help (thanks Roman ;) ), I found out that Kubevirt already uses libguestfs-tools in kubevirt-tekton-tasks [1]. The base image for virt-sysprep [2] and virt-customize [3] is based on a public libguestfs appliance [4]. However, those appliances are a bit old. It is quite straightforward to build the appliance, and it can be built in a container image in this way [5]. IMHO this can be the starting point for harmonizing the effort in [6] with the tekton tasks. My suggestion is to have a separate repo that contains a Dockerfile which builds the libguestfs appliance, plus the setup to release it. The release could be in the form of a tarball or a container image. Afterwards, we could start using it in kubevirt-tekton-tasks and in [6].

Please let me know what you think :)

[4] https://download.libguestfs.org/binaries/appliance
[5] https://github.com/alicefr/kubevirt/blob/libguestfs-integration-v2/hack/libguestfs/Dockerfile

Many thanks,

Alice

Alexander Wels

Apr 14, 2021, 7:39:07 AM4/14/21
to Alice Frosi, kubevirt-dev, Fabian Deutsch, Roman Mohr
On Wed, Apr 14, 2021 at 2:36 AM Alice Frosi <afr...@redhat.com> wrote:

Very nice. Having it as a separate container will be helpful for CDI as well, for introducing post-transfer jobs (I just made the name up) that do things people have been asking us for (guest partition resizing, making filesystems on blank disks, etc.). This way CDI will not end up with a dependency on KubeVirt, as KubeVirt already has a dependency on CDI.
 


--
You received this message because you are subscribed to the Google Groups "kubevirt-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubevirt-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubevirt-dev/CABBoX7Np8r02o%3DJnLZ3vXDJFy-H8DJH9z8Em0GEbSX161hgpdw%40mail.gmail.com.

Fabian Deutsch

Apr 14, 2021, 7:59:12 AM4/14/21
to Alice Frosi, kubevirt-dev, Roman Mohr
On Wed, Apr 14, 2021 at 8:36 AM Alice Frosi <afr...@redhat.com> wrote:

If we consider the virtctl guestfs integration, then IMO it should be part of the kubevirt/kubevirt repo - that is, yet another container which we would push to quay.

What would your reasoning be to move it into a separate github repo?

Alice Frosi

Apr 14, 2021, 8:39:10 AM4/14/21
to Fabian Deutsch, kubevirt-dev, Roman Mohr
On Wed, Apr 14, 2021 at 1:59 PM Fabian Deutsch <fdeu...@redhat.com> wrote:



If we consider the virtctl guestfs integration, then IMO it should be part of the kubevirt/kubevirt repo - that is, yet another container which we would push to quay.

What would your reasoning be to move it into a separate github repo?
Hi Fabian,
For me it can be part of the kubevirt repo, although it is a bit separate from the kubevirt core functionality and it uses Dockerfiles instead of bazel. Both options make sense to me.

Alice Frosi

Apr 14, 2021, 8:53:31 AM4/14/21
to Alexander Wels, kubevirt-dev, Fabian Deutsch, Roman Mohr
On Wed, Apr 14, 2021 at 1:39 PM Alexander Wels <aw...@redhat.com> wrote:



Very nice. Having it as a separate container will be helpful for CDI as well, for introducing post-transfer jobs (I just made the name up) that do things people have been asking us for (guest partition resizing, making filesystems on blank disks, etc.). This way CDI will not end up with a dependency on KubeVirt, as KubeVirt already has a dependency on CDI.
 
True, this is also a requirement if we want to build more complex disk pipelines in CDI (during image streaming). There are some examples of disk pipelines in Richard Jones's presentation at: http://git.annexia.org/?p=libguestfs-talks.git;a=blob_plain;f=2021-pipelines/notes.txt;hb=HEAD
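As an illustration of the kind of post-transfer job mentioned above, creating a filesystem on a blank disk can be expressed as a short guestfish script (the disk path is illustrative); it would be run with `guestfish -f <script>`:

```
# guestfish script: partition a blank disk and create a filesystem
add /pvc/disk.img
run
part-disk /dev/sda mbr
mkfs ext4 /dev/sda1
```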

dvo...@redhat.com

Apr 14, 2021, 11:12:01 AM4/14/21
to kubevirt-dev
On Wednesday, April 14, 2021 at 8:39:10 AM UTC-4 Alice Frosi wrote:


Here's my thought process.

Let's put the logic to ship this appliance's container in the repo that it is most closely related to. Since this involves introspection of a disk imported into a PVC, I think the best location is the cdi repo. This also naturally fits our consumption model where kubevirt consumes disk tooling from cdi.

If we go down the route of putting this appliance in its own dedicated repo, the primary issue is releases. Who is responsible for making them? If we can fold building/releasing this container into an already established release flow, then we won't even have to think about whether updates are occurring, because it will live in a repo that already has a recurring release cadence.

Roman Mohr

Apr 15, 2021, 10:26:34 AM4/15/21
to dvo...@redhat.com, kubevirt-dev
On Wed, Apr 14, 2021 at 5:12 PM dvo...@redhat.com <dvo...@redhat.com> wrote:



Here's my thought process.

Let's put the logic to ship this appliance's container in the repo that it is most closely related to. Since this involves introspection of a disk imported into a PVC, I think the best location is the cdi repo. This also naturally fits our consumption model where kubevirt consumes disk tooling from cdi.

If we go down the route of putting this appliance in its own dedicated repo, the primary issue is releases. Who is responsible for making them? If we can fold building/releasing this container into an already established release flow, then we won't even have to think about whether updates are occurring, because it will live in a repo that already has a recurring release cadence.

For some operations CDI may be the right place, for some not. I think that, independent of the location, one issue is that the appliance is created in the post-install steps of the RPM installation, and that the pre-published appliances are outdated. I could also think about a periodic job which publishes the appliances like in [4], or as a container. Dependent kubevirt projects could then reference pre-published appliances from that artifact in their releases.
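Such a periodic publisher could be sketched as a prow periodic job along these lines (the job name, interval, image, and publish step are all illustrative assumptions, not an existing job):

```yaml
periodics:
- name: periodic-publish-libguestfs-appliance
  interval: 168h          # e.g. weekly; could also be gated on new libguestfs releases
  decorate: true
  spec:
    containers:
    - image: quay.io/example/appliance-builder:latest   # hypothetical builder image
      command: ["/bin/sh", "-c"]
      args:
      - |
        libguestfs-make-fixed-appliance /output/appliance
        # publish /output/appliance as a tarball or container image here
```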

Best regards,
Roman
 

 
 


Alice Frosi

Apr 16, 2021, 2:18:26 AM4/16/21
to Roman Mohr, dvo...@redhat.com, kubevirt-dev, Adam Litke
On Thu, Apr 15, 2021 at 4:26 PM Roman Mohr <rm...@redhat.com> wrote:



For some operations CDI may be the right place, for some not. I think that, independent of the location, one issue is that the appliance is created in the post-install steps of the RPM installation, and that the pre-published appliances are outdated. I could also think about a periodic job which publishes the appliances like in [4], or as a container. Dependent kubevirt projects could then reference pre-published appliances from that artifact in their releases.

Yes, this job should be triggered only when there is a new version of libguestfs. 
(I'm putting Adam also on cc)
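A minimal sketch of that trigger logic (the version strings are hypothetical stand-ins for whatever the job would query, e.g. via dnf repoquery):

```shell
# Rebuild the appliance only when the libguestfs version changed
# since the last published build.
current="1.44.0"     # latest libguestfs version available (illustrative)
previous="1.42.0"    # version behind the last published appliance (illustrative)

if [ "$current" != "$previous" ]; then
    decision="rebuild"      # kick off the appliance build and publish
else
    decision="up-to-date"   # nothing to do
fi
echo "$decision"
```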

Alice 

Roman Mohr

Apr 19, 2021, 4:46:33 AM4/19/21
to Alice Frosi, dvo...@redhat.com, kubevirt-dev, Adam Litke
On Fri, Apr 16, 2021 at 8:18 AM Alice Frosi <afr...@redhat.com> wrote:



Yes, this job should be triggered only when there is a new version of libguestfs. 
(I'm putting Adam also on cc)

How relevant is the kernel which we are getting from supermin?

Best regards,
Roman

Alice Frosi

Apr 19, 2021, 5:15:09 AM4/19/21
to Roman Mohr, dvo...@redhat.com, kubevirt-dev, Adam Litke
On Mon, Apr 19, 2021 at 10:46 AM Roman Mohr <rm...@redhat.com> wrote:



How relevant is the kernel which we are getting from supermin?
The kernel is not from supermin; it comes from the kernel-core package.

Alice

Roman Mohr

Apr 19, 2021, 5:19:12 AM4/19/21
to Alice Frosi, dvo...@redhat.com, kubevirt-dev, Adam Litke
On Mon, Apr 19, 2021 at 11:15 AM Alice Frosi <afr...@redhat.com> wrote:


How relevant is the kernel which we are getting from supermin?
The kernel is not from supermin; it comes from the kernel-core package.

Yes, that is what I meant. If we only create an appliance when libguestfs is updated, we may create the appliance with supermin only once for, e.g., the whole fc33 release cycle, while kernel-core gets updated very often.

Alice Frosi

Apr 19, 2021, 5:50:58 AM4/19/21
to Roman Mohr, dvo...@redhat.com, kubevirt-dev, Adam Litke
On Mon, Apr 19, 2021 at 11:19 AM Roman Mohr <rm...@redhat.com> wrote:


On Mon, Apr 19, 2021 at 11:15 AM Alice Frosi <afr...@redhat.com> wrote:

On Mon, Apr 19, 2021 at 10:46 AM Roman Mohr <rm...@redhat.com> wrote:


On Fri, Apr 16, 2021 at 8:18 AM Alice Frosi <afr...@redhat.com> wrote:


On Thu, Apr 15, 2021 at 4:26 PM Roman Mohr <rm...@redhat.com> wrote:


On Wed, Apr 14, 2021 at 5:12 PM dvo...@redhat.com <dvo...@redhat.com> wrote:


On Wednesday, April 14, 2021 at 8:39:10 AM UTC-4 Alice Frosi wrote:
On Wed, Apr 14, 2021 at 1:59 PM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Wed, Apr 14, 2021 at 8:36 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

after some more investigation and help (thanks Roman ;) ), I found out that Kubevirt already uses libguestfs-tools in kubevirt-tekton-tasks [1]. The base image for virt-sysprep [2] and virt-customize [3] is based on a public libguestfs appliance [4]. However, those appliances are a bit old. It is quite straightforward to build the appliance, and it can be built in a container image in this way [5]. IMHO this can be the starting point for harmonizing the effort in [6] with the tekton task. My suggestion is to have a separate repo that contains a Dockerfile that builds the libguestfs appliance and the setup to release it. The release can be in a form of a tarball or a container image. Afterwards, we could start using it in the kubevirt-tekton-task and in [6].

If we consider the virtctl guestfs integration then IMO it shoul dbe part of the kubevirt/kubevirt repo - speak yet another container which we would push to quay.

What would your reasoning be to move it into a separate github repo?
Hi Fabian, 
for me it can be part of kubevirt repo. It is a bit separated from the kubevirt core functionality, and it uses dockerfiles instead of bazel. For me both options make sense. 


Here's my thought process.

Let's put the logic to ship this appliance's container in the repo that it is most closely related to. Since this involves introspection of a disk imported into a PVC, I think the best location is the cdi repo. This also naturally fits our consumption model where kubevirt consumes disk tooling from cdi.

If we go down the route of putting this appliance in its own dedicated repo, the issue with that is primarily with releases.  Who is responsible for making the releases? If we can put building/releasing this container into an already established release flow, then we won't have to even think about whether updates are occurring because it will be in a repo that already has a recurring release cadence. 

For some operations CDI may be the right place, for some not. I think that, independent of the location, one issue is that the appliance is created in post-install steps of the RPM installation, and that the pre-published appliances are outdated. I could also think of a periodic job which publishes the appliances like in [4] or as a container. Dependent kubevirt projects can then reference pre-published appliances from that artifact in their releases.

Yes, this job should be triggered only when there is a new version of libguestfs. 
(I'm putting Adam also on cc)

How relevant is the kernel which we are getting from supermin?
The kernel is not from supermin, but it is from the kernel-core package

Yes, that is what I meant. If we only create an appliance when libguestfs is updated, we may create the appliance with supermin for e.g. the whole fc33 release cycle only once, while kernel-core gets updated extremely often.
 
I think the appliance is pretty independent of libguestfs, as it is used to start qemu. So, I hope that if we stick to one kernel version this should be enough. Right now, the tekton task repo is using a pretty old version of the appliance. IMHO, the appliance should be rebuilt within the Fedora cycle only if there is a bug in the kernel. Does that sound reasonable to you?

Alice Frosi

unread,
Apr 23, 2021, 7:51:41 AM4/23/21
to Roman Mohr, dvo...@redhat.com, kubevirt-dev, Adam Litke
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

Many thanks,

Alice

Adam Litke

unread,
Apr 23, 2021, 1:54:50 PM4/23/21
to Alice Frosi, Roman Mohr, dvo...@redhat.com, kubevirt-dev
On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.


--

Adam Litke

He / Him / His

Associate Manager - OpenShift Virtualization Storage

ali...@redhat.com   

David Vossel

unread,
Apr 23, 2021, 2:50:47 PM4/23/21
to Adam Litke, Alice Frosi, Roman Mohr, kubevirt-dev
On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me

Alice Frosi

unread,
Apr 26, 2021, 2:54:48 AM4/26/21
to David Vossel, Adam Litke, Roman Mohr, kubevirt-dev
On Fri, Apr 23, 2021 at 8:50 PM David Vossel <dvo...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me
 
If this reaches some consensus, could you please help me request a new repository in the Kubevirt organization? :)

Many thanks,

Alice 

Fabian Deutsch

unread,
Apr 26, 2021, 4:15:02 AM4/26/21
to Alice Frosi, Daniel Hiller, David Vossel, Adam Litke, Roman Mohr, kubevirt-dev
On Mon, Apr 26, 2021 at 8:54 AM Alice Frosi <afr...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 8:50 PM David Vossel <dvo...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me
 
If this reaches some consensus, could you please help me request a new repository in the Kubevirt organization? :)

What would the repository name be?

+Daniel Hiller can help to get it created
 

Alice Frosi

unread,
Apr 26, 2021, 4:42:50 AM4/26/21
to Fabian Deutsch, Daniel Hiller, David Vossel, Adam Litke, Roman Mohr, kubevirt-dev
On Mon, Apr 26, 2021 at 10:15 AM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 8:54 AM Alice Frosi <afr...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 8:50 PM David Vossel <dvo...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me
 
If this reaches some consensus, could you please help me request a new repository in the Kubevirt organization? :)

What would the repository name be?
Not having a strong imagination, I'd suggest libguestfs-appliance :)

I also need to know in which form we want to publish the appliance. In my opinion, either a tarball or a container image. What would you prefer?

Many thanks,

Alice

Roman Mohr

unread,
Apr 26, 2021, 4:45:00 AM4/26/21
to Alice Frosi, Fabian Deutsch, Daniel Hiller, David Vossel, Adam Litke, kubevirt-dev
On Mon, Apr 26, 2021 at 10:42 AM Alice Frosi <afr...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 10:15 AM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 8:54 AM Alice Frosi <afr...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 8:50 PM David Vossel <dvo...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me
 
If this reaches some consensus, could you please help me request a new repository in the Kubevirt organization? :)

What would the repository name be?
Not having a strong imagination, I'd suggest libguestfs-appliance :)

I also need to know in which form we want to publish the appliance. In my opinion, either a tarball or a container image. What would you prefer?

I would prefer a tarball, since a container could only be used as a `base` image, which may have implications (like constraining projects in their choice of container base).

Best regards,
Roman

Roman Mohr

unread,
Apr 26, 2021, 5:01:04 AM4/26/21
to Alice Frosi, Fabian Deutsch, Daniel Hiller, David Vossel, Adam Litke, kubevirt-dev
On Mon, Apr 26, 2021 at 10:44 AM Roman Mohr <rm...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 10:42 AM Alice Frosi <afr...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 10:15 AM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, Apr 26, 2021 at 8:54 AM Alice Frosi <afr...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 8:50 PM David Vossel <dvo...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 1:54 PM Adam Litke <ali...@redhat.com> wrote:


On Fri, Apr 23, 2021 at 7:51 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

I'd like to revive the discussion. If it is still unclear where the libguestfs appliance belongs, we could simply place it in a separate repo in the Kubevirt organization. Additionally, I'm not quite sure which release process it should follow.

I think the appliance (and the libguestfs container image) should be built in its own repo for several reasons:
  • kubevirt/kubevirt is already too big and slow to merge changes. We should think hard before adding new things to it.
  • The CI required for this image doesn't have much to do with kubevirt or CDI, so it should probably be separate.
  • A dedicated repo can avoid using Bazel, which is problematic since it doesn't run RPM scripts.
+1 from me
 
If this reaches some consensus, could you please help me request a new repository in the Kubevirt organization? :)

What would the repository name be?
Not having a strong imagination, I'd suggest libguestfs-appliance :)

I also need to know in which form we want to publish the appliance. In my opinion, either a tarball or a container image. What would you prefer?

I would prefer a tarball, since a container could only be used as a `base` image, which may have implications (like constraining projects in their choice of container base).

To reiterate how I would see the path forward. That repo would be called e.g. `libguestfs-appliance` and it would contain:

 1. a Dockerfile which creates the tar.gz appliance
 2. the appliance is copied out of the docker container and uploaded to GCS, e.g. once a week
 3. the run and upload would be done by a periodic prow job
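The steps above could be sketched roughly like this. This is only an illustrative sketch: the Fedora version, package names, paths, and the GCS bucket are all placeholders, not the actual project setup.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the build-and-publish flow for the appliance.
set -euo pipefail

# 1. Build an image whose RPM post-install environment can create the
#    appliance (this is what bazel cannot do, hence plain docker/podman).
cat > Containerfile <<'EOF'
FROM registry.fedoraproject.org/fedora:33
# Package names are assumptions; libguestfs-make-fixed-appliance writes
# kernel, initrd and root into the given output directory.
RUN dnf install -y libguestfs libguestfs-tools-c && \
    libguestfs-make-fixed-appliance /appliance && \
    tar -C / -czf /appliance.tar.gz appliance
EOF
podman build -t libguestfs-appliance-build -f Containerfile .

# 2. Copy the tarball out of the container.
cid=$(podman create libguestfs-appliance-build)
podman cp "$cid":/appliance.tar.gz ./appliance.tar.gz
podman rm "$cid"

# 3. Upload to GCS; in practice this would run inside the periodic prow
#    job. The bucket path below is a placeholder.
gsutil cp ./appliance.tar.gz "gs://<bucket>/libguestfs-appliance/"
```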


How does that sound for everyone?

Best regards,
Roman


Adam Litke

unread,
Apr 26, 2021, 8:07:13 AM4/26/21
to Roman Mohr, Alice Frosi, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
This seems to solve the appliance part but, as I understand, Alice wants to build a container image that also contains libguestfs programs which could be used by a virtctl command to provide a utility shell.  How do we plan to build and publish such a container image?

Alice Frosi

unread,
Apr 26, 2021, 8:57:34 AM4/26/21
to Adam Litke, Roman Mohr, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
Adam is right, and this is the next step if we are interested in adding a new command to virtctl. However, once we have the appliance available, the new virtctl command could be part of the kubevirt repository. Building an image with the libguestfs RPMs plus the appliance is straightforward, and it can easily be achieved with bazel. 

Alice

Alice Frosi

unread,
Apr 29, 2021, 2:12:24 AM4/29/21
to Adam Litke, Roman Mohr, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
Hi,

do we have any other concerns about the new repository libguestfs-appliance? Can I request its creation in the Kubevirt organization?

Many thanks,

Alice

Roman Mohr

unread,
Apr 29, 2021, 3:25:57 AM4/29/21
to Alice Frosi, Adam Litke, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Thu, Apr 29, 2021 at 8:12 AM Alice Frosi <afr...@redhat.com> wrote:
Hi,

do we have any other concerns about the new repository libguestfs-appliance? Can I request its creation in the Kubevirt organization?

The repository has been created and you should have push access to it [1]. Once [2] is merged you can set up prowjobs for it for testing and merging.

Best regards,

Roman Mohr

unread,
Apr 29, 2021, 3:28:27 AM4/29/21
to Alice Frosi, Adam Litke, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
For kubevirt/kubevirt I prefer a published appliance which we can then run from our own container built with bazeldnf. The background is a unified update mechanism in kubevirt regarding CVEs in RPMs. It turned out to be very cumbersome to get rid of those when we have various base images from various places. It is perfectly fine for me if `kubevirt/libguestfs-appliance` does both: publishing the appliance and, in addition, a full container too.

Alice Frosi

unread,
Apr 29, 2021, 7:28:01 AM4/29/21
to Roman Mohr, Adam Litke, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
Great! Many thanks for the support and help!

I created the first setup in [1], and a PR in project-infra to enable the presubmit job in prow [2].

For now, I'd like to build and publish only the tarball with the libguestfs appliance. Kubevirt is using bazel, so we only need the tarball to cover that case. The tekton-task repo is already using a tarball for the appliance, so it is not a big change for them if they want to start using the new libguestfs appliance.
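For consumption from a bazel build, the published tarball could then be referenced with something like the following WORKSPACE fragment (Starlark). This is a hypothetical sketch: the URL and sha256 are placeholders, not the actual published artifact.

```python
# Hypothetical WORKSPACE fragment -- URL and checksum are placeholders.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")

http_file(
    name = "libguestfs-appliance",
    # Placeholder URL: wherever the periodic job publishes the tarball.
    urls = ["https://storage.googleapis.com/<bucket>/appliance.tar.gz"],
    sha256 = "<checksum-of-the-published-tarball>",
)
```

The `http_file` rule pins the tarball by checksum, so a kubevirt release would always consume a known appliance build rather than a moving "latest".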

Alice Frosi

unread,
May 3, 2021, 3:23:08 AM5/3/21
to Roman Mohr, Adam Litke, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?


Many thanks,

Alice

Adam Litke

unread,
May 3, 2021, 8:42:47 AM5/3/21
to Alice Frosi, Roman Mohr, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.

Alice Frosi

unread,
May 3, 2021, 10:07:46 AM5/3/21
to Adam Litke, Roman Mohr, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.
+1, it sounds good to me.

For this, however, I'll need access to a registry in order to publish the image with the libguestfs tools.
I know it is annoying, but would you prefer that I rename the repo to libguestfs-tools instead of libguestfs-appliance, since it will contain more than just the libguestfs appliance?

Thanks,

Alice

Fabian Deutsch

unread,
May 3, 2021, 10:08:47 AM5/3/21
to Alice Frosi, Adam Litke, Roman Mohr, Daniel Hiller, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 4:07 PM Alice Frosi <afr...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.
+1, it sounds good to me.

For this, however, I'll need access to a registry in order to publish the image with the libguestfs tools.
I know it is annoying, but would you prefer that I rename the repo to libguestfs-tools instead of libguestfs-appliance, since it will contain more than just the libguestfs appliance?

+Daniel Hiller can you help with this?
Can we also get Alice a quay repo if needed?

Daniel Hiller

unread,
May 3, 2021, 10:22:30 AM5/3/21
to Fabian Deutsch, Alice Frosi, Adam Litke, Roman Mohr, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 4:08 PM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:07 PM Alice Frosi <afr...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.
+1, it sounds good to me.

For this, however, I'll need access to a registry in order to publish the image with the libguestfs tools.
I know it is annoying, but would you prefer that I rename the repo to libguestfs-tools instead of libguestfs-appliance, since it will contain more than just the libguestfs appliance?

+Daniel Hiller can you help with this?

Sure! Alice, you can just create a PR to rename the repository as described in the community docs: https://github.com/kubevirt/community/blob/master/docs/automating-github-org-management.md
 
Can we also get Alice a quay repo if needed?

Sure! Alice, please tell me how I should name the quay repo.


--

Kind regards,


Daniel Hiller

He / Him / His

Senior Software Engineer, OpenShift Virtualization

Red Hat

dhi...@redhat.com   

Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, 
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill

Daniel Hiller

unread,
May 3, 2021, 10:23:29 AM5/3/21
to Fabian Deutsch, Alice Frosi, Adam Litke, Roman Mohr, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 4:22 PM Daniel Hiller <dhi...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:08 PM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:07 PM Alice Frosi <afr...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.
+1, it sounds good to me.

For this, however, I'll need access to a registry in order to publish the image with the libguestfs tools.
I know it is annoying, but would you prefer that I rename the repo to libguestfs-tools instead of libguestfs-appliance, since it will contain more than just the libguestfs appliance?

+Daniel Hiller can you help with this?

Sure! Alice, you can just create a PR to rename the repository as described in the community docs: https://github.com/kubevirt/community/blob/master/docs/automating-github-org-management.md

And please do not forget to change the team access to the repo in the teams section also.

Alice Frosi

unread,
May 4, 2021, 2:11:04 AM5/4/21
to Daniel Hiller, Fabian Deutsch, Adam Litke, Roman Mohr, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 4:23 PM Daniel Hiller <dhi...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:22 PM Daniel Hiller <dhi...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:08 PM Fabian Deutsch <fdeu...@redhat.com> wrote:


On Mon, May 3, 2021 at 4:07 PM Alice Frosi <afr...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.
+1, it sounds good to me.

For this, however, I'll need access to a registry in order to publish the image with the libguestfs tools.
I know it is annoying, but would you prefer that I rename the repo to libguestfs-tools instead of libguestfs-appliance, since it will contain more than just the libguestfs appliance?

+Daniel Hiller can you help with this?

Sure! Alice, you can just create a PR to rename the repository as described in the community docs: https://github.com/kubevirt/community/blob/master/docs/automating-github-org-management.md

And please do not forget to change the team access to the repo in the teams section also.
Yes, thanks! I'll open a PR :)
 
 
 
Can we also get Alice a quay repo if needed?

Sure! Alice, please tell me how I should name the quay repo.
Image name: quay.io/kubevirt/libguestfs-tools 

Roman Mohr

unread,
May 4, 2021, 2:54:28 AM5/4/21
to Adam Litke, Alice Frosi, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.

As said before, I don't want to introduce new base images to kubevirt. That makes updating complicated.

Best regards,
Roman

Roman Mohr

unread,
May 4, 2021, 3:00:46 AM5/4/21
to Adam Litke, Alice Frosi, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Tue, May 4, 2021 at 8:54 AM Roman Mohr <rm...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.

As said before, I don't want to introduce new base images to kubevirt. That makes updating complicated.

To clarify, I don't mind if a container is built in that repo too, but I expect that it will not be used in kubevirt/kubevirt.

Alice Frosi

unread,
May 4, 2021, 4:49:34 AM5/4/21
to Roman Mohr, Adam Litke, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
On Tue, May 4, 2021 at 9:00 AM Roman Mohr <rm...@redhat.com> wrote:


On Tue, May 4, 2021 at 8:54 AM Roman Mohr <rm...@redhat.com> wrote:


On Mon, May 3, 2021 at 2:42 PM Adam Litke <ali...@redhat.com> wrote:


On Mon, May 3, 2021 at 3:23 AM Alice Frosi <afr...@redhat.com> wrote:
I opened a PR that allows us to build the libguestfs appliance periodically [1]. Once this is ready, we can start using it and build a container with the libguestfs tools.
Hence, the next step is to build the container image with the libguestfs tools (using bazel). My question is where we want to host the container image setup: in the Kubevirt repo, or is it better to integrate it directly into the libguestfs-appliance repository?

My preference would be to build it in this new repo instead of kubevirt.

As said before, I don't want to introduce new base images to kubevirt. That makes updating complicated.

To clarify, I don't mind if a container is built in that repo too, but I expect that it will not be used in kubevirt/kubevirt.
 
The current use case I'm trying to cover is building a container image to be used with virtctl and integrated into the kubevirt main repository. I'd like to keep the setup simple and add configuration incrementally when needed. If we don't want to host the image separately for this use case, then I'd prefer to keep the repository with only the libguestfs appliance. Nothing prevents us from adding the image there in the future if there are more use cases than the virtctl guestfs command. Please let me know if we all agree on this.

Alice

Adam Litke

unread,
May 4, 2021, 2:40:38 PM5/4/21
to Alice Frosi, Roman Mohr, Fabian Deutsch, Daniel Hiller, David Vossel, kubevirt-dev
Works for me. 