extending qemu.conf


Alexander Gallego

May 24, 2018, 6:56:43 PM
to kubevirt-dev
Hi guys!

I'd like to pass some specific libvirtd configs and qemu configs to the virt-launcher container. 

I really like the interfaces you have created from virt-handler -> virt-launcher via the filesystem.

Something similar to this:

```
/var/run/kubernetes/private/${namespace}/pid

```

A few settings that would be worth exposing (and are necessary in my case), either via the vm.yml or some other mechanism:

1) the networking speeds
2) system clocks
3) system infos
4) cpu sockets (cpu topology)

I noticed that we start libvirt via libvirtd.sh in libvirtd_helpers.go:

```
func StartLibvirt(stopChan chan struct{}) {
	// we spawn libvirt from virt-launcher in order to ensure the libvirtd+qemu process
	// doesn't exit until virt-launcher is ready for it to. Virt-launcher traps signals
	// to perform special shutdown logic. These processes need to live in the same
	// container.
	go func() {
		for {
			exitChan := make(chan struct{})
			cmd := exec.Command("/libvirtd.sh")

			err := cmd.Start()
			if err != nil {
				log.Log.Reason(err).Error("failed to start libvirtd")
				panic(err)
			}

			go func() {
				defer close(exitChan)
				cmd.Wait()
			}()

			select {
			case <-stopChan:
				cmd.Process.Kill()
				return
			case <-exitChan:
				log.Log.Errorf("libvirtd exited, restarting")
			}

			// this sleep is to avoid consuming all resources in the
			// event of a libvirtd crash loop.
			time.Sleep(time.Second)
		}
	}()
}

```

Would you be interested in a design proposal for passing these arguments to the startup script (possibly rewriting it in Go instead of a shell script)?
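For illustration, here's a rough Go sketch of how such settings could be rendered into a qemu.conf fragment before /libvirtd.sh is spawned. The helper name, the specific keys, and the idea that an InitContainer drops the values into a shared volume are all assumptions for the sketch, not existing interfaces:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderQemuConf renders a qemu.conf fragment from key/value settings.
// Keys are sorted so the output is deterministic. In virt-launcher this
// fragment would be written out before /libvirtd.sh is spawned.
func renderQemuConf(settings map[string]string) string {
	keys := make([]string, 0, len(settings))
	for k := range settings {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		// qemu.conf uses `key = "value"` syntax for string options.
		fmt.Fprintf(&b, "%s = %q\n", k, settings[k])
	}
	return b.String()
}

func main() {
	// Hypothetical settings an InitContainer might have dropped into a
	// shared volume for virt-launcher to pick up.
	fmt.Print(renderQemuConf(map[string]string{
		"stdio_handler": "logd",
	}))
}
```

The point of the sketch is just that once the startup path is Go rather than a shell script, the config rendering becomes a testable function instead of string munging in bash.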

If my InitContainers proposal goes in, it makes it really easy to read system config files placed there by the InitContainers, et voilà!

However, I wanted to start a discussion regardless of whether or not that proposal is accepted. 

Are there any design decisions on how to pass these configuration parameters to qemu.conf, basically?

Thank you!

.alex




Itamar Heim

May 24, 2018, 7:25:16 PM
to Alexander Gallego, kubevirt-dev
So this is exactly the use case of custom hooks [1].
But it again goes to what we allow a user to do.
In the hooks example, the admin had to install/enable them, and only
then could a user use them (not sure about even that; it may have
required the admin to set them).
The point is, there is a difference between the admin enabling specific
fields to be manipulated by the user, and allowing the user to pass any
xml to libvirt, which could be abused.

So there are two parts to this in my mind:
1. The flow in which we allow a container (init or otherwise) to get the
libvirt xml and manipulate it.

2. How do we control who can do it?
For example, the admin could be the one specifying which such containers
could be used, and the user is limited in choosing from those enabled by
the admin, or some other admission control aspect.

(also, notice the containers may be called in multiple "hooks", not just
before starting a VM).

just some thoughts...

Thanks,
Itamar

[1] https://github.com/oVirt/vdsm/tree/master/vdsm_hooks

Itamar Heim

May 24, 2018, 8:01:08 PM
to Alexander Gallego, kubevirt-dev
The user wouldn't choose which containers; rather, they would provide
additional info in the VM yaml, which the containers would read and use
to manipulate the xml.
This isn't the only way to do this (I'm just describing an approach that
worked in ovirt for the same use case), so some brainstorming on this is
in order as there are more options/conventions in kubernetes.

Steve Gordon

May 24, 2018, 8:20:43 PM
to Itamar Heim, Alexander Gallego, kubevirt-dev
Possibly required PowerUserRole? I can't remember either though.

> Point is, there is a difference between the admin enabling specific fields
> to be manipulated by the user, and allowing the user to pass any xml to
> libvirt, which could be abused.
>
> So there are two parts to this in my mind:
> 1. The flow in which we allow a container (init or otherwise) to get the
> libvirt xml and manipulate it.
>
> 2. how do we control who can do it.
> For example, the admin could be the one specifying which such containers
> could be used, and the user is limited in choosing from those enabled by the
> admin, or some other admission control aspect.

Indeed, and in fact I suspect ideally we might want to even expand on
this somewhat:

1) As an admin I want to enable a specific custom hook that a specific
subset of users can use (what you describe).
2) As an admin I want to enable a specific custom hook that applies to
all VMs launched in the cluster (no opt-out)?
3) As an admin I want to enable a specific custom hook that applies to
all VMs launched in the cluster from a specific preset?

> (also, notice the containers may be called in multiple "hooks", not just
> before starting a VM).

+1. The other aspect IIRC was that hooks weren't just restricted to
modifying the XML though...which might pose some interesting things to
think about when we consider those that modified the host?

-Steve

> just some thoughts...
>
> Thanks,
> Itamar
>
> [1] https://github.com/oVirt/vdsm/tree/master/vdsm_hooks
>
> --
> You received this message because you are subscribed to the Google Groups
> "kubevirt-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubevirt-dev...@googlegroups.com.
> To post to this group, send email to kubevi...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kubevirt-dev/4f9a2bed-2908-e251-4ad5-a231fc823fb2%40redhat.com.
>
> For more options, visit https://groups.google.com/d/optout.



--
Stephen Gordon,
Principal Product Manager,
Red Hat

Itamar Heim

May 24, 2018, 8:56:49 PM
to Steve Gordon, Alexander Gallego, kubevirt-dev
On 05/24/2018 08:20 PM, Steve Gordon wrote:
> +1. The other aspect IIRC was that hooks weren't just restricted to
> modifying the XML though...which might pose some interesting things to
> think about when we consider those that modified the host?

yes, these would be trickier for containers.
actually, a lot of those changing the libvirt xml also influenced the
host, so not sure yet how this would work.
for example, something simple - cpu pinning - is that something a
process inside a pod can do outside of what the kubernetes cpu pinning
defines (if/when).

but it seems like we're assuming a user-accessible environment, and Alexander
may have a restricted one, where this is just a backend which would
need to be a bit more flexible.

Alexander Gallego

May 25, 2018, 5:28:35 PM
to Itamar Heim, Steve Gordon, kubevirt-dev
Thanks for the follow up. So let me try and summarize: 

We need to extend the framework in 2 ways: 

1) Pre VM Launch - so we can perform bootstrapping steps
2) Passing arguments to the VM - via the filesystem or otherwise.

There exist 5-6 possible alternatives to support the 'SuperUser' mode that is allowed to perform these steps:

1) Kubernetes RBAC - 
2) Kubernetes ABAC - I think we can make it work with ABAC's
3) CRD's for AdminVMs
4) Admissions Controllers
5) Kubevirt whitelisted hooks that cover all of the stated use cases (maybe through code generation of all config settings supported by libvirtd?)

6) Not support these use cases from Kubevirt's point of view.


In a pull request I just added the simple ability to add InitContainers, as I don't see other ways that are native to k8s; i.e., to support alternatives you basically end up re-implementing much of the initContainers functionality.

I am happy about this and looking forward to figuring out the next steps.

What do you guys want to see / learn in order to move forward with any of the 6 directions, given the 2 requirements?

Would a design doc suffice? I think the Pull Request to support the role based access control might be bigger than I anticipated, but I'm happy to investigate if the team decides it is the right direction. 

I'd love to get a Pull Request / Design doc for passing extra arguments to qemu.conf after. 

Let me know ! 

(maybe community poll ? ha not sure - any suggestions on how to get some data/opinions on the options to be exposed)



Itamar Heim

May 25, 2018, 6:49:19 PM
to Alexander Gallego, Steve Gordon, kubevirt-dev
On 05/25/2018 05:28 PM, Alexander Gallego wrote:
>
> On Thu, May 24, 2018 at 8:56 PM, Itamar Heim <ih...@redhat.com
> <mailto:ih...@redhat.com>> wrote:
>
> On 05/24/2018 08:20 PM, Steve Gordon wrote:
>
> +1. The other aspect IIRC was that hooks weren't just restricted to
> modifying the XML though...which might pose some interesting
> things to
> think about when we consider those that modified the host?
>
>
> yes, these would be trickier for containers.
> actually, a lot of those changing the libvirt xml also influenced
> the host, so not sure yet how this would work.
> for example, something simple - cpu pinning - is that something a
> process inside a pod can do outside of what the kubernetes cpu
> pinning defines (if/when).
>
> but seems like we're assuming user accessible environment, and
> Alexander may have a restricted one, where this is just a flexible
> backend which would be a bit more flexible.
>
>
>
> Thanks for the follow up. So let me try and summarize:
>
> We need to extend the framework in 2 ways:
>
> 1) Pre VM Launch - so we can perform bootstrapping steps

I'd generalize for various life cycle phases, not just BeforeStartVM.

> 2) Passing arguments to the VM - via filesystem or other wise.

why not via cloud-init?

>
> There exists 5-6 possible alternatives to support the 'SuperUser' mode
> that is allowed to do these steps:
>
> 1) Kubernetes RBAC -
> 2) Kubernetes ABAC - I think we can make it work with ABAC's
> 3) CRD's for AdminVMs
> 4) Admissions Controllers
> 5) Kubevirt whitelisted hooks that cover all of the stated usecases
> (maybe through code generation of all config settings supported by
> libvirtd??)
>
> 6) Not support these use cases from Kubevirt's point of view.
>

I think we do want to support this.
It's just a question of how we let the admin define the containers for
the phases we want to support so they are called in the right place (our
own mechanism, webhooks, etc.), and then how the user can pass extra
parameters in the vm yaml to influence their run.

>
> On a pull request I just added the simple ability to add InitContainers
> as I don't see other ways that are native to k8s. i.e.: to support
> alternatives you basically end up re-implementing much of the
> initContainers functionality.
>
> I am happy and looking forward to figure out the next steps.
>
> What do you guys want to see / learn from in order to move forward with
> any of the 6 directions given the 2 requirements.
>
> Would a design doc suffice? I think the Pull Request to support the role
> based access control might be bigger than I anticipated, but I'm happy
> to investigate if the team decides it is the right direction.
>
> I'd love to get a Pull Request / Design doc for passing extra arguments
> to qemu.conf after.

I think the idea was you can put a topic on the weekly meeting in
advance, so relevant interested parties join it to discuss together
(other than email threads/docs) if needed?

Fabian Deutsch

May 26, 2018, 9:01:57 AM
to Alexander Gallego, Itamar Heim, Steve Gordon, kubevirt-dev
On Fri, May 25, 2018 at 11:28 PM, Alexander Gallego <galleg...@gmail.com> wrote:

On Thu, May 24, 2018 at 8:56 PM, Itamar Heim <ih...@redhat.com> wrote:
On 05/24/2018 08:20 PM, Steve Gordon wrote:
+1. The other aspect IIRC was that hooks weren't just restricted to
modifying the XML though...which might pose some interesting things to
think about when we consider those that modified the host?


Jumping in here a little.

The domxml of libvirt and the libvirtd config parameters are probably something we do not want to formally expose in KubeVirt (at least ATM).

So far, any hooks or the like that KubeVirt supports should work on the VM Spec and other KubeVirt objects - i.e., on the cluster level.

The domxml is an implementation detail.

However, I do recognize the need to modify them. And I still think that initContainers are an interesting idea.

In general I do see two patterns which might help us:

- Life-cycle hooks on the cluster level, triggering webhooks - just like it's done for other Kube parts (e.g. with custom admission controllers); see slide 21 of https://www.slideshare.net/sttts/kubecon-eu-2018-sig-api-machinery-deep-dive/21
This could be used to allow a user to modify the YAML.

- Life-cycle hooks on the pod level, using gRPC, just like it's done for e.g. device plugins or CRI. The hook itself could be delivered as a container, hooking into the life-cycle events.
This could be used to allow the user to modify the pod and domxml.
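To make the pod-level pattern concrete, here is a toy Go sketch of the per-phase callback shape a hook container could serve (in practice over gRPC). The phase name, the Hook signature, and the string-surgery mutation are all invented for illustration; a real hook would parse and rewrite the XML properly:

```go
package main

import (
	"fmt"
	"strings"
)

// Phase identifies a life-cycle point at which hook containers are
// consulted. The name below is a placeholder, not a settled API.
type Phase string

const OnDefineDomain Phase = "OnDefineDomain"

// Hook is the per-phase callback a hook container could serve: it
// receives the current domxml and returns a possibly mutated copy.
type Hook func(phase Phase, domxml string) (string, error)

// runHooks applies each registered hook in order, threading the domxml
// through the chain; the first error aborts the run.
func runHooks(hooks []Hook, phase Phase, domxml string) (string, error) {
	var err error
	for _, h := range hooks {
		if domxml, err = h(phase, domxml); err != nil {
			return "", err
		}
	}
	return domxml, nil
}

func main() {
	// Toy hook: tag the emulated NIC with a link-speed element
	// (plain string replacement, for brevity only).
	setLinkSpeed := func(phase Phase, domxml string) (string, error) {
		return strings.Replace(domxml,
			"<interface>",
			`<interface><link speed="1000"/>`, 1), nil
	}
	out, err := runHooks([]Hook{setLinkSpeed}, OnDefineDomain,
		"<domain><interface></interface></domain>")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The chaining matters for the admin-control discussion above: the admin decides which hooks are registered and in what order, while each hook only ever sees the domxml handed to it.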

 
yes, these would be trickier for containers.
actually, a lot of those changing the libvirt xml also influenced the host, so not sure yet how this would work.
for example, something simple - cpu pinning - is that something a process inside a pod can do outside of what the kubernetes cpu pinning defines (if/when).

but seems like we're assuming user accessible environment, and Alexander may have a restricted one, where this is just a flexible backend which would be a bit more flexible.


Thanks for the follow up. So let me try and summarize: 

We need to extend the framework in 2 ways: 

1) Pre VM Launch - so we can perform bootstrapping steps

Could you explain this case a little more?

I can understand it in a few ways:
- Run something on the first start of a VM which will be started again
- Run something whenever a specific VM starts
 
2) Passing arguments to the VM - via filesystem or other wise. 


What kind of arguments would you like to pass?
Who should receive the arguments?

(On that note, it would be cool to gain ConfigMap and Secret support for VMs - to pass them into the VM, just like with pods).

- fabian

Alexander Gallego

May 29, 2018, 10:38:58 AM
to kubevirt-dev


On Saturday, May 26, 2018 at 9:01:57 AM UTC-4, Fabian Deutsch wrote:




Jumping in here a little.

The domxml of libvirt, and libvirtD config parameters are probably something we do not want to formally expose in KubevIrt (at least ATM).

So far any hooks or alike KubeVirt should support should work on the VM SPec and other KubeVirt objects - aka on the cluster level.

The domxml is an implementation detail.

However, I do recognize the need to modify them. And I still think that initContainers are an interesting idea.

In general I do see two patterns which might help us:

- Life-cycle hooks on the cluster level, triggering webhooks - just likes it's done for other Kube parts (i.e. with custom admission controllers) see slide 21 of https://www.slideshare.net/sttts/kubecon-eu-2018-sig-api-machinery-deep-dive/21
This could be used to allow a user to modify the YAML

- Life-cycle hooks on the pod level, using gRPC just like it's done for i.e. device plugins or cri. The hook itself could be delivered as container, hooking into the life-cycle events.
This could be used to allow the user to modify the pod and domxml

 

right, this is one of the 6 options i listed
 


Thanks for the follow up. So let me try and summarize: 

We need to extend the framework in 2 ways: 

1) Pre VM Launch - so we can perform bootstrapping steps

Could you explain this case a little more?

I can understand it in a few ways:
- Run something on the first start of a VM which will be started again
- Run something whenever a specific VM starts
 
2) Passing arguments to the VM - via filesystem or other wise. 


What kind of arguments would you like to pass?

Just the list I posted on the first email of this thread:

1) the networking speeds
2) system clocks
3) system infos
4) cpu sockets (cpu topology)
 
 
Who should receive the arguments?


Depends.

1) For the qemu.conf params above - libvirtd 

2) For bridging the outside world with the inside world (say, from hardware to a kubevirt-launched VM) - the InitContainer

I see this as a 2 step process.

a) InitContainers or the like produce resources/configs/changes/devices that are not exposed yet - there will likely always be a need for this with VMs, or at least for a while
b) Those arguments eventually have to be passed to libvirtd
 
(On that side it would be cool to gain ConfigMap and Secret support for VMs - to pas them into the VM, just like with pods).


agreed
 
- fabian

Alexander Gallego

May 29, 2018, 10:49:46 AM
to kubevirt-dev


On Friday, May 25, 2018 at 6:49:19 PM UTC-4, Itamar Heim wrote:
On 05/25/2018 05:28 PM, Alexander Gallego wrote:
>
> Thanks for the follow up. So let me try and summarize:
>
> We need to extend the framework in 2 ways:
>
> 1) Pre VM Launch - so we can perform bootstrapping steps

I'd generalize for various life cycle phases, not just BeforeStartVM.


I agree! - though right now, I only need InitContainers-like functionality
 
> 2) Passing arguments to the VM - via filesystem or other wise.

why not via cloud-init?


Well, this has to happen *before* the VM is started, no?

For example, setting the ethernet speeds - you can do this easily via qemu args.

This way the initContainers can just write a config to /etc/qemu/cli_args.xml

 
>
> There exists 5-6 possible alternatives to support the 'SuperUser' mode
> that is allowed to do these steps:
>
> 1) Kubernetes RBAC -
> 2) Kubernetes ABAC - I think we can make it work with ABAC's
> 3) CRD's for AdminVMs
> 4) Admissions Controllers
> 5) Kubevirt whitelisted hooks that cover all of the stated usecases
> (maybe through code generation of all config settings supported by
> libvirtd??)
>
> 6) Not support these use cases from Kubevirt's point of view.
>

I think we do want to support this.

Yay!! 
 
Its just how do we let the admin define the containers for the phases we
want to support so they are called in the right place (our own
mechanism, webhooks, etc.)

so we can do:

1) admissions controllers for now
2) ABAC and/or property-based access control

Thoughts? 
 
then how the user can pass extra parameters in the vm yaml to influence
their run).


There are 2 things here:

1) static parameters - i.e.: always launch this VM w/ 1G throttled speed

2) dynamic parameters - i.e.: change the MAC of this eth0 device *at* runtime based on some host configuration

For static params, they can be easily exposed in the yml; for dynamic params, filesystem-based argument passing sounds easy - if the initContainers proposal is approved.

i.e.: just push a file to  /etc/qemu/my_args.xml 


I think I have a way to move forward with this proposal! - a 2-step proposal:

1) Add new design doc - that describes the technical tradeoffs.
2) add it to the kubevirt Monday meetings agenda for discussion

Given that this is an architectural change, would the community be interested in a more scoped brainstorming session - outside of the Monday admin workflow?

Thoughts @itamar?

I would vote for a 30min scoped brainstorming session, followed by an updated design doc, followed by a Kubevirt weekly meeting agenda discussion.

Let me know!

.alex

Fabian Deutsch

May 31, 2018, 6:36:03 AM
to Alexander Gallego, kubevirt-dev
Yep, that's a good idea.

Let's take it offline to find a slot and people to do this.

Greetings
fabian
 


dvo...@redhat.com

Jun 4, 2018, 4:46:02 PM
to kubevirt-dev
I just wanted to point out that initContainers are possible regardless of KubeVirt's support for the feature by using a mutating webhook.


Users can register a mutating webhook that looks for virt-launcher pods and then injects their initContainers. That can be done right now, without KubeVirt officially supporting anything.