virt-launcher behavior


Eran Ifrach

Mar 16, 2021, 5:25:53 AM
to kubevirt-dev
hey all,
I have a small question regarding the behavior of the virt-launcher pod.

When you shut down a running VM, the connected pod (i.e. the virt-launcher pod) is deleted.

But when I use "runStrategy: Manual" and run shutdown from inside the VM, the pod is not deleted and stays in the "Completed" state.

Is this the right behavior? Should the pod be deleted as well?
Here is some output:

╰─$ oc get vmi
NAME                       AGE   PHASE       IP             NODENAME
windows2019-installation   46h   Succeeded   10.131.1.114   worker1.green

╰─$ oc get pod
NAME                                           READY   STATUS      RESTARTS   AGE
virt-launcher-windows2019-installation-54dck   0/2     Completed   0          46h
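For context, the VM is defined roughly like this; a minimal sketch of the relevant spec (the API version and template details are assumptions on my part, the name and runStrategy match the setup above):

```yaml
# Sketch only: apiVersion and template contents are assumed;
# the VM name and runStrategy match the outputs shown here.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows2019-installation
spec:
  runStrategy: Manual   # lifecycle driven only by explicit API requests
  template:
    spec:
      domain:
        devices: {}
      # disks, volumes, and resources omitted for brevity
```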

Thanks in advance.




Stu Gott

Mar 16, 2021, 11:13:52 AM
to Eran Ifrach, kubevirt-dev
On Tue, Mar 16, 2021 at 5:26 AM Eran Ifrach <eif...@redhat.com> wrote:
hey all,
I have a small question regarding the behavior of the virt-launcher pod.

When you shut down a running VM, the connected pod (i.e. the virt-launcher pod) is deleted.

But when I use "runStrategy: Manual" and run shutdown from inside the VM, the pod is not deleted and stays in the "Completed" state.

Is this the right behavior? Should the pod be deleted as well?

I was aware this behavior exists for RerunOnFailure. The rationale in that case:

If the pod were to be deleted in this case, then virt-controller has no way of being able to distinguish between "the guest was intentionally shut down" and "the guest crashed" -- those need to be treated differently for that RunStrategy. Hence the pod is kept perpetually in the case of graceful shutdown -- in order to prevent a new virt-launcher pod from being immediately created.

I just did a code dive into the virt-controller code responsible for this and what you observed is indeed the case. The only time the pod is deleted or created is if an API call explicitly requests it. State changes initiated from within the VMI (e.g. request the OS to shut down) don't cause changes to the pod.

I'd have to think on it to decide if this is expected / intentional--or just an oversight. As I alluded above, if it is something we did on purpose it very likely has to do with the pod acting as a placeholder. At the very least letting the pod persist is useful for debugging in case of an accidental crash. Other RunStrategies would immediately destroy it--and the logs along with it.
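Concretely, with runStrategy: Manual the only things that create or delete the virt-launcher pod are explicit requests like these (a sketch, using your VM name from the output below):

```shell
# With runStrategy: Manual, only explicit API requests change the pod;
# a guest-initiated shutdown leaves it in Completed, as you observed.
virtctl start windows2019-installation   # creates a new virt-launcher pod
virtctl stop windows2019-installation    # deletes the virt-launcher pod
```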

 
Here is some output:

╰─$ oc get vmi
NAME                       AGE   PHASE       IP             NODENAME
windows2019-installation   46h   Succeeded   10.131.1.114   worker1.green

╰─$ oc get pod
NAME                                           READY   STATUS      RESTARTS   AGE
virt-launcher-windows2019-installation-54dck   0/2     Completed   0          46h

Thanks in advance.





Eran Ifrach

Mar 17, 2021, 11:16:14 AM
to kubevirt-dev
Hi Stu,
thanks for the fast reply.

The reason I'm asking is that I tried to clone the PVC using a DataVolume, and the clone was put on hold because the PVC is still attached to the pod (the PVC is configured with RWO).

Do you think the DV clone should run if the pod is not running?

Here is an example of the clone:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: win2k19
  namespace: openshift-virtualization-os-images
spec:
  source:
    pvc:
      namespace: "default"
      name: "pod-root-disk"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 12Gi
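To confirm the PVC is what's blocking things, this shows which pods still hold the source PVC (a sketch; it relies on the "Used By" field of oc describe output):

```shell
# List the pods currently mounting the source PVC; even a Completed
# virt-launcher pod keeps the claim in use from CDI's point of view.
oc describe pvc pod-root-disk -n default | grep -A1 "Used By"
```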

Alexander Wels

Mar 17, 2021, 11:19:30 AM
to Eran Ifrach, kubevirt-dev
On Wed, Mar 17, 2021 at 11:16 AM Eran Ifrach <eif...@redhat.com> wrote:
Hi Stu,
thanks for the fast reply.

The reason I'm asking is that I tried to clone the PVC using a DataVolume, and the clone was put on hold because the PVC is still attached to the pod (the PVC is configured with RWO).

Do you think the DV clone should run if the pod is not running?

Here is an example of the clone:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: win2k19
  namespace: openshift-virtualization-os-images
spec:
  source:
    pvc:
      namespace: "default"
      name: "pod-root-disk"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 12Gi


We have explicit code that checks if a PVC is in use, and the clone won't run if it is.
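One possible workaround (an untested sketch, not a recommendation; whether it is safe with runStrategy: Manual ties back to Stu's placeholder point earlier in the thread) is to delete the Completed launcher pod so the PVC is no longer considered in use:

```shell
# Deleting the Completed virt-launcher pod should release the PVC,
# letting the CDI clone proceed; note this also discards the pod's logs.
oc delete pod virt-launcher-windows2019-installation-54dck
```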
 