Virtctl image-upload throwing Permission denied error for path /data/disk.img


Ranjeet Kharade IN

Oct 5, 2023, 2:00:33 AM
to kubevirt-dev
Hi All,

I am facing a permission denied error while uploading. Has anyone faced a similar error before?
 
virtctl image-upload pvc win11cd-pvc --access-mode=ReadWriteMany --size 25Gi --image-path=/home/devops/Win11.qcow2  --insecure --storage-class=blossom-retain --uploadproxy-url=https://XX.XX.XX.XX:443 

Error :
PVC default/win11cd-pvc not found
PersistentVolumeClaim default/win11cd-pvc created
Waiting for PVC win11cd-pvc upload pod to be ready...
Pod now ready
Uploading data to  https://XX.XX.XX.XX:443 

 1.97 MiB / 5.44 GiB [>---------------------------------------------------------------------------------------------------------------------------]   0.04% 0s

unexpected return value 500, Saving stream failed: Unable to transfer source data to target file: stat /data/disk.img: permission denied

k logs -f cdi-upload-win11cd-pvc
I1005 05:43:47.037809       1 uploadserver.go:74] Running server on 0.0.0.0:8443
I1005 05:43:49.006272       1 uploadserver.go:330] Content type header is ""
I1005 05:43:49.006297       1 data-processor.go:356] Calculating available size
E1005 05:43:49.006868       1 data-processor.go:360] stat /data/disk.img: permission denied
I1005 05:43:49.006904       1 data-processor.go:368] Checking out file system volume size.
E1005 05:43:49.007280       1 data-processor.go:371] permission denied
I1005 05:43:49.007287       1 data-processor.go:376] Request image size not empty.
I1005 05:43:49.007300       1 data-processor.go:381] Target size -1.
I1005 05:43:49.007378       1 data-processor.go:255] New phase: TransferDataFile
E1005 05:43:49.007769       1 data-processor.go:251] stat /data/disk.img: permission denied
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).initDefaultPhases.func4
        pkg/importer/data-processor.go:200
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
        pkg/importer/data-processor.go:248
kubevirt.io/containerized-data-importer/pkg/uploadserver.newAsyncUploadStreamProcessor
        pkg/uploadserver/uploadserver.go:443
kubevirt.io/containerized-data-importer/pkg/uploadserver.(*uploadServerApp).uploadHandlerAsync.func1
        pkg/uploadserver/uploadserver.go:337
net/http.HandlerFunc.ServeHTTP
        GOROOT/src/net/http/server.go:2109
net/http.(*ServeMux).ServeHTTP
        GOROOT/src/net/http/server.go:2487
kubevirt.io/containerized-data-importer/pkg/uploadserver.(*uploadServerApp).ServeHTTP
        pkg/uploadserver/uploadserver.go:264
net/http.serverHandler.ServeHTTP
        GOROOT/src/net/http/server.go:2947
net/http.(*conn).serve
        GOROOT/src/net/http/server.go:1991
runtime.goexit
        GOROOT/src/runtime/asm_amd64.s:1594

Alex Kalenyuk

Oct 5, 2023, 4:22:53 AM
to kubevirt-dev
Hey, thanks for reporting, which version of kubevirt & CDI is this?
Could you also attach the definition of the `blossom-retain` storage class? I just want to make sure it supports fsGroup.

Ranjeet Kharade IN

Oct 5, 2023, 4:35:13 AM
to kubevirt-dev
  • CDI version : v1.57.0
  • Kubernetes version : 1.22.8
  • KubeVirt v1.0.0

k describe sc blossom-retain

Name:                  blossom-retain
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           csi.trident.netapp.io
Parameters:            backendType=ontap-nas,media=hdd,provisioningType=thin
AllowVolumeExpansion:  <unset>
MountOptions:
  rw
  nfsvers=3
  proto=tcp
ReclaimPolicy:      Retain
VolumeBindingMode:  Immediate
Events:             <none>

~$ k get CSIDriver
NAME                    ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
csi.trident.netapp.io   true             false            false             <unset>         false               Persistent   19h



$ k describe CSIDriver csi.trident.netapp.io
Name:         csi.trident.netapp.io
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2023-10-04T13:11:10Z
  Managed Fields:
    API Version:  storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:attachRequired:
        f:fsGroupPolicy:
        f:podInfoOnMount:
        f:requiresRepublish:
        f:storageCapacity:
        f:volumeLifecycleModes:
          .:
          v:"Persistent":
    Manager:         tridentctl
    Operation:       Update
    Time:            2023-10-04T13:11:10Z
  Resource Version:  133615864
  UID:               3c4ae5d3-a115-4d2c-b4a8-62aa91d953b7
Spec:
  Attach Required:     true
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   false
  Requires Republish:  false
  Storage Capacity:    false
  Volume Lifecycle Modes:
    Persistent
Events:  <none>

Alex Kalenyuk

Oct 5, 2023, 4:51:03 AM
to kubevirt-dev
So one potential issue I see is the k8s version (1.22).
KubeVirt supports 3 Kubernetes versions at any given time; for 1.0 we support 1.25-1.27.

I would also double-check that you are able to write to a blossom-retain PVC as a non-root user,
with something like https://pastebin.mozilla.org/e3ZGGKwt
followed by `kubectl exec` and trying to write a file under /disk.
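Since the pastebin link may expire, here is a minimal sketch of such a test pod. Everything below is an assumption rather than the actual pastebin content: the PVC/pod names and the busybox image are placeholders, and UID/GID 107 is used because that is the user the rootless CDI upload server runs as.

```yaml
# Hypothetical write-permission probe; names and image are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: perm-test-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: blossom-retain
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: perm-test
spec:
  securityContext:
    runAsUser: 107   # same non-root UID as the CDI upload server
    runAsGroup: 107
    fsGroup: 107
  containers:
  - name: writer
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /disk
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: perm-test-pvc
```

Then something like `kubectl exec perm-test -- touch /disk/probe` shows whether a non-root write succeeds on this storage class.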

Ranjeet Kharade IN

Oct 5, 2023, 5:02:08 AM
to kubevirt-dev
I appreciate your time, Alex. Would you kindly inform me which version of KubeVirt I should consider for use with Kubernetes 1.22?

So I can try the same and see whether it's working there or not.

Also, in the current setup, I am not even able to access the path /data/ by using kubectl exec on the cdi-upload-XXX pod.

Alexander Wels

Oct 5, 2023, 8:26:17 AM
to Ranjeet Kharade IN, kubevirt-dev
The mapping between Kubernetes/CDI/KubeVirt is roughly this:

k8s: 1.27 (1.26/1.25)  CDI: 1.57  KubeVirt: 1.0.x
k8s: 1.26 (1.25/1.24)  CDI: 1.56  KubeVirt: 0.59.x
k8s: 1.25 (1.24/1.23)  CDI: 1.55  KubeVirt: 0.58.x
k8s: 1.24 (1.23/1.22)  CDI: 1.49  KubeVirt: 0.53.x

If you are using 1.22.x I would suggest using CDI 1.49 and KubeVirt 0.53.x

One more thing that looks suspicious is
- nfsvers=3

CDI doesn't support NFSv3 due to some issues with it.
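If the NetApp backend supports it, moving the mount options off NFSv3 might be worth trying. A hedged sketch of an adjusted StorageClass follows: the provisioner and parameters are copied from the `describe` output above, while the class name and the `nfsvers=4.1` value are assumptions about what the backend allows, not a verified fix.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blossom-retain-nfs4   # hypothetical name for the variant class
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-nas
  media: hdd
  provisioningType: thin
mountOptions:
  - rw
  - nfsvers=4.1   # assumption: backend supports NFSv4.1; CDI has issues with v3
  - proto=tcp
reclaimPolicy: Retain
volumeBindingMode: Immediate
```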

--
You received this message because you are subscribed to the Google Groups "kubevirt-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubevirt-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubevirt-dev/3013b36f-4b2b-4df9-971d-414d9f1c4538n%40googlegroups.com.

Ranjeet Kharade IN

Oct 6, 2023, 6:56:01 AM
to kubevirt-dev
Hi Alexander,

I have tried the suggested versions, CDI 1.49 and KubeVirt 0.53.x.

But this time I am getting a timeout error; also, I couldn't find the upload pod in my namespace, though the PVC has been created successfully and bound.

virtctl image-upload pvc win11cd-pvc --access-mode=ReadWriteMany --size 10Gi --image-path=/home/devops/Win11_22H2_English_x64v2.iso  --storage-class=blossom-standard

PVC default/win11cd-pvc not found
PersistentVolumeClaim default/win11cd-pvc created
Waiting for PVC win11cd-pvc upload pod to be ready...
timed out waiting for the condition

Also, I can see only the CDI operator pod running in the cdi namespace:

k get pod -n cdi
NAME                            READY   STATUS    RESTARTS   AGE
cdi-operator-6c789c4bc5-8g8ms   1/1     Running   0          38m

Alexander Wels

Oct 6, 2023, 8:09:45 AM
to Ranjeet Kharade IN, kubevirt-dev
If you are just seeing the cdi-operator in the cdi namespace, then either the CDI CR (which is what triggers deploying CDI) was not created, or there is some error, which should be visible in the operator logs, that prevents it from deploying CDI.

Ranjeet Kharade IN

Oct 6, 2023, 8:40:41 AM
to kubevirt-dev
I can see the below error in the operator logs:

{"level":"error","ts":1696594240.0308707,"logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found",

I have just followed the below steps to deploy CDI. Is there anything else I am missing here?

kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

Michael Henriksen

Oct 6, 2023, 11:02:58 AM
to Ranjeet Kharade IN, kubevirt-dev
On Fri, Oct 6, 2023 at 8:40 AM 'Ranjeet Kharade IN' via kubevirt-dev <kubevi...@googlegroups.com> wrote:
I can see the below error in the operator logs:

{"level":"error","ts":1696594240.0308707,"logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found",

Ranjeet Kharade IN

Oct 6, 2023, 11:37:56 AM
to kubevirt-dev
Still, it didn't help. I deleted everything related to CDI, including the namespace, but I can still see the below error message in the operator logs, and this is the only pod I can see under the cdi namespace.

{"level":"error","ts":"2023-10-06T15:34:29Z","logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found",

Michael Henriksen

Oct 6, 2023, 11:49:29 AM
to Ranjeet Kharade IN, kubevirt-dev
On Fri, Oct 6, 2023 at 11:38 AM 'Ranjeet Kharade IN' via kubevirt-dev <kubevi...@googlegroups.com> wrote:
Still, it didn't help. I deleted everything related to CDI, including the namespace, but I can still see the below error message in the operator logs, and this is the only pod I can see under the cdi namespace.

{"level":"error","ts":"2023-10-06T15:34:29Z","logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found",

Turns out that is not a terminal error and is actually expected when reconciling the CDI resource the first time.  Can you post the rest of the cdi-operator log? 

Ranjeet Kharade IN

Oct 6, 2023, 11:52:21 AM
to kubevirt-dev
Please find the logs:

k logs -f cdi-operator-6c789c4bc5-crwlx -n cdi
{"level":"info","ts":1696606132.0133839,"logger":"cmd","msg":"Go Version: go1.17.10"}
{"level":"info","ts":1696606132.0134609,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
I1006 15:28:53.065388       1 request.go:665] Waited for 1.042165527s due to client-side throttling, not priority and fairness, request: GET:https://100.80.0.1:443/apis/operators.coreos.com/v1?timeout=32s
{"level":"info","ts":1696606137.7224708,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1696606137.7227542,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1696606137.724144,"logger":"cdi-operator","msg":"","VARS":"{OperatorVersion:v1.49.0 ControllerImage:quay.io/kubevirt/cdi-controller:v1.49.0 DeployClusterResources:true ImporterImage:quay.io/kubevirt/cdi-importer:v1.49.0 ClonerImage:quay.io/kubevirt/cdi-cloner:v1.49.0 APIServerImage:quay.io/kubevirt/cdi-apiserver:v1.49.0 UploadProxyImage:quay.io/kubevirt/cdi-uploadproxy:v1.49.0 UploadServerImage:quay.io/kubevirt/cdi-uploadserver:v1.49.0 Verbosity:1 PullPolicy:IfNotPresent PriorityClassName: Namespace:cdi InfraNodePlacement:0xc000860cc0}"}
{"level":"info","ts":1696606137.7326996,"logger":"cdi-operator","msg":"Unable to get controller reference, using namespace"}
{"level":"info","ts":1696606137.73275,"logger":"cmd","msg":"Starting the Manager."}
{"level":"info","ts":1696606137.7329695,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
I1006 15:28:57.733032       1 leaderelection.go:248] attempting to acquire leader lease cdi/cdi-operator-leader-election-helper...
I1006 15:28:57.747635       1 leaderelection.go:258] successfully acquired lease cdi/cdi-operator-leader-election-helper
{"level":"info","ts":1696606137.7478657,"logger":"controller.cdi-operator-controller","msg":"Starting EventSource","source":"kind source: *v1beta1.CDI"}
{"level":"info","ts":1696606137.747935,"logger":"controller.cdi-operator-controller","msg":"Starting Controller"}
{"level":"info","ts":1696606137.849034,"logger":"controller.cdi-operator-controller","msg":"Starting workers","worker count":1}
{"level":"info","ts":1696606174.5901086,"logger":"cdi-operator","msg":"Reconciling CDI","Request.Namespace":"","Request.Name":"cdi"}
{"level":"error","ts":1696606174.8160288,"logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found","stacktrace":"kubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.getAPIServerCABundle\n\tpkg/operator/resources/cluster/apiserver.go:542\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createAPIService\n\tpkg/operator/resources/cluster/apiserver.go:175\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createDynamicAPIServerResources\n\tpkg/operator/resources/cluster/apiserver.go:51\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createResourceGroup\n\tpkg/operator/resources/cluster/factory.go:102\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createAllResources\n\tpkg/operator/resources/cluster/factory.go:88\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.CreateAllDynamicResources\n\tpkg/operator/resources/cluster/factory.go:77\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).GetAllResources\n\tpkg/operator/controller/cr-manager.go:126\nkubevirt.io/containerized-data-importer/vendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler.(*Reconciler).WatchDependantResources\n\tvendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler/reconciler.go:433\nkubevirt.io/containerized-data-importer/vendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler.(*Reconciler).Reconcile\n\tvendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler/reconciler.go:129\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).Reconcile\n\tpkg/operator/controller/controller.go:236\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller
.go:114\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"}

Michael Henriksen

Oct 6, 2023, 12:00:21 PM
to Ranjeet Kharade IN, kubevirt-dev
Nothing else in the log? It should continue to reconcile and fill up the log. The error is not returned. See:



Ranjeet Kharade IN

Oct 6, 2023, 12:04:26 PM
to kubevirt-dev
Found one more thing in the logs, but it gets printed as an info log:

{"level":"error","ts":1696608122.4285352,"logger":"cdi-operator","msg":"error getting apiserver ca bundle","error":"ConfigMap \"cdi-apiserver-signer-bundle\" not found","stacktrace":"kubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.getAPIServerCABundle\n\tpkg/operator/resources/cluster/apiserver.go:542\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createDataImportCronValidatingWebhook\n\tpkg/operator/resources/cluster/apiserver.go:244\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createDynamicAPIServerResources\n\tpkg/operator/resources/cluster/apiserver.go:57\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createResourceGroup\n\tpkg/operator/resources/cluster/factory.go:102\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.createAllResources\n\tpkg/operator/resources/cluster/factory.go:88\nkubevirt.io/containerized-data-importer/pkg/operator/resources/cluster.CreateAllDynamicResources\n\tpkg/operator/resources/cluster/factory.go:77\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).GetAllResources\n\tpkg/operator/controller/cr-manager.go:126\nkubevirt.io/containerized-data-importer/vendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler.(*Reconciler).CheckForOrphans\n\tvendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler/reconciler.go:363\nkubevirt.io/containerized-data-importer/vendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler.(*Reconciler).Reconcile\n\tvendor/kubevirt.io/controller-lifecycle-operator-sdk/pkg/sdk/reconciler/reconciler.go:152\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).Reconcile\n\tpkg/operator/controller/controller.go:236\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controll
er/controller.go:114\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1696608122.4288533,"logger":"cdi-operator","msg":"Orphan object exists","Request.Namespace":"","Request.Name":"cdi","obj":{"apiVersion":"apiextensions.k8s.io/v1","kind":"CustomResourceDefinition","name":"datavolumes.cdi.kubevirt.io"}}

Michael Henriksen

Oct 6, 2023, 12:13:19 PM
to Ranjeet Kharade IN, kubevirt-dev
Looks like CDI was not cleanly uninstalled before. You should manually delete cluster-scoped CDI resources like CRDs, apiservices, validatingwebhookconfigurations, and mutatingwebhookconfigurations. You can do something like the following to find the resources to delete:

k get crd | grep cdi
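Extending that idea, the same grep filter applies to the other cluster-scoped kinds. The sketch below demonstrates the filter on a canned resource list so it runs anywhere; the real `kubectl` invocations (shown in the comments) have to run against the cluster, and the resource names here are illustrative.

```shell
# On the cluster you would run, per leftover kind:
#   kubectl get crd,apiservice,validatingwebhookconfiguration,mutatingwebhookconfiguration -o name | grep cdi
# and then kubectl delete each matching resource before re-installing CDI.
# Demonstration of the filter on a canned list (names are examples):
printf '%s\n' \
  'customresourcedefinition.apiextensions.k8s.io/datavolumes.cdi.kubevirt.io' \
  'customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io' \
  'customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io' |
  grep 'cdi.kubevirt.io'
# matches only the two cdi.kubevirt.io entries
```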

Michael Henriksen

Oct 6, 2023, 12:16:05 PM
to Ranjeet Kharade IN, kubevirt-dev

Zang Li

Oct 6, 2023, 1:35:51 PM
to Michael Henriksen, Ranjeet Kharade IN, kubevirt-dev
In terms of the original question, we hit this issue as well. The reason is that you are using ReadWriteMany, but your storage class's driver has "Fs Group Policy: ReadWriteOnceWithFSType".
This means it does not honor the configured fsGroup when attaching the volume in RWX mode, i.e. the volume is attached as owned by the root group. But the CDI importer pod by default now runs in rootless mode, with user ID 107; therefore you get the permission error. You can verify this by creating a volume and attaching it to a pod with the same fsGroup as the importer pod, then checking the permissions of the attached volume. I didn't find a way to work around this, so I ended up falling back to root mode (by modifying the source code; I don't see a feature gate for it). I am using k8s 1.28 + CDI 1.57.0.
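For reference, the knob in question lives on the CSIDriver object. The sketch below shows the `describe` output above re-expressed as a manifest with `fsGroupPolicy` switched to `File`; whether Trident actually honors `File` for NFS RWX volumes is a question for the storage provider, so treat this as an illustration, not a verified fix.

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.trident.netapp.io
spec:
  attachRequired: true
  podInfoOnMount: false
  requiresRepublish: false
  storageCapacity: false
  # ReadWriteOnceWithFSType (the current setting) skips fsGroup ownership
  # changes for RWX volumes; "File" applies them regardless of access mode.
  fsGroupPolicy: File
  volumeLifecycleModes:
    - Persistent
```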

Best,
Zang




Ranjeet Kharade IN

Oct 6, 2023, 1:54:12 PM
to kubevirt-dev
Thanks, Zang, for letting me know that you also faced a similar issue.

Where did you modify the code to support this?

Zang Li

Oct 6, 2023, 2:10:01 PM
to Ranjeet Kharade IN, kubevirt-dev
Hi Ranjeet,
I changed RunAsNonRoot to false and commented out the line after it here.
You can also double check with your storage provider to see if they can support fsGroupPolicy for RWX.

Best,
Zang

Alexander Wels

Oct 6, 2023, 2:36:34 PM
to Zang Li, Ranjeet Kharade IN, kubevirt-dev
Two things:

First, if it is important that you run the import as root because, say, your storage does not support fsGroups, I would open a feature request against CDI so we can implement some kind of flag or feature gate to enable it, so you don't have to maintain a fork.
Second, if your Kubernetes cluster has Pod Security Admission enabled, it will start rejecting, or at least warning you about, the pod running as root. This is why we modified CDI to run rootless.
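As an illustration of that enforcement, Pod Security Admission is driven by standard namespace labels; the namespace name below is just an example.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-vms   # placeholder namespace
  labels:
    # With "restricted" enforced, a pod that runs as root (as a forked,
    # non-rootless importer would) is rejected at admission time;
    # "warn" alone only prints a warning on creation.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```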

Zang Li

Oct 6, 2023, 2:39:38 PM
to Alexander Wels, Ranjeet Kharade IN, kubevirt-dev
True, it is best to have a feature gate for this; KubeVirt does have a feature gate to disable rootless mode.

Alexander Wels

Oct 9, 2023, 7:57:06 AM
to Zang Li, Ranjeet Kharade IN, kubevirt-dev
I have opened a GitHub feature request to add a feature gate or flag [0].

Alexander

Ranjeet Kharade

Oct 9, 2023, 8:46:21 AM
to Alexander Wels, Zang Li, kubevirt-dev

I appreciate the assistance, Alexander and Zang.

 

This feature request would be valuable if you're utilizing storage that lacks support for fsGroups.

 

For the time being, I have used CDI version v1.49.0, and it unblocked me in uploading the data to the PVC.

 

But now I am facing a similar permission issue with the virt-launcher-vmi-windows pod while creating a VM.

 

I have enabled the feature gate as well, but I am still getting a permission denied error for the path /var/run/kubevirt-private/vmi-disks/pvcdisk/disk.im

Below are the logs from virt-launcher-vmi pods:

{"component":"virt-launcher","level":"info","msg":"Successfully connected to domain notify socket at /var/run/kubevirt/domain-notify-pipe.sock","pos":"client.go:170","timestamp":"2023-10-09T11:51:03.305591Z"}

{"component":"virt-launcher","level":"info","msg":"Domain name event: kwp_vmi-windows","pos":"client.go:423","timestamp":"2023-10-09T11:51:03.308164Z"}

{"component":"virt-launcher","kind":"","level":"info","msg":"Domain defined.","name":"vmi-windows","namespace":"kwp","pos":"manager.go:1002","timestamp":"2023-10-09T11:51:03.308580Z","uid":"21fb09de-e083-4635-a76b-b542faaed437"}

{"component":"virt-launcher","level":"info","msg":"DomainLifecycle event Domain event=\"defined\" detail=\"updated\" with event id 0 reason 1 received","pos":"client.go:465","timestamp":"2023-10-09T11:51:03.308593Z"}

{"component":"virt-launcher","level":"info","msg":"Monitoring loop: rate 1s start timeout 5m57s","pos":"monitor.go:181","timestamp":"2023-10-09T11:51:03.308951Z"}

{"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Shutoff(5):Unknown(0)","pos":"client.go:296","timestamp":"2023-10-09T11:51:03.309867Z"}

{"component":"virt-launcher","level":"info","msg":"Domain name event: kwp_vmi-windows","pos":"client.go:423","timestamp":"2023-10-09T11:51:03.310710Z"}

{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 90 with status 0","pos":"virt-launcher-monitor.go:125","timestamp":"2023-10-09T11:51:03.900040Z"}

{"component":"virt-launcher","level":"error","msg":"At least one cgroup controller is required: No such device or address","pos":"virCgroupDetectControllers:451","subcomponent":"libvirt","thread":"34","timestamp":"2023-10-09T11:51:03.911000Z"}

{"component":"virt-launcher","level":"error","msg":"Unable to read from monitor: Connection reset by peer","pos":"qemuMonitorIORead:423","subcomponent":"libvirt","thread":"107","timestamp":"2023-10-09T11:51:03.926000Z"}

{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 106 with status 256","pos":"virt-launcher-monitor.go:125","timestamp":"2023-10-09T11:51:03.926665Z"}

{"component":"virt-launcher","level":"error","msg":"internal error: qemu unexpectedly closed the monitor: 2023-10-09T11:51:03.925088Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img\",\"node-name\":\"libvirt-2-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}: Could not open '/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img': Permission denied","pos":"qemuProcessReportLogError:1971","subcomponent":"libvirt","thread":"107","timestamp":"2023-10-09T11:51:03.926000Z"}


spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
      - VMExport
      - ExperimentalVirtiofsSupport
      - Root

 

Events:
  Type     Reason            Age                   From                       Message
  ----     ------            ----                  ----                       -------
  Normal   SuccessfulCreate  6m10s                 virtualmachine-controller  Created virtual machine pod virt-launcher-vmi-windows-z76z6
  Warning  SyncFailed        6m5s                  virt-handler               server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2023-10-09T11:19:21.848810Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img\",\"node-name\":\"libvirt-2-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}: Could not open '/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img': Permission denied')"

 

Am I missing something here?

 

Thanks,

Ranjeet.


Ranjeet Kharade

Oct 10, 2023, 12:30:46 PM
to Alexander Wels, Zang Li, kubevirt-dev

Any idea about this error?

Are there any additional steps I am missing, apart from enabling the feature gate?

Alex Kalenyuk

Oct 10, 2023, 12:40:22 PM
to kubevirt-dev
Maybe worth checking 'grep DENIED /var/log/audit/audit.log' or something along those lines on the worker node
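Broadening that check slightly: SELinux AVC denials use lowercase "denied", so a case-insensitive grep is safer. The sketch below runs the filter against a canned audit line so it is self-contained; the line contents are an invented example of what a denial for the qemu process could look like, and on a real worker node you would point the grep at /var/log/audit/audit.log instead.

```shell
# Canned example audit log; fields are illustrative, not from a real node.
log=$(mktemp)
cat > "$log" <<'EOF'
type=AVC msg=audit(1696600000.123:456): avc:  denied  { open } for pid=1234 comm="qemu-kvm" path="/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img"
type=SYSCALL msg=audit(1696600000.123:456): arch=c000003e syscall=257 success=no
EOF
# On a worker node: grep -iE 'denied|avc' /var/log/audit/audit.log
grep -iE 'denied|avc' "$log"   # prints only the AVC denial line
rm -f "$log"
```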

Ranjeet Kharade

Oct 11, 2023, 1:08:42 PM
to Alex Kalenyuk, kubevirt-dev

I don't see anything marked DENIED in the logs, and I have even disabled AppArmor on my worker nodes.

 

I can access the mentioned path (/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img) through exec, but I don't know why it is reporting a permission denied error.

 

k logs -f virt-launcher-vmi-windows-mhrs5

 

{"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:76","timestamp":"2023-10-11T16:55:02.127342Z"}

{"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:79","timestamp":"2023-10-11T16:55:02.128570Z"}

{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu:///system","pos":"libvirt.go:496","timestamp":"2023-10-11T16:55:02.129629Z"}

{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon failed: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')","pos":"libvirt.go:504","timestamp":"2023-10-11T16:55:02.129943Z"}

{"component":"virt-launcher","level":"error","msg":"At least one cgroup controller is required: No such device or address","pos":"virCgroupDetectControllers:455","subcomponent":"libvirt","thread":"33","timestamp":"2023-10-11T16:55:04.592000Z"}

{"component":"virt-launcher","level":"info","msg":"Monitoring loop: rate 1s start timeout 4m39s","pos":"monitor.go:180","timestamp":"2023-10-11T16:55:04.594693Z"}

{"component":"virt-launcher","level":"error","msg":"Unable to read from monitor: Connection reset by peer","pos":"qemuMonitorIORead:460","subcomponent":"libvirt","thread":"101","timestamp":"2023-10-11T16:55:04.606000Z"}

{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 100 with status 256","pos":"virt-launcher-monitor.go:125","timestamp":"2023-10-11T16:55:04.606756Z"}

{"component":"virt-launcher","level":"error","msg":"internal error: qemu unexpectedly closed the monitor: 2023-10-11T16:55:04.605379Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img\",\"node-name\":\"libvirt-2-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}: Could not open '/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img': Permission denied","pos":"qemuProcessReportLogError:2051","subcomponent":"libvirt","thread":"101","timestamp":"2023-10-11T16:55:04.606000Z"}

{"component":"virt-launcher","level":"error","msg":"internal error: process exited while connecting to monitor: 2023-10-11T16:55:04.605379Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img\",\"node-name\":\"libvirt-2-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}: Could not open '/var/run/kubevirt-private/vmi-disks/pvcdisk/disk.img': Permission denied","pos":"qemuProcessReportLogError:2051","subcomponent":"libvirt","thread":"33","timestamp":"2023-10-11T16:55:04.607000Z"}

{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 97 with status 0","pos":"virt-launcher-monitor.go:125","timestamp":"2023-10-11T16:55:04.610567Z"}


Alex Kalenyuk

Oct 12, 2023, 4:27:11 AM
to kubevirt-dev
Yeah, definitely weird. Would you be able to join our Slack channel? https://kubernetes.slack.com/?redir=%2Farchives%2FC0163DT0R8X
We could do some on-the-fly debugging and engage some more folks as needed.