How can I share a host directory with a VM?


Jacoby Hickerson

Nov 7, 2021, 7:29:30 PM
to kubevirt-dev
Hello everybody,

I'd like to share a read-only path with a VM. I've found a couple of links that seem to point to this capability in KubeVirt, but I haven't found an example.

I see that virtio-fs allows for this via qemu:
https://libvirt.org/kbase/virtiofs.html

And that it has been enabled at least experimentally in kubevirt:
https://github.com/kubevirt/kubevirt/pull/3493

But I'm not quite sure how to share a host path with the VM. For example, I'd like to map a host path into the VM: /mnt/drive/media -> /mnt/media

# Example of a vm deployment being used
---
# Source: vm/templates/clusterNamespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vm
---
# Source: vm/templates/kubevirt.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm
    helm.sh/chart: app-vm-0.1.3
    app.kubernetes.io/name: vm-release
    app.kubernetes.io/instance: release-test
  name: vm
  namespace: vm
spec:
  runStrategy: "Always"
  template:
    metadata:
      labels:
        kubevirt.io/vm: client-vm
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: sata
            name: datavolumedisk1
          - disk:
              bus: virtio
            name: disk
            bootOrder: 1
        machine:
          type:
        resources:
          requests:
            memory: 1Gi
      terminationGracePeriodSeconds: 0

      volumes:
      - emptyDisk:
          capacity: 512M
        name: datavolumedisk1
      - name: disk
        containerDisk:
          image: registry.io:5000/app-vm:latest

Thanks, any help with this is much appreciated!
Jacoby

Alexander Wels

Nov 8, 2021, 8:51:56 AM
to Jacoby Hickerson, kubevirt-dev
You can create a hostPath-based PV and a PVC that binds to it, then use that PVC as the volume backing virtio-fs:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: hostpath
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tmp/data1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.hostpath.csi/node
          operator: In
          values:
          - node-name # <-- replace with the node name where you want to share the path

Note that the storage capacity value doesn't really matter here, except that for a PVC to bind, the PV's capacity must be >= the PVC's requested size.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

That should match the PV above and be bound. Then you can use the PVC as a volume source in your VM and share it with the guest via virtio-fs. Note that you won't be able to live-migrate the VM, since it uses a hostPath volume.
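
For reference, a minimal sketch of how that PVC could then be wired into the VM spec with virtio-fs (assuming the virtio-fs feature gate, ExperimentalVirtiofsSupport, is enabled in the KubeVirt CR; the volume name shared-dir is illustrative):

spec:
  template:
    spec:
      domain:
        devices:
          filesystems:
          - name: shared-dir
            virtiofs: {}
      volumes:
      - name: shared-dir
        persistentVolumeClaim:
          claimName: example-claim

Inside the guest, the share then has to be mounted by its filesystem name (more on that further down the thread).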

Jacoby Hickerson

Nov 8, 2021, 1:56:39 PM
to kubevirt-dev
This was very helpful, thank you!

I think I'm almost there: I enabled the feature gate, set up my VM config to point to the PVC, and created the PV and PVC. However, the PVC is not being provisioned:
k describe pvc vm-hostpath
Name:          vm-hostpath
Namespace:     default
StorageClass:  local-path
Status:        Pending
Volume:
Labels:        <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                     From                                                                                                Message
  ----     ------                ----                    ----                                                                                                -------
  Normal   ExternalProvisioning  2m32s (x26 over 8m30s)  persistentvolume-controller                                                                         waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
  Normal   Provisioning          45s (x6 over 8m30s)     rancher.io/local-path_local-path-provisioner-5ff76fc89d-vt5wv_fc12a4ea-b296-444e-9978-9c6f6fcb60f5  External provisioner is provisioning volume for claim "default/vm-hostpath"
  Warning  ProvisioningFailed    45s (x6 over 8m30s)     rancher.io/local-path_local-path-provisioner-5ff76fc89d-vt5wv_fc12a4ea-b296-444e-9978-9c6f6fcb60f5  failed to provision volume with StorageClass "local-path": configuration error, no node was specified

I also updated the StorageClass to use Immediate volume binding:
k get sc local-path
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          Immediate           false                  34m

I'm not quite sure where to specify the node for the PVC or SC.
k describe pv vm-hostpath
Name:              vm-hostpath
Labels:            <none>
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:
Status:            Available
Claim:
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.hostpath.csi/node in [node1]
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data1
    HostPathType:
Events:            <none>

Thanks for any additional help!

Alexander Wels

Nov 8, 2021, 2:28:13 PM
to Jacoby Hickerson, kubevirt-dev
Hi,

So the example I gave was for a manually provisioned hostPath volume, preconfigured for a particular host. What you are doing is trying to use Immediate binding with a provisioner that is designed to work only with WaitForFirstConsumer. WaitForFirstConsumer gives the Kubernetes scheduler maximum flexibility when scheduling workloads: it doesn't commit to having the storage on a particular node until the workload is scheduled, so it can schedule based on the properties of the workload instead of the properties of the storage.
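
For illustration, a local-path-style StorageClass would normally look something like this (a sketch; only the provisioner name is taken from the earlier kubectl get sc output):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Switching volumeBindingMode to Immediate is what leads to the "no node was specified" error above: with no scheduled workload, the provisioner has no node to derive the path from.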

Now, I guess for your purposes you already know which node you want the PV on, so it is likely simpler to just create the PV yourself instead of asking a provisioner to do it for you. Also note that any dynamically provisioned hostPath volume will create a directory under some predefined base path and point the hostPath PV to that newly created directory, instead of to a path you specify. This is what the kubevirt hostpath provisioner does, and it is also what the rancher local-path provisioner does. Not sure if that is what you want.

A few more things to note. You do not NEED a storage class when manually provisioning PVs; Kubernetes will match PVs and PVCs without storage classes based on their properties. If you do want a storage class to group things and identify them more easily, you can create a storage class with no provisioner, something like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete

Now, in general it is not a great idea to use hostPath PVs for security reasons (pods get access to the host filesystem). And I don't quite understand what you are trying to do, but it appears you are mixing a dynamic provisioner (rancher local-path) with manual PV and PVC creation. You should pick one or the other, depending on why you need hostPath volumes and how they are populated.

Alexander Wels

Nov 8, 2021, 2:38:52 PM
to Jacoby Hickerson, kubevirt-dev
Totally forgot to mention that if you do use a storage class, you have to put storageClassName in both the PV and PVC that you create.
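
In other words, something like this (a sketch reusing the names from the earlier examples; nodeAffinity omitted for brevity):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath
spec:
  storageClassName: example-storage-class
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tmp/data1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: example-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi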

Jacoby Hickerson

Nov 10, 2021, 5:57:06 PM
to kubevirt-dev
This was very helpful, and I've got to the point where I have a PV and PVC bound and the VM using the PVC. However, I don't see any documentation on how to access the hostpath inside the VM.
In the VMI output below, there is no target specified under Volume Status:
k describe vmi fedora-vm -n fedora
Name:         fedora-vm
Namespace:    fedora
Labels:       kubevirt.io/nodeName=node1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version:  kubevirt.io/v1
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2021-11-10T21:15:22Z
  Finalizers:
    foregroundDeleteVirtualMachine
  Generation:  10
  Managed Fields:
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
        f:labels:
          .:
          f:kubevirt.io/nodeName:
          f:kubevirt.io/vm:
        f:ownerReferences:
          .:
          k:{"uid":"4327ce8c-3570-454d-a89a-c96519564e20"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:domain:
          .:
          f:devices:
            .:
            f:disks:
            f:filesystems:
          f:firmware:
            .:
            f:uuid:
          f:machine:
            .:
            f:type:
          f:resources:
            .:
            f:requests:
              .:
              f:memory:
        f:terminationGracePeriodSeconds:
        f:volumes:
      f:status:
        .:
        f:activePods:
          .:
          f:c40f85fe-61a0-43e7-8adc-b0a78f9e6da1:
        f:conditions:
        f:guestOSInfo:
        f:launcherContainerImageVersion:
        f:nodeName:
        f:qosClass:
    Manager:      virt-controller
    Operation:    Update
    Time:         2021-11-10T21:20:36Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:interfaces:
        f:migrationMethod:
        f:phase:
        f:volumeStatus:
    Manager:    virt-handler
    Operation:  Update
    Time:       2021-11-10T21:20:37Z
  Owner References:
    API Version:           kubevirt.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  VirtualMachine
    Name:                  fedora-vm
    UID:                   4327ce8c-3570-454d-a89a-c96519564e20
  Resource Version:        3173104
  UID:                     fbb9b0c4-6ae1-4c09-946d-a5d4e8b2b8dd
Spec:
  Domain:
    Cpu:
      Cores:    1
      Sockets:  1
      Threads:  1
    Devices:
      Disks:
        Disk:
          Bus:       sata
        Name:        datavolumedisk1
        Boot Order:  1
        Disk:
          Bus:  virtio
        Name:   fedora-disk
      Filesystems:
        Name:  vm-hostpath
        Virtiofs:
      Interfaces:
        Bridge:
        Name:  default
    Features:
      Acpi:
        Enabled:  true
    Firmware:
      Uuid:  bb0bebea-15a4-55d1-8a61-c5e10c294b1d
    Machine:
      Type:  q35
    Resources:
      Requests:
        Cpu:     100m
        Memory:  1Gi
  Networks:
    Name:  default
    Pod:
  Termination Grace Period Seconds:  0
  Volumes:
    Name:  vm-hostpath
    Persistent Volume Claim:
      Claim Name:  vm-hostpath
    Empty Disk:
      Capacity:  512M
    Name:        datavolumedisk1
    Container Disk:
      Image:              registry.io:5000/apps-vm:latest
      Image Pull Policy:  IfNotPresent
    Name:                 fedora-disk
Status:
  Active Pods:
    c40f85fe-61a0-43e7-8adc-b0a78f9e6da1:  node1
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Message:               cannot migrate VMI: PVC vm-hostpath is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
    Reason:                DisksNotLiveMigratable
    Status:                False
    Type:                  LiveMigratable
    Last Probe Time:       <nil>
    Last Transition Time:  2021-11-10T21:20:33Z
    Status:                True
    Type:                  Ready
  Guest OS Info:
  Interfaces:
    Ip Address:  10.42.0.31
    Ip Addresses:
      10.42.0.31
    Mac:                             ee:13:ec:b3:b7:d2
    Name:                            default
  Launcher Container Image Version:  quay.io/kubevirt/virt-launcher:v0.42.1
  Migration Method:                  BlockMigration
  Node Name:                         node1
  Phase:                             Running
  Qos Class:                         Burstable
  Volume Status:
    Name:    datavolumedisk1
    Target:  sda
    Name:    fedora-disk
    Target:  vda
    Name:    vm-hostpath
    Target:
Events:
  Type    Reason   Age                 From          Message
  ----    ------   ----                ----          -------
  Normal  Started  83m                 virt-handler  VirtualMachineInstance started.
  Normal  Created  23m (x17 over 83m)  virt-handler  VirtualMachineInstance defined.

I updated the VirtualMachine manifest above with the following:
    spec:
      domain:
        devices:
          filesystems:
          - name: vm-hostpath
            virtiofs: {}
          disks:
          - disk:
              bus: sata
            name: datavolumedisk1
          - disk:
              bus: virtio
            name: fedora-disk
            bootOrder: 1
        machine:
          type:
        resources:
          requests:
            memory: 1Gi
      terminationGracePeriodSeconds: 0
      volumes:
      - name: vm-hostpath
        persistentVolumeClaim:
          claimName: vm-hostpath
      - emptyDisk:
          capacity: 512M
        name: datavolumedisk1
      - name: fedora-disk
        containerDisk:
          image: registry.io:5000/app-vm:latest

Roman Mohr

Nov 11, 2021, 7:14:25 AM
to Jacoby Hickerson, kubevirt-dev
On Wed, Nov 10, 2021 at 11:57 PM Jacoby Hickerson <hicke...@gmail.com> wrote:
This was very helpful and I got to the point where I have a PV, PVC bound and the VM bound to the PVC.  However, I don't see any documentation on how to access the hostpath inside of the VM. 

You are right, we seem to be missing documentation there. Thanks for mentioning it.
 
In the below vmi output I don't see a target specified under the Volume Status.  


The last step is to use cloud-init or a manual command to mount the filesystem. This works as described here: https://www.kernel.org/doc/html/latest/filesystems/virtiofs.html

You basically have to issue a command like this from within the guest:

```
mount -t virtiofs <fsname> /mnt
```
 
where <fsname> is the name of the filesystem in the YAML, in your case `vm-hostpath`.
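
To automate this at boot, one option is a cloud-init volume on the VM, roughly along these lines (a sketch; it assumes a cloudInitNoCloud volume named cloudinitdisk with a matching entry under devices.disks, and /mnt/media is just an illustrative mount point):

volumes:
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
      bootcmd:
      - "mkdir -p /mnt/media"
      - "mount -t virtiofs vm-hostpath /mnt/media"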

Best regards,
Roman

Jacoby Hickerson

Nov 12, 2021, 5:26:25 AM
to kubevirt-dev
Awesome Roman, this was the last step needed and now it works, thanks!

Also, thank you Alexander, I understand this a lot better now. I've deployed the kubevirt hostpath-provisioner and, using your instructions, created a PV/PVC that also shares the host's folder. Is this a safer provisioner than the rancher local-path, or does it have the same security issues? The plan is to share a single folder locally using virtiofs, since that has much better performance for pods/VMs than FUSE. I also plan to use the hostpath-provisioner to create dynamic storage using a different PVC.

The next step is to make the shared path read-only for the VM. I could make it read-only inside the guest by applying the -r mount option ('mount -r -t virtiofs content /mnt/content'), but I'd rather have it read-only when the VM is bound to the PVC. I haven't found a way to accomplish this, and mounting the entire device read-only on the host isn't feasible either.

Alexander Wels

Nov 12, 2021, 8:36:43 AM
to Jacoby Hickerson, kubevirt-dev
On Fri, Nov 12, 2021 at 4:26 AM Jacoby Hickerson <hicke...@gmail.com> wrote:
Awesome Roman, this was the last step needed and now it works thanks!

Also thank you Alexander, I understand this a lot better. I've now deployed the kubevirt hostpath-provisioner and using your instructions created a PV/PVC that also shares the host's folder. Is this a safer provisioner than the rancher local-path or does it also have the same security issues?

The legacy version of the hostpath provisioner is more or less the same as the rancher local-path; it uses the same mechanism. There is a base path on the host, and whenever someone requests a PVC, it creates a directory under that base path, then creates a PV that points to the directory it just created. From a security standpoint they are both the same. The hostpath provisioner CSI driver is a little better, in that it does create the directory, but then bind mounts it into the volumes available to the pod.
 
  The plan is to share a single folder locally using virtiofs since that has a lot better performance for pods/VM using fuse. I also plan to use the hostpath-provisioner to create dynamic storage using a different PVC.

How does this volume get populated? With both the hostpath provisioner and local-path, the provisioner creates a directory which is then made available to the pods/VMs (VMs are just processes in pods). If you have some external process that puts data in a particular directory on your host, I don't see how either one can work for you, as they create the directories for you. That is why I was talking about manually created PVs, since there you control the path you put in. A PVC is just some metadata describing a user asking for some storage; a PV is just some metadata explaining to Kubernetes how to make some piece of storage available.
 

The next step is to set the shared path readonly for the VM. I could make it readonly inside the guest VM by applying the -r mount option 'mount -r -t virtiofs content /mnt/content' but I'd rather have it readonly when the VM is bound to the PVC. I haven't found a way to accomplish this and mounting the entire device on the host readonly isn't feasible either.

When you manually create a PV you can give it the ROX (ReadOnlyMany) access mode, then when you create a PVC you give it the same access mode, and when they get bound they should be read-only. When using one of the dynamic provisioners, you create the PVC with ROX and it should create a matching PV that is also ROX. But the problem with that is: how do you populate a read-only volume?
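
For example, the PVC side would just swap the access mode, something like this sketch (reusing the earlier claim name):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi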
 

Jacoby Hickerson

Nov 15, 2021, 5:17:15 AM
to kubevirt-dev
The plan is to share a single folder locally using virtiofs since that has a lot better performance for pods/VM using fuse. I also plan to use the hostpath-provisioner to create dynamic storage using a different PVC.

How does this volume get populated? With both hostpath provisioner and local-path the provisioner creates a directory which is then made available to the pods/vms (vms are just processes in pods). If you have some external process that puts data in a particular directory on your host, I don't see how either one can work for you as they will create directories for you. That is why I was talking about manually created PVs, since you can control the path you put in there. A PVC is just some meta data detailing a user asking for some storage, a PV is just some meta data explaining to kubernetes how to make some piece of storage available.

It is a path located on the host's hard drive and is populated by an external process. Since using the suggested PV and adding the local: -> path: directive, the hostpath provisioner doesn't populate the path; it is now used as the path shared with the VM via virtiofs. So that works as desired.

The next step is to set the shared path readonly for the VM. I could make it readonly inside the guest VM by applying the -r mount option 'mount -r -t virtiofs content /mnt/content' but I'd rather have it readonly when the VM is bound to the PVC. I haven't found a way to accomplish this and mounting the entire device on the host readonly isn't feasible either.

When you manually create a PV you can give it the ROX (ReadOnlyMany) access mode, then when you create a PVC you can give it the same access mode and when they get bound they should be readonly. When using one of the dynamic provisioners, you create the PVC with ROX, and it should create a matching PV that is also ROX. But the problem with that is, how do you populate a read only volume?

Yes, I used ReadOnlyMany as the accessMode for the PV and PVC, but it appears not to be honored; I can still write to that shared path from the guest. Is it the case that ReadOnlyMany doesn't actually prevent the path from being written to, and is only an indicator of how it should be accessed?

Alexander Wels

Nov 15, 2021, 12:29:55 PM
to Jacoby Hickerson, kubevirt-dev

So it sounds to me like you are not actually using the hostpath provisioner for your shared volume. The point of the provisioner is that it creates the PV for you when a user requests a PVC; if you create the PV ahead of time, the provisioner has nothing to do. Which is fine, it works just as well without the hostpath provisioner. As for making it read-only: if the hostPath PV does not honor the read-only access mode, then when you populate the shared directory with the outside process you can always bind mount that directory somewhere else with the read-only flag on the bind mount, and point the PV's path to the new bind-mounted location. This is essentially how the hostpath CSI driver honors the read-only access mode. There is no such code in the legacy hostpath driver.

 

Jacoby Hickerson

Nov 15, 2021, 12:58:02 PM
to kubevirt-dev
I previously tried to bind mount it ro, but the guest is still able to write to that path. The only way I've managed to block writes is to mount the virtiofs device read-only inside the guest. I'm a bit surprised it didn't honor the ro bind mount. FYI, I am using the kubevirt CSI hostpath driver.

Alexander Wels

Nov 15, 2021, 1:25:12 PM
to Jacoby Hickerson, kubevirt-dev

Unless you have a really old util-linux, the following should create a read-only bind mount of a directory. I verified this locally:
[awels@awels ~]$ mkdir test
[awels@awels ~]$ sudo mount --bind -o ro test /mnt
[awels@awels ~]$ cd /mnt
[awels@awels mnt]$ ls
[awels@awels mnt]$ ls -al
total 8
drwxrwxr-x.  2 awels awels 4096 Nov 15 12:10 .
dr-xr-xr-x. 20 root  root  4096 Mar  2  2021 ..
[awels@awels mnt]$ touch bla
touch: cannot touch 'bla': Read-only file system

So on the node, assuming your shared path is /mnt/shared and you have a /mnt/ro directory, you should be able to run `mount --bind -o ro /mnt/shared /mnt/ro`, which gives a read-only view of /mnt/shared at /mnt/ro. Then in the PV you create, point the path to /mnt/ro and set the accessMode to ROX. There should then be no way for the VM to write to the hostPath volume, since it is a read-only filesystem on the node, unless something I'm not aware of happens when the kubelet mounts the directory into the container.
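
If the bind mount needs to survive node reboots, one option (my own assumption, not something verified in this thread) is an /etc/fstab entry on the node along these lines:

# /etc/fstab on the node (illustrative paths)
/mnt/shared  /mnt/ro  none  bind,ro  0  0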

Also again, if you are the one creating the PV, it will not use the hostpath provisioner at all, since the PV already exists (the entire job of the provisioner is to create a PV in response to the user creating a PVC, which is not needed if the PV already exists)

Jacoby Hickerson

Nov 15, 2021, 2:03:17 PM
to kubevirt-dev
I'm using the following util-linux on the host:
rpm -qf /usr/bin/mount
util-linux-2.36.1-1.fc33.x86_64

And the steps you suggested are what I did as well:
# on the host machine
mount -o bind,ro /var/spool/links/disk/content/install/ /mnt/bind/virt/content/
[root@node1 ~]# touch /mnt/bind/virt/content/file
touch: cannot touch '/mnt/bind/virt/content/file': Read-only file system
ls /mnt/bind/virt/content/
3006

# on the guest, if I touch a file it writes to that path:
[root@fedora-vm ~]# touch  /mnt/content/file
[root@fedora-vm ~]# ls -l /mnt/content/
total 4
drwxr-xr-x 3  614  201 4096 Nov 11 23:36 3006
-rw-r--r-- 1 root root    0 Nov 15 18:19 file

Also, I believe I need a PVC in order for the kubevirt chart to work with virtiofs? In the volumes: section, persistentVolumeClaim: is the only reference I know of.
volumes:
      - name: content
        persistentVolumeClaim:
          claimName: content

The PV points to /mnt/bind/virt/content:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: content
  labels:
    type: data
spec:
  storageClassName: "hostpath-csi"
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  local:
    path: "/mnt/bind/virt/content"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node1
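
For completeness, the matching PVC presumably looks something like this (a sketch; the claim name comes from the volumes: snippet above, and the namespace from the VMI output earlier):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: content
  namespace: fedora
spec:
  storageClassName: "hostpath-csi"
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi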

I also plan to use the kubevirt hostpath provisioner to create a dynamic PVC for scratch non-volatile storage.