kind: PersistentVolume
apiVersion: v1
metadata:
  name: hostpath
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tmp/data1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.hostpath.csi/node
              operator: In
              values:
                - node-name   # <-- replace with the node name where you want to share the path
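For completeness, a minimal sketch of a PVC that should bind to a PV like the one above; the claim name is a placeholder, and storageClassName is set to the empty string so that no dynamic provisioner tries to satisfy the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce               # must match one of the PV's access modes
  resources:
    requests:
      storage: 10Gi               # must not exceed the PV's capacity
  storageClassName: ""            # empty string disables dynamic provisioning so the claim binds to a matching PV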
Hi, so the example I gave was for a manually provisioned hostpath volume that was preconfigured for a particular host. What you are doing is trying to use immediate binding with a provisioner that is designed to only work with WaitForFirstConsumer. WaitForFirstConsumer is designed to allow the Kubernetes scheduler maximum flexibility when scheduling workloads: it doesn't commit to having the storage on a particular node until the workload is scheduled, so it can use the properties of the workload to schedule instead of the properties of the storage.

Now I guess for your purposes you already know which node you want the PV on, so it is likely simpler to just create the PV yourself instead of asking a provisioner to do it for you. Also note that any dynamically provisioned hostpath volume will create a directory under some predefined path and point the hostpath PV at that created directory instead of one you specify. This is what the kubevirt hostpath provisioner does, and it is also what the rancher local-path provisioner does. Not sure if that is what you want.

A few more things to note. You do not NEED a storage class when manually provisioning PVs; Kubernetes will match PVs and PVCs without storage classes on their properties. If you do want a storage class to group things and identify them more easily, you can create a storage class with no provisioner, something like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete

Now in general it is not a great idea to use hostpath PVs for security reasons (pods now have access to the host filesystem). And I don't quite understand what you are trying to do, but it appears you are mixing a dynamic provisioner (rancher local-path) with manual PV and PVC creation. You should pick one or the other, depending on why you need hostpath volumes and how they are populated.
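If you do go the storage-class route, the grouping just comes from putting the same class name on both sides: add storageClassName: example-storage-class under the spec of the manually created PV, and request the same class from the PVC. A rough sketch of the PVC side (the claim name is a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                        # placeholder name
spec:
  storageClassName: example-storage-class    # same class name as on the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi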
This was very helpful and I got to the point where I have a PV and PVC bound, and the VM bound to the PVC. However, I don't see any documentation on how to access the hostpath inside the VM.
In the below vmi output I don't see a target specified under the Volume Status.
mount -t virtiofs <fsname> /mnt
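For context, the <fsname> in that command is the name of the virtiofs filesystem defined in the VMI spec. As far as I understand the KubeVirt virtiofs support (it was still behind a feature gate at the time), the relevant excerpt of the VirtualMachineInstance looks roughly like this, with placeholder names:

# relevant excerpt from the VirtualMachineInstance spec
spec:
  domain:
    devices:
      filesystems:
        - name: content             # the <fsname> used with mount -t virtiofs inside the guest
          virtiofs: {}
  volumes:
    - name: content                 # must match the filesystem name above
      persistentVolumeClaim:
        claimName: hostpath-claim   # placeholder: the PVC bound to the hostpath PV

Inside the guest that share would then be mounted with 'mount -t virtiofs content /mnt'.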
Awesome Roman, this was the last step needed and now it works, thanks!

Also thank you Alexander, I understand this a lot better. I've now deployed the kubevirt hostpath-provisioner and, using your instructions, created a PV/PVC that also shares the host's folder. Is this a safer provisioner than the rancher local-path, or does it also have the same security issues?
The plan is to share a single folder locally using virtiofs, since that has a lot better performance for pods/VMs than FUSE. I also plan to use the hostpath-provisioner to create dynamic storage using a different PVC.
The next step is to make the shared path read-only for the VM. I could make it read-only inside the guest VM by applying the -r mount option ('mount -r -t virtiofs content /mnt/content'), but I'd rather have it read-only when the VM is bound to the PVC. I haven't found a way to accomplish this, and mounting the entire device on the host read-only isn't feasible either.
> The plan is to share a single folder locally using virtiofs, since that has a lot better performance for pods/VMs than FUSE. I also plan to use the hostpath-provisioner to create dynamic storage using a different PVC.

How does this volume get populated? With both the hostpath provisioner and local-path, the provisioner creates a directory which is then made available to the pods/VMs (VMs are just processes in pods). If you have some external process that puts data in a particular directory on your host, I don't see how either one can work for you, as they will create directories for you. That is why I was talking about manually created PVs, since you can control the path you put in there. A PVC is just some metadata detailing a user asking for some storage; a PV is just some metadata explaining to Kubernetes how to make some piece of storage available.

It is a path located on the host hard drive and is populated by an external process. Since using the suggested PV and adding the local: -> path: directive, the hostpath provisioner doesn't populate the path; it is now using that path as the shared path to the VM via virtiofs. So that works as desired.

> The next step is to make the shared path read-only for the VM. I could make it read-only inside the guest VM by applying the -r mount option ('mount -r -t virtiofs content /mnt/content'), but I'd rather have it read-only when the VM is bound to the PVC. I haven't found a way to accomplish this, and mounting the entire device on the host read-only isn't feasible either.

When you manually create a PV you can give it the ROX (ReadOnlyMany) access mode; then when you create a PVC you can give it the same access mode, and when they get bound they should be read-only. When using one of the dynamic provisioners, you create the PVC with ROX and it should create a matching PV that is also ROX. But the problem with that is: how do you populate a read-only volume?

Yes, I used ReadOnlyMany as the accessMode for the PV and PVC, but it appears to not honor that: I can still write to that shared path from the guest. Is it the case that ReadOnlyMany doesn't actually prevent the path from being written to, and is only an indicator of how it should be accessed?
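For reference, the ROX setup described there is just the accessModes field on both objects; a minimal excerpt, with the rest of each spec as in the earlier examples:

# excerpt from the PV spec
spec:
  accessModes:
    - ReadOnlyMany

# excerpt from the PVC spec
spec:
  accessModes:
    - ReadOnlyMany

My understanding is that access modes are used for matching and scheduling rather than enforced as a write barrier by every volume plugin, which would be consistent with the behaviour described above.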
So it sounds to me like you are not actually using the hostpath provisioner for your shared volume. The point of the provisioner is that it creates the PV for you when the user requests a PVC. If you create the PV ahead of time, then the provisioner does not do it. Which is fine; it would work just as well without the hostpath provisioner.

For making it read-only: if the hostpath PV does not honor the read-only access mode, then when you populate the shared directory with the outside process you can always bind mount that directory somewhere else, put the read-only flag on the bind mount command, and point the path at the new bind-mounted location. This is essentially how the hostpath CSI driver honors the read-only access mode; there is no such code in the legacy hostpath driver.

I previously tried to bind mount it ro, but the guest is still able to write to that path. The only way I've kept it from writing to that path is to mount the virtiofs device read-only inside the guest. I'm a bit surprised it didn't honor the ro bind mount. FYI, I am using the kubevirt CSI hostpath driver.
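On the ro bind mount: one detail that often bites here is that with older util-linux a single 'mount -o bind,ro' silently drops the ro flag, and you need an explicit read-only remount of the bind mount; newer versions do that extra remount for you. A sketch with placeholder paths:

# the external process populates /srv/content; expose it read-only at /srv/content-ro
mount --bind /srv/content /srv/content-ro
mount -o remount,bind,ro /srv/content-ro
# then point the hostpath PV's path at /srv/content-ro instead of /srv/content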