Getting started on CSI driver


sandee...@gmail.com

Oct 1, 2018, 4:20:24 PM
to kubernetes-sig-storage-wg-csi
Hi,

We are getting started on building a CSI driver for the latest version of the vSphere storage solution. I am looking for the following info:
  1. Spec/doc that explains all components/controllers involved in k8s and the CSI driver. I would like to understand how volume create/attach/detach/delete operations flow through the various components.
  2. Sample CSI driver code I can use as a reference to build the CSI driver.
  3. Any existing tests that validate a CSI driver. I can use these to test the driver we are building.
  4. Supported CSI driver deployment models
  5. Spec/doc/guide that explains migrating from in-tree volume provisioners to a CSI driver. (I guess this should be cloud-provider specific, but I just want to know the challenges we will run into.)
Could you help me get pointers to the above? If there are more docs/specs I need to be aware of before starting on a CSI driver, please send pointers to them too.

Btw, I have gone through the CSI specification and I'm aware of some CSI drivers.

Thanks,
Sandeep

David Zhu

Oct 1, 2018, 4:54:16 PM
to sandee...@gmail.com, kubernetes-sig...@googlegroups.com
Hi Sandeep,

I have done my best to answer your questions inline.

  1. Spec/doc that explains all components/controllers involved in k8s and the CSI driver. I would like to understand how volume create/attach/detach/delete operations flow through the various components.
Here is a Kubernetes CSI Design Doc that details how Kubernetes interacts with CSI: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md
  2. Sample CSI driver code I can use as a reference to build the CSI driver.
You can check out this for an (almost) beta driver that implements most new features of CSI including snapshots + topology: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver
  3. Any existing tests that validate a CSI driver. I can use these to test the driver we are building.
We have the csi-sanity tests: https://github.com/kubernetes-csi/csi-test
They act as a smoke test for a driver and check its basic correctness.
You could also add integration tests here with Kubernetes: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/storage/csi_volumes.go
  4. Supported CSI driver deployment models
I'm not 100% sure what this means; it might be covered in the Kubernetes CSI design doc. Let me know if you have further questions on this.
  5. Spec/doc/guide that explains migrating from in-tree volume provisioners to a CSI driver.



--

David Zhu | Software Engineer | dy...@google.com | 412-436-6859

sandee...@gmail.com

Oct 2, 2018, 11:31:24 PM
to kubernetes-sig-storage-wg-csi
Thanks David!

sandee...@gmail.com

Oct 18, 2018, 5:58:03 PM
to kubernetes-sig-storage-wg-csi
Hi,

I was reading the CSI design doc and I have the following questions:
  1. Does CSI support static provisioning (volume already provisioned out of band)?
  2. What are the use cases for CSI to support secrets in the various APIs? Assuming the credentials to provision the volume are the same, how is passing secrets to every CSI API better than a solution where the volume driver is configured with a specific secret holding the necessary credentials?
  3. How is NodeStageVolume/NodeUnstageVolume different from NodePublishVolume/NodeUnpublishVolume in terms of functionality?
  4. In-tree volume provisioners can implement Attacher.VolumesAreAttached and BulkVolumeVerifier to make sure that volumes are periodically monitored and reconciled. I don't see such support in CSI. Does CSI support monitoring/reconciling volumes?
Thanks,
Sandeep

Jan Safranek

Oct 19, 2018, 3:41:38 AM
to kubernetes-sig...@googlegroups.com, sandee...@gmail.com
On 18/10/2018 23:58, sandee...@gmail.com wrote:
> Hi,
>
> I was reading the CSI design doc
> <https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md> and
> I have the following questions:
>
> 1. Does CSI support static provisioning(volume already provisioned out
> of band)?

Yes, there is nothing special in CSI. Just create a PV with a
CSIPersistentVolumeSource carrying the fields that your CSI driver understands.
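
A minimal sketch of such a statically provisioned PV (the driver name and volume handle below are hypothetical; use your driver's registered name and the ID of the pre-provisioned volume):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vol
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: csi.example.com   # hypothetical driver name
    volumeHandle: vol-12345   # ID of the volume provisioned out of band
    fsType: ext4
```

A PVC that matches the capacity and access modes can then bind to this PV and the CSI driver handles attach/mount as usual.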

> 2. What are the use cases for CSI to support secrets in various APIs?
> Assuming that the creds to provision the volume is same, how is
> using secrets for every CSI API better than a solution where volume
> driver is configured with specific a secret having necessary creds?

If you create your PVs manually, they can refer to
ControllerPublishSecretRef, NodeStageSecretRef and NodePublishSecretRef,
which are passed to the CSI driver in those particular calls. I don't
think Kubernetes can do the same with dynamically provisioned volumes;
there, the driver itself is expected to have the secrets necessary to
attach / mount the volumes it provisioned.
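
As a sketch, the csi: stanza of such a manually created PV could carry the per-call secret references (the secret names here are hypothetical; the YAML field names are the camelCase forms of the fields mentioned above):

```yaml
  csi:
    driver: csi.example.com   # hypothetical driver name
    volumeHandle: vol-12345
    controllerPublishSecretRef:   # passed to ControllerPublishVolume
      name: attach-secret
      namespace: default
    nodeStageSecretRef:           # passed to NodeStageVolume
      name: stage-secret
      namespace: default
    nodePublishSecretRef:         # passed to NodePublishVolume
      name: mount-secret
      namespace: default
```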

> 3. How is NodeStageVolume/NodeUnstageVolume different to
> NodePublishVolume/NodeUnpublishVolume in terms of functionality?

They behave differently when multiple pods on the same node use the same
volume. NodeStage is called once and should mount the volume at a given
"global" directory. NodePublish is then called separately for each pod,
possibly with different options (e.g. some pods get the volume
read-only). Typically, NodePublish should bind-mount the volume from the
"global" directory to the given "pod" directory.

NodeStage typically applies to volumes backed by a block device: you
mount the device just once (NodeStage) and then bind-mount it into
individual pods (NodePublish). NodeUnstage is called only once all
pods are gone, so you don't need to track the mounts in the driver. For
NFS, you're fine with just NodePublish.
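
The stage/publish split above can be sketched like this; a minimal illustration only (not a real driver), assuming a block-device-backed volume and a driver that shells out to mount. All names and paths are hypothetical:

```go
package main

import "fmt"

// stageCmd returns the command NodeStageVolume would run: mount the
// block device once at the per-volume "global" staging path.
func stageCmd(device, stagingPath string) string {
	return fmt.Sprintf("mount %s %s", device, stagingPath)
}

// publishCmd returns the bind-mount NodePublishVolume would run for
// each pod, optionally read-only for pods that request it.
func publishCmd(stagingPath, targetPath string, readOnly bool) string {
	opts := "bind"
	if readOnly {
		opts = "bind,ro"
	}
	return fmt.Sprintf("mount -o %s %s %s", opts, stagingPath, targetPath)
}

func main() {
	staging := "/var/lib/kubelet/plugins/example/staging/vol-1" // hypothetical
	// One NodeStage per volume per node...
	fmt.Println(stageCmd("/dev/sdb", staging))
	// ...then one NodePublish per pod, with per-pod options.
	fmt.Println(publishCmd(staging, "/var/lib/kubelet/pods/pod-a/vol-1", false))
	fmt.Println(publishCmd(staging, "/var/lib/kubelet/pods/pod-b/vol-1", true))
}
```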


> 4. In-tree volume provisioners can implement 
> Attacher.VolumesAreAttached and BulkVolumeVerifier to make sure that
> the volumes are periodically monitored and reconciled. I don't see
> such a support in CSI. Does CSI support monitoring/reconciling volumes?

Not yet. We assume that the driver itself keeps the volumes attached.
There is no way for a driver to tell Kubernetes that something is no
longer attached (or mounted, ...).

Patrick Ohly

Oct 19, 2018, 4:01:29 AM
to sandee...@gmail.com, kubernetes-sig-storage-wg-csi
<sandee...@gmail.com> writes:

> Hi,
>
> I was reading the CSI design doc
> <https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md> and
> I have the following questions:
>
> 1. Does CSI support static provisioning(volume already provisioned out
> of band)?

In theory it should be possible to create a PV manually that has the
necessary attributes, and then let the CSI driver do the attach/publish
part. But I think someone tried about a month ago and didn't succeed, so
there might still be some open issues.

> 2. What are the use cases for CSI to support secrets in various APIs?
> Assuming that the creds to provision the volume is same, how is using
> secrets for every CSI API better than a solution where volume driver is
> configured with specific a secret having necessary creds?

The CSI driver(s) in a cluster typically get deployed by an
administrator once for different users. Then users of the cluster can
create their own StorageClass where they link to their own, personal
keys for volume provisioning and usage.

For example, ceph-csi does it like that:
- "admin" part:
https://github.com/ceph/ceph-csi/tree/master/deploy/rbd/kubernetes
- "user" part:
https://github.com/ceph/ceph-csi/tree/master/examples/rbd
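
As a sketch, the "user" part could be a StorageClass pointing at the user's own provisioning secret, roughly like this (the secret and class names are hypothetical, and the exact parameter keys understood by the external-provisioner sidecar depend on its version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: user-rbd
provisioner: rbd.csi.ceph.com
parameters:
  # Keys read by the external-provisioner sidecar; the secret holds the
  # user's own credentials for CreateVolume.
  csi.storage.k8s.io/provisioner-secret-name: my-ceph-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
```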

> 3. How is NodeStageVolume/NodeUnstageVolume different to
> NodePublishVolume/NodeUnpublishVolume in terms of functionality?

My understanding is that NodeStageVolume is meant to be called only once
per node, even when later multiple pods using that volume get started on
that node. NodePublishVolume then gets called once for each pod. But
because NodeStageVolume must be idempotent, it has to tolerate being
called once for each pod. The bigger advantage is probably for
NodeUnstageVolume, because when that gets called, the CSI driver can be
sure that the last pod using the volume is gone.

But I could be wrong. I've not seen a CSI driver which actually splits
its node operations into NodeStageVolume and NodePublishVolume. There's
also no E2E test which schedules two active pods to the same node at the
same time.

> 4. In-tree volume provisioners can implement Attacher.VolumesAreAttached
> and BulkVolumeVerifier to make sure that the volumes are periodically
> monitored and reconciled. I don't see such a support in CSI. Does CSI
> support monitoring/reconciling volumes?

I don't know about that - probably not.

Bye, Patrick