Driver not registered


Paul Archard

Aug 30, 2021, 6:45:26 PM
to container-storage-interface-community
Hi everyone,

I'm trying to develop a CSI driver for our proprietary storage system. The intention is for pods to use it to mount a filesystem that attaches to some existing back-end storage. I'm using a CSI ephemeral volume for this, so I didn't implement the Controller service at all, but I did implement the Identity and Node services, and I'm running the node-driver-registrar and liveness-probe sidecars.
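For reference, the node plugin is deployed as a DaemonSet along these lines (a rough sketch; the image names are placeholders and the liveness-probe sidecar is omitted for brevity):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-xy-node
spec:
  selector:
    matchLabels:
      app: csi-xy-node
  template:
    metadata:
      labels:
        app: csi-xy-node
    spec:
      containers:
        - name: node-driver-registrar
          image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
          args:
            - --csi-address=/csi/csi.sock
            # Where the kubelet can reach the driver's socket on the host
            - --kubelet-registration-path=/var/lib/kubelet/plugins/x.y.com/csi.sock
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-driver
          image: registry.example.com/xy-csi-driver:latest  # placeholder
          args:
            - --endpoint=unix:///csi/csi.sock
          securityContext:
            privileged: true
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
      volumes:
        # The driver's own socket directory
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/x.y.com
            type: DirectoryOrCreate
        # Where the kubelet watches for registration sockets
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
        # Where the publish target paths live
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory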

I modelled the code on the existing NFS driver with some changes. I can successfully register and run the driver, and "kubectl get csidriver" shows the driver correctly with its name, e.g. "x.y.com". The pod is also running correctly, and the logs show that the registrar has the correct name and is listening for connections.
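For reference, the CSIDriver object itself looks roughly like this (podInfoOnMount depends on whether the driver needs pod details at publish time):

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: x.y.com
spec:
  # No controller attach step for this driver
  attachRequired: false
  podInfoOnMount: true
  # Inline ephemeral volumes only
  volumeLifecycleModes:
    - Ephemeral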

However, when I try to bring up a pod using the driver I get an error:

MountVolume.SetUp failed for volume "view" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name x.y.com not found in the list of registered CSI drivers

My pod descriptor looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
    - name: fs-test
      image: alpine:latest
      volumeMounts:
        - name: view
          mountPath: /mnt
  volumes:
    - name: view
      csi:
        driver: x.y.com
        # Passed as NodePublishVolumeRequest.volume_context,
        # valid options depend on the driver.
        volumeAttributes:
          server: myservice.svc

I also tried this with the NFS driver, changing its manifest to say Ephemeral instead of Persistent, and I get a similar error when I try to use that one.

Any help would be very much appreciated, or if I'm missing required information I'd be happy to provide that.

Thanks in advance,
Paul

thonic

Aug 31, 2021, 6:59:21 AM
to Paul Archard, container-storage-interface-community
Hi Paul,

Maybe the reason is that there is no node-driver pod running on the node where the consuming pod 'some-pod' will run. You may need to check which node some-pod was scheduled to, and then check whether a node-driver pod is running on it.

What I mean is, for example, that you ran `kubectl get csidriver` against node-A, which had a node-driver pod running, while some-pod was scheduled to node-B without any node-driver pods.

I ran into a similar problem once, and eventually found out that the driver was only registered with the kubelet on each node rather than with the cluster as a whole.
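Something like this can confirm it (the grep pattern is just an example; adjust it to your deployment):

# Which node was some-pod scheduled to?
kubectl get pod some-pod -o wide

# Is a node-driver pod running on that node?
kubectl get pods -A -o wide | grep csi

# Which CSI drivers has the kubelet on that node actually registered?
kubectl get csinode <node-name> -o yaml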


Thonic



Paul Archard

Aug 31, 2021, 1:46:16 PM
to container-storage-interface-community
Just to follow up on this in case it's useful to anyone else: the problem turned out to be an incorrect directory for the Unix domain sockets. I was using /var/lib/kubelet as the base directory, but that was the wrong place. I'm using microk8s on Ubuntu, and it stores its runtime information in /var/snap/microk8s/common/var/lib/kubelet. Setting that volume path correctly seems to have fixed the problem.
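For anyone on microk8s hitting the same thing, here are the parts of the node DaemonSet that had to change relative to the sketch in my first message (roughly; the registrar's registration path and the hostPath volumes all move under the snap prefix):

      containers:
        - name: node-driver-registrar
          args:
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/snap/microk8s/common/var/lib/kubelet/plugins/x.y.com/csi.sock
        - name: csi-driver
          volumeMounts:
            # Must match the target paths the kubelet passes to NodePublishVolume
            - name: pods-mount-dir
              mountPath: /var/snap/microk8s/common/var/lib/kubelet/pods
              mountPropagation: Bidirectional
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins/x.y.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins_registry
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/pods
            type: Directory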

Thanks,
Paul
