Hi everybody!

I have already asked for help on Stack Overflow (https://stackoverflow.com/questions/77437162/kubernetes-statefulset-volumemounts-of-various-provisioners) and in the sig-storage Slack channel (https://kubernetes.slack.com/archives/C09QZFCE5/p1699349200555969), but so far with no luck.

I'm struggling to understand the right way to specify the `name` of `volumeMounts` in `statefulset.yaml`. I'm trying to deploy the Cassandra stateful application (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/) but, clearly, I'm making some mistake.

This is my Kubernetes cluster:
```
root@k8s-eu-1-master:~# kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
k8s-eu-1-master     Ready    control-plane   41h   v1.28.2
k8s-eu-1-worker-1   Ready    <none>          41h   v1.28.2
k8s-eu-1-worker-2   Ready    <none>          41h   v1.28.2
k8s-eu-1-worker-3   Ready    <none>          41h   v1.28.2
k8s-eu-1-worker-4   Ready    <none>          41h   v1.28.2
k8s-eu-1-worker-5   Ready    <none>          41h   v1.28.2
```
with NFS shared folders:
```
root@k8s-eu-1-master:~# df -h | grep /srv/
xx.xxx.xxx.xxx:/srv/shared-k8s-eu-1-worker-1   391G  6.1G  365G   2%  /mnt/data
yy.yyy.yyy.yyy:/srv/shared-k8s-eu-1-worker-2   391G  6.1G  365G   2%  /mnt/data
zz.zzz.zzz.zz:/srv/shared-k8s-eu-1-worker-3    391G  6.1G  365G   2%  /mnt/data
pp.ppp.ppp.pp:/srv/shared-k8s-eu-1-worker-4    391G  6.1G  365G   2%  /mnt/data
qq.qqq.qqq.qqq:/srv/shared-k8s-eu-1-worker-5   391G  6.1G  365G   2%  /mnt/data
```
I installed one `nfs-subdir-external-provisioner` per worker, each with its own storage class:

```
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-1-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
>   --set nfs.server=xx.xxx.xxx.xxx \
>   --set nfs.path=/srv/shared-k8s-eu-1-worker-1 \
>   --set storageClass.name=k8s-eu-1-worker-1 \
>   --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-1
NAME: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov  6 17:28:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-2-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
>   --set nfs.server=yy.yyy.yyy.yyy \
>   --set nfs.path=/srv/shared-k8s-eu-1-worker-2 \
>   --set storageClass.name=k8s-eu-1-worker-2 \
>   --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-2
NAME: k8s-eu-1-worker-2-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov  6 17:31:15 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-3-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
>   --set nfs.server=zz.zzz.zzz.zz \
>   --set nfs.path=/srv/shared-k8s-eu-1-worker-3 \
>   --set storageClass.name=k8s-eu-1-worker-3 \
>   --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-3
NAME: k8s-eu-1-worker-3-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov  6 17:39:25 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-4-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
>   --set nfs.server=pp.ppp.ppp.pp \
>   --set nfs.path=/srv/shared-k8s-eu-1-worker-4 \
>   --set storageClass.name=k8s-eu-1-worker-4 \
>   --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-4
NAME: k8s-eu-1-worker-4-nfs-subdir-external-provisioner
LAST DEPLOYED: Tue Nov  7 08:25:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-5-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
>   --set nfs.server=qq.qqq.qqq.qqq \
>   --set nfs.path=/srv/shared-k8s-eu-1-worker-5 \
>   --set storageClass.name=k8s-eu-1-worker-5 \
>   --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-5
NAME: k8s-eu-1-worker-5-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov  6 17:49:21 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
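If I understand the chart correctly, each of these Helm releases should also have created a `StorageClass` along these lines (reconstructed from the `--set` values above for worker-1; I have not pasted the actual objects, and the fields below the provisioner are chart defaults I am assuming, not verified output):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-eu-1-worker-1                    # from --set storageClass.name
provisioner: k8s-sigs.io/k8s-eu-1-worker-1   # from --set storageClass.provisionerName
# Assumed chart defaults (not verified on my cluster):
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  archiveOnDelete: "true"
```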
```
root@k8s-eu-1-master:~# kubectl get deployments
NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner   1/1     1            1           16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner   1/1     1            1           16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner   1/1     1            1           16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner   1/1     1            1           85m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner   1/1     1            1           16h

root@k8s-eu-1-master:~# kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8dx8f4j   1/1     Running   0          16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner-ffdfb98dk9mrw   1/1     Running   0          16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner-7c9797c8jpzkv   1/1     Running   0          16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner-6bd84f54b2xx2   1/1     Running   0          86m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner-84976cd7lttsn   1/1     Running   0          16h
```
Given these `PersistentVolumes`, `PersistentVolumeClaims`, and the shared NFS folders shown above, what do I have to specify as the `volumeMounts` `name` and `mountPath`?

For example, this is the pod created by the provisioner for `worker-1`:
```
root@k8s-eu-1-master:~# kubectl describe pod k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh
Name:             k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh
Namespace:        default
Priority:         0
Service Account:  k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Node:             k8s-eu-1-worker-2/yy.yyy.yyy.yyy
Start Time:       Tue, 07 Nov 2023 13:46:04 +0100
Labels:           app=nfs-subdir-external-provisioner
                  pod-template-hash=74787c8d8b
                  release=k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Annotations:      cni.projectcalico.org/containerID: b87d543f81fb00cae352e05e205bb6477405e816ea0e386217a9a5c95dcf2193
                  cni.projectcalico.org/podIP: 192.168.236.14/32
                  cni.projectcalico.org/podIPs: 192.168.236.14/32
Status:           Running
IP:               192.168.236.14
IPs:
  IP:  192.168.236.14
Controlled By:  ReplicaSet/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8d8b
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   containerd://3292b89c024a7efaada811cba01132f22235fc962e10d9b8988b534a9a76914e
    Image:          registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       registry.k8s.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 07 Nov 2023 13:46:05 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/k8s-eu-1-worker-1
      NFS_SERVER:        xx.xxx.xxx.xxx
      NFS_PATH:          /srv/shared-k8s-eu-1-worker-1
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gpbqt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    xx.xxx.xxx.xxx
    Path:      /srv/shared-k8s-eu-1-worker-1
    ReadOnly:  false
  kube-api-access-gpbqt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22m   default-scheduler  Successfully assigned default/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh to k8s-eu-1-worker-2
  Normal  Pulled     22m   kubelet            Container image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal  Created    22m   kubelet            Created container nfs-subdir-external-provisioner
  Normal  Started    22m   kubelet            Started container nfs-subdir-external-provisioner
```
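From that `describe` output, the provisioner pod's volume wiring corresponds to roughly this spec fragment (my reconstruction, not the chart's actual template):

```yaml
# Container side: where the NFS export appears inside the provisioner pod
volumeMounts:
  - name: nfs-subdir-external-provisioner-root
    mountPath: /persistentvolumes
# Pod side: the volume that the volumeMount's name refers to
volumes:
  - name: nfs-subdir-external-provisioner-root
    nfs:
      server: xx.xxx.xxx.xxx               # masked, as above
      path: /srv/shared-k8s-eu-1-worker-1
```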
What is the "stateful pod volume" name referred to by https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-statefulset.yaml:

> ```
> # These volume mounts are persistent. They are like inline claims,
> # but not exactly because the names need to match exactly one of
> # the stateful pod volumes.
> ```

I tried with `"/persistentvolumes"`, but I got the same error.
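For clarity, this is how I currently understand the matching is supposed to work in the Cassandra StatefulSet (the `storageClassName` is my guess at pointing it to one of my classes above; this is not a confirmed working config):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  # ...
  template:
    spec:
      containers:
        - name: cassandra
          # ...
          volumeMounts:
            - name: cassandra-data          # must match a volumeClaimTemplate name
              mountPath: /cassandra_data    # path *inside* the container
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data                # the "stateful pod volume" name
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: k8s-eu-1-worker-1 # my guess: one of the classes above
        resources:
          requests:
            storage: 1Gi
```

Is this the right relationship between the `volumeMounts` name and the `volumeClaimTemplates` name, and if so, which `storageClassName` should I use given five separate provisioners?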
Looking forward to your kind help.

Raphael