StatefulSet + StorageClass + CephFS Plugin


Wei Jin

Jun 19, 2017, 12:07:47 AM
to kubernetes-sig-storage
Hi, list,

I tried StatefulSet + StorageClass + Ceph RBD, and it works: each replica in the set gets its own PVC and PV, and both are created dynamically.
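For reference, the working RBD setup can be sketched roughly as follows. All names, the API versions, and the omitted StorageClass parameters are illustrative assumptions, not taken from this thread:

```yaml
# Illustrative sketch: a StorageClass plus a StatefulSet whose
# volumeClaimTemplates give every replica its own dynamically
# provisioned PVC/PV. Names are placeholders; the rbd parameters
# (monitors, admin secret, pool) are omitted here.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: kubernetes.io/rbd       # in-tree RBD provisioner
---
apiVersion: apps/v1beta1             # StatefulSet API group circa k8s 1.6/1.7
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:              # one PVC per replica
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rbd
      resources:
        requests:
          storage: 1Gi
```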
However, when I tried the same with CephFS, it failed.

Can CephFS be used in a StorageClass? Is that reasonable? And can CephFS be used in a StatefulSet? What's the expected behavior, and is that reasonable?
Will all replicas in the set share the filesystem?

As far as I know, StatefulSet is perfect for the case where each pod has its own persistent storage volume.
But for other cases, such as pods (whether or not they belong to the same service) sharing a volume, is RC + CephFS or NAS the best choice?
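For the shared-volume case, the main difference from the per-replica StatefulSet pattern is the PVC access mode. A minimal sketch, assuming a CephFS-backed StorageClass named "cephfs" exists (the class name and size are assumptions):

```yaml
# Illustrative sketch: a ReadWriteMany PVC that several pods can
# mount simultaneously. RBD only supports single-node access modes;
# a shared filesystem like CephFS or NFS is needed for RWX.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany          # many pods, many nodes, same volume
  storageClassName: cephfs # assumed to exist
  resources:
    requests:
      storage: 5Gi
```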

Wei Jin

Jun 19, 2017, 2:04:08 AM
to kubernetes-sig-storage
I looked through the in-tree CephFS plugin and found that it has no provisioning mechanism.
That might be the answer.

But I still have a question: what is the purpose of dynamically provisioning a filesystem? And what is the best practice for pods sharing volumes?
Thanks.

Saad Ali

Jun 19, 2017, 12:54:42 PM
to Wei Jin, Huamin Chen, Matthew Wong, kubernetes-sig-storage
> Can CephFS be used in a StorageClass?

The CephFS volume plugin does not implement the provisioner interface, therefore it does not support dynamic provisioning (or storage classes) out of the box. That said, as you've noticed, there is an external provisioner that can be used to enable these features.

> And can CephFS be used in a StatefulSet? What's the expected behavior?

StatefulSets depend on dynamic provisioning, so as long as you set up your StorageClass to point to the external provisioner, it should work.
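As a sketch, such a StorageClass would name the external provisioner rather than an in-tree kubernetes.io/* one. The provisioner string and parameter names below follow the external cephfs-provisioner project, but treat them (and the monitor address and secret names) as assumptions to check against that project's docs:

```yaml
# Illustrative sketch: a StorageClass that delegates provisioning
# to the external CephFS provisioner instead of an in-tree plugin.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs         # external provisioner name (assumed)
parameters:
  monitors: 10.0.0.1:6789            # placeholder monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin # placeholder secret
  adminSecretNamespace: cephfs
```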

> Will all replicas in the set share the filesystem?

I'm not sure how the external CephFS provisioner is implemented. +Matt Wong and Huamin Chen, who may know more.

> As far as I know, StatefulSet is perfect for the case that each pod has its own persistent storage volume.

Correct.

> But for other cases, like pods (whether or not they belong to the same service) sharing a volume, RC + CephFS or NAS is the best choice?

If you want each pod to share the same storage instance, then yes.

> But I still have a doubt: what's the purpose for dynamic provisioning a filesystem? What's the best practice for pods sharing volumes?

+Matt Wong and Huamin Chen, who may know more.



--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-storage" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-storage+unsub...@googlegroups.com.
To post to this group, send email to kubernetes-sig-storage@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-storage/3fdd5916-eab3-43ed-a621-6a3f86624b4c%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Huamin Chen

Jun 19, 2017, 1:04:13 PM
to Saad Ali, Wei Jin, Matthew Wong, kubernetes-sig-storage


Wei Jin

Jun 20, 2017, 7:57:22 AM
to kubernetes-sig-storage
Thank you, guys.

Does this provisioner support a different CephFS mount point, rather than always the root directory?
In fact, it's not safe for different services to mount the same whole filesystem.
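Independent of the provisioner, the in-tree cephfs volume source does let a pod mount a subdirectory rather than the filesystem root, via its path field. A sketch (the monitor address, path, and secret name are illustrative):

```yaml
# Illustrative sketch: mounting a CephFS subdirectory instead of "/"
# using the in-tree cephfs volume plugin's `path` field.
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-subdir
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    cephfs:
      monitors:
      - 10.0.0.1:6789            # placeholder monitor
      path: /volumes/service-a   # subdirectory, not the fs root
      user: admin
      secretRef:
        name: ceph-secret        # placeholder secret
```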

hc...@redhat.com

Jun 23, 2017, 4:17:55 PM
to kubernetes-sig-storage
Yes, each CephFS share will be in a different directory and accessible by a different Ceph client.

Wei Jin

Jul 4, 2017, 9:46:34 AM
to kubernetes-sig-storage
Hi, I tried the external CephFS plugin today, but got some errors.

1) First, I manually ran the provisioner with the docker run command, but without the parameter '-kubeconfig=/kube/config' because I have no config file.

I then created a PVC, and the PV was created automatically.
After that, I created a pod and tested the PV, but it failed with a write error (cd to the mount point, then touch foo):
"touch: cannot touch 'foo': Input/output error"

I am not sure whether it is related to kubeconfig, since I omitted it at startup.
I also set readOnly to false in the pod.yaml file.


2) Second, I ran the provisioner with the provided deployment.yaml,
but got errors like: "chmod: invalid mode: 'x+o'"

The error is from 'kubectl logs'; I think the Dockerfile might be the reason.

Could anyone give some clues about the "Input/output error"?

Thanks.

Wei Jin

Jul 6, 2017, 11:51:55 AM
to kubernetes-sig-storage
I looked through the ceph_volume_client library and found that it adds a namespace to volumes.
Unfortunately, my CephFS kernel client doesn't support namespaces,
so that triggers the "Input/output error".
Thanks.

tommy xiao

Jul 4, 2019, 5:34:54 AM
to piero.c...@gmail.com, kubernetes-sig-storage
You can use a hook to manage it.

On Thu, Jul 4, 2019 at 4:31 PM <piero.c...@gmail.com> wrote:
I had the same problem with the external provisioner, and I solved it by reading this issue:

You have to change the deployment definition by adding this to the args:
- '-disable-ceph-namespace-isolation=true'
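A sketch of where that flag lands in the provisioner's Deployment; the container name and image are placeholders, and only the flag itself comes from this thread:

```yaml
# Illustrative fragment of the external provisioner's Deployment
# spec; only the -disable-ceph-namespace-isolation arg is from
# this thread, everything else is a placeholder.
spec:
  template:
    spec:
      containers:
      - name: cephfs-provisioner           # placeholder name
        image: cephfs-provisioner:latest   # placeholder image
        args:
        - '-disable-ceph-namespace-isolation=true'
```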



--
Deshi Xiao
Twitter: xds2000
E-mail: xiaods(AT)gmail.com