Deprecation of in-tree CephFS driver in 1.28

Humble Devassy Chirammal

Apr 26, 2023, 12:32:10 PM
to dev, kubernetes-...@googlegroups.com
Hey Team,

As we all know, there is an effort in the Kubernetes community to remove in-tree storage plugins in order to reduce external dependencies and security concerns in core Kubernetes. CSI plugins have become the standard and recommended approach. Thus, we are in the process of gradually deprecating all the in-tree external storage plugins and eventually removing them from the core Kubernetes codebase.

We would like to bring to your notice that we are planning to deprecate the CephFS in-tree driver (provisioner: kubernetes.io/cephfs) in the 1.28 release and to remove the code from the Kubernetes codebase in a subsequent release.

Since its introduction, this driver has provided very limited functionality (only inline volume source mounting) and has been largely stale for many releases with respect to both development and usage in clusters. Also, the CSI version of this driver has been available for a long time now. Over the last few releases we have been experimenting with a CSI migration path for this driver, but it does not appear to be worth the effort for a few reasons: the lack of in-tree driver users, the limited functionality it provides, and some differences between the CSI driver and the in-tree spec. Additionally, we have not received any feedback or interest from the community in maintaining this driver during our past attempts to gather input.
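For reference, using the in-tree driver today means embedding a cephfs volume source directly in a Pod spec (or in a statically created PV). A minimal sketch is below; the monitor address, path, and Secret name are placeholders, not values from any real setup:

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-inline-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cephfs-vol
      mountPath: /mnt/cephfs
  volumes:
  - name: cephfs-vol
    cephfs:                      # in-tree volume source proposed for deprecation
      monitors:
      - 10.16.154.78:6789        # placeholder Ceph monitor address
      path: /                    # path within CephFS to mount
      user: admin
      secretRef:
        name: ceph-secret        # placeholder Secret holding the Ceph key
      readOnly: false

After removal, the equivalent functionality is available through the CephFS CSI driver (ceph-csi), which, as far as we know, also supports dynamic provisioning on top of what the in-tree plugin offered.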

We are therefore proposing deprecation and removal of the CephFS in-tree driver from our codebase: preferably deprecation in 1.28 and removal in an upcoming release. However, we would like to get your feedback on this plan and are willing to revisit the removal version accordingly.

If you are using the CephFS in-tree driver in your cluster setup, please reply with the information below before 10-May-2023; this will help us decide when to completely remove this code from the repo.

- What version of Kubernetes are you running in your setup?
- How often do you upgrade your cluster?
- Which vendor or distro are you using? Is it a (downstream) product offering, or is the upstream CephFS driver used directly in your setup?

Awaiting your feedback.

Thanks,
Humble