The mount code (pkg/util/mount) is reasonably well tested, but it pulls in the entire Kubernetes source tree when CSI plugins have to depend on it. With a few exceptions, most of the mount code could be refactored into a minimal set of utilities that can be layered, making it easier to develop CSI plugins. I noticed that quite a few CSI plugins take a dependency on k/k just to get that one package.
Suggestions:
- Refactor pkg/util/mount into a minimal dependency library and move it out of k/k (perhaps into staging and have it published out).
- Remove the dependency on k8s.io/apimachinery/pkg/util/sets so the library is easily usable.

In general, there is a lot of boilerplate for CSI, and pkg/util/mount is a significant portion of reuse.
If no one is working on this, I can help do it. :-)
/assign
@wglian is this something you are still looking at? It's come up in sig-storage, and it is on the planning spreadsheet for 1.12. It's something I'd like to work on as well, so I want to see where you're at with it.
While it didn't get any traction, I will point out that when some coworkers and I wrote several CSI drivers against the 0.1 version of the spec, we saw this problem and factored this code out into a project that eventually landed at https://github.com/akutz/gofsutil. It looks quite a bit different now, because we found the k/k code didn't handle bind mounts of block devices correctly, which was something new to CSI and K8s at the time.
@cofyc did some work recently in pkg/util/mount to detect bind mounts better.
What I would like to see is this handled in at least 3 steps:
@msau42 Awesome. I plan to start working on this now, and I agree with the refactor approach.
I do wonder where the resulting library should end up, though. Would kubernetes/utils be the right place, rather than a kubernetes-sigs repo?
k/utils seems odd to me, though, as one goal is to not have to vendor a bunch of code you don't care about. But k/utils is intended to hold many different projects, so if you want one thing out of it, you have to vendor all of k/utils. I like seeing things in separate repos. =)
Location doesn't hold up the refactoring, however, so I'll work on that!
Can it just be a standalone package, like kubernetes/utils/mount?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/remove-lifecycle stale
/remove-lifecycle stale
/remove-kind bug
/kind cleanup
Hi @msau42,
Now that #68513 is merged, I wanted to confirm that the next step here would be to move pkg/util/mount to be a package in k8s.io/utils.
Yup!
@codenrhoden let me know if you need a hand with git surgery
@dims @msau42 The git-foo wasn't too bad. Just slow to filter on the whole k/k repo. =)
I've opened the PR at kubernetes/utils#100, feel free to take a look.
Once that is in, we'll need to be careful not to let any fixes land in k/k/pkg/util/mount, otherwise they might get lost. Once the package goes into k8s.io/utils, I can open the PR on k/k to make use of it!
/remove-lifecycle stale