Is this a BUG REPORT or FEATURE REQUEST?:
@kubernetes/sig-storage-bugs
What happened:
Normally this isn't a problem, but if you use subPath with hostPath volumes, the subpath bind mounts will not get cleaned up during the reconstruction window (i.e., when a pod is force deleted while the kubelet is down).
Noticed this issue while working on #61373
How is this different from #61372?
#61372 affects reconstruction for PVCs.
We have upgraded to the latest stable version, 1.9.7.
Is it now not possible to mount/inject only one file without affecting the other files in a directory?
If I use subPath with a file name, an error containing "not a directory" is raised.
Is there no solution for this problem yet? I've read a lot of related issues and tried different examples with no success.
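For the single-file case, the pattern that usually works is to point mountPath at the destination file itself and set subPath to the single file inside the volume, so the other files already in that directory are not shadowed. The "not a directory" error usually means a directory is being mounted onto a path that is a file (or vice versa). A minimal sketch, assuming a ConfigMap named app-config with a key app.conf (the pod name, ConfigMap name, and key are hypothetical placeholders):

kind: Pod
apiVersion: v1
metadata:
  name: single-file-example
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sleep", "1000000" ]
    volumeMounts:
    - name: config-vol
      # Target the file itself, not its parent directory, so the rest of
      # /etc/app keeps its existing contents.
      mountPath: /etc/app/app.conf
      # Mount only this one key/file from the volume.
      subPath: app.conf
  volumes:
  - name: config-vol
    configMap:
      name: app-config   # hypothetical ConfigMap holding the app.conf key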
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
We intend to address this by redesigning the volume reconstruction feature to use kubelet checkpointing.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Closed #61446.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
/lifecycle frozen
Reopened #61446.
@msau42: Reopened this issue.
In response to this:
/reopen
/lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Better practice: subpath
/assign
The in-tree hostPath driver makes no attempt to unmount:
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/pkg/volume/hostpath/host_path.go#L266-L274
Once that is fixed, these skips need to be removed:
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/test/e2e/storage/testsuites/subpath.go#L347-L350
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/test/e2e/storage/testsuites/subpath.go#L359-L362
"The in-tree hostPath driver makes no attempt to unmount"
So it's not really the TearDown code I quoted above; the subpaths would normally be cleaned up here:
https://github.com/kubernetes/kubernetes/blob/2e55595d3baeedcab09745355824f38a60cf6d08/pkg/volume/util/operationexecutor/operation_generator.go#L825-L830
But we never make any attempt to call UnmountVolume in this scenario.
https://github.com/kubernetes/kubernetes/blob/2e55595d3baeedcab09745355824f38a60cf6d08/pkg/kubelet/volumemanager/reconciler/reconciler.go#L185-L187
unmountVolumes loops over rc.actualStateOfWorld.GetAllMountedVolumes(), but if we're just using a hostPath (type = Directory), there won't be a mount point, and it won't be listed as a mounted volume. But the subpaths still use bind mounts, and those never get unmounted.
For future reference it can be reproduced manually with:
kind: Pod
apiVersion: v1
metadata:
  name: my-intree-inline-app
spec:
  containers:
  - name: my-frontend
    image: busybox
    volumeMounts:
    - mountPath: "/data"
      name: my-inline-vol
    - mountPath: "/data/subpath1"
      name: my-inline-vol
      subPath: subpath1
    - mountPath: "/data/subpath2"
      name: my-inline-vol
      subPath: subpath2
    command: [ "sleep", "1000000" ]
  volumes:
  - name: my-inline-vol
    hostPath:
      path: /tmp/dir1
      type: Directory
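As a follow-up on using this pod: create /tmp/dir1 on the node, start the pod, stop kubelet, force delete the pod (for example kubectl delete pod my-intree-inline-app --force --grace-period=0), then restart kubelet. The two subPath bind mounts of /tmp/dir1/subpath1 and /tmp/dir1/subpath2 stay behind on the node even though the pod is gone; they should still show up in the mount table under the pod's directory in /var/lib/kubelet/pods (the exact path there is an assumption on my part).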