[kubernetes/kubernetes] Hostpath doesn't support reconstruction (#61446)


Michelle Au

Mar 20, 2018, 9:16:54 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Is this a BUG REPORT or FEATURE REQUEST?:
@kubernetes/sig-storage-bugs

What happened:
The hostPath plugin doesn't support volume reconstruction. Normally this isn't a problem, but if you use subPath with hostPath volumes, the subPath bind mounts will not get cleaned up during the reconstruction window (i.e., when a pod is force deleted while kubelet is down).



Michelle Au

Mar 20, 2018, 9:17:24 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Noticed this issue while working on #61373

Jordan Liggitt

Mar 21, 2018, 7:05:30 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

How is this different from #61372?

Michelle Au

Mar 21, 2018, 7:50:11 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

#61372 affects reconstruction for PVCs

Víctor Rubiella Monfort

May 8, 2018, 2:25:05 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

We have upgraded to the latest stable version, 1.9.7.

Is it no longer possible to mount/inject a single file without affecting the other files in a directory?

If I use subPath with a file name, an error containing "not a directory" is raised.

Is there no solution to this problem yet? I have read a lot of related issues and tried different examples with no success.

fejta-bot

Aug 6, 2018, 2:29:38 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Michelle Au

Aug 6, 2018, 2:08:53 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Nov 4, 2018, 1:35:36 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Michelle Au

Nov 5, 2018, 8:18:54 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/remove-lifecycle stale

We intend to address this by redesigning the volume reconstruction feature to use kubelet checkpointing.

fejta-bot

Feb 3, 2019, 9:19:33 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Mar 5, 2019, 9:38:34 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Kubernetes Prow Robot

Apr 4, 2019, 11:23:20 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Closed #61446.

Kubernetes Prow Robot

Apr 4, 2019, 11:23:21 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

fejta-bot

Apr 4, 2019, 11:23:39 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Michelle Au

Dec 12, 2019, 10:25:20 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/reopen
/lifecycle frozen



Kubernetes Prow Robot

Dec 12, 2019, 10:25:22 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Reopened #61446.



Kubernetes Prow Robot

Dec 12, 2019, 10:25:22 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@msau42: Reopened this issue.

In response to this:

/reopen
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



Neujie

Dec 19, 2019, 8:56:50 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Better practice:

  • kill or delete the pod
  • clear all of the pod's subPath mounts, otherwise a lot of useless directories will be left behind (a cleanup sketch follows below)
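
A minimal sketch of that cleanup, assuming kubelet's default root directory and the usual subPath mount layout (/var/lib/kubelet/pods/<uid>/volume-subpaths/<volume>/<container>/<index>); this is illustrative tooling, not part of Kubernetes, and it needs root on the node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: cleanup-subpaths <pod-uid>")
		os.Exit(1)
	}
	// UID of the force-deleted pod, as named under /var/lib/kubelet/pods.
	base := filepath.Join("/var/lib/kubelet/pods", os.Args[1], "volume-subpaths")

	// Each subPath bind-mount target sits three levels below volume-subpaths:
	// <volumeName>/<containerName>/<subPathIndex>.
	targets, err := filepath.Glob(filepath.Join(base, "*", "*", "*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, target := range targets {
		if err := syscall.Unmount(target, 0); err != nil {
			fmt.Fprintf(os.Stderr, "unmount %s: %v\n", target, err)
			continue
		}
		// Drop the now-empty mount point so no useless dirs are left behind.
		os.Remove(target)
		fmt.Println("cleaned up", target)
	}
}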



Jonathan Dobson

Mar 24, 2022, 8:12:28 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/assign



Jonathan Dobson

Mar 29, 2022, 1:31:18 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

The in-tree hostPath driver makes no attempt to unmount:
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/pkg/volume/hostpath/host_path.go#L266-L274
Once that is fixed, these skips need to be removed:
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/test/e2e/storage/testsuites/subpath.go#L347-L350
https://github.com/kubernetes/kubernetes/blob/6c96ac04ff970809d815c871958be702d3065db1/test/e2e/storage/testsuites/subpath.go#L359-L362
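
For reference, the unmounter in the linked lines is effectively a no-op. Paraphrased as a standalone sketch (abbreviated; see the permalink above for the exact code):

package main

import "fmt"

type hostPathUnmounter struct{}

// TearDown does nothing: a hostPath volume is a pre-existing host
// directory, so the plugin has nothing to unmount at the volume level.
func (c *hostPathUnmounter) TearDown() error {
	return nil
}

// TearDownAt is rejected outright for host paths.
func (c *hostPathUnmounter) TearDownAt(dir string) error {
	return fmt.Errorf("TearDownAt() does not make sense for host paths")
}

func main() {
	u := &hostPathUnmounter{}
	fmt.Println(u.TearDown(), u.TearDownAt("/some/dir"))
}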



Jonathan Dobson

Mar 30, 2022, 12:59:06 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

The in-tree hostPath driver makes no attempt to unmount

So it's not really the TearDown code I quoted above; the subpaths would normally be cleaned up here:
https://github.com/kubernetes/kubernetes/blob/2e55595d3baeedcab09745355824f38a60cf6d08/pkg/volume/util/operationexecutor/operation_generator.go#L825-L830

But we never make any attempt to call UnmountVolume in this scenario.
https://github.com/kubernetes/kubernetes/blob/2e55595d3baeedcab09745355824f38a60cf6d08/pkg/kubelet/volumemanager/reconciler/reconciler.go#L185-L187

unmountVolumes loops over rc.actualStateOfWorld.GetAllMountedVolumes(), but if we're just using a hostPath volume (type: Directory), there won't be a mount point, so it won't be listed as a mounted volume. But the subPaths still use bind mounts, and those never get unmounted.
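
As background on "there won't be a mount point": kubelet's mount utilities decide this with a device-ID comparison, roughly like the standalone sketch below (illustrative, not kubelet's actual code). A plain hostPath directory has nothing mounted on it, so a check like this reports it as not a mount point.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// isLikelyNotMountPoint reports whether path is probably not a mount point:
// if path sits on the same device as its parent directory, nothing is
// mounted on it. (Bind mounts from the same filesystem can evade this.)
func isLikelyNotMountPoint(path string) (bool, error) {
	stat, err := os.Stat(path)
	if err != nil {
		return true, err
	}
	parentStat, err := os.Stat(filepath.Dir(path))
	if err != nil {
		return true, err
	}
	return stat.Sys().(*syscall.Stat_t).Dev == parentStat.Sys().(*syscall.Stat_t).Dev, nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: ismount <path>")
		os.Exit(1)
	}
	notMnt, err := isLikelyNotMountPoint(os.Args[1])
	fmt.Printf("%s: not a mount point=%v err=%v\n", os.Args[1], notMnt, err)
}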

For future reference, it can be reproduced manually with:

kind: Pod
apiVersion: v1
metadata:
  name: my-intree-inline-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-inline-vol
      - mountPath: "/data/subpath1"
        name: my-inline-vol
        subPath: subpath1
      - mountPath: "/data/subpath2"
        name: my-inline-vol
        subPath: subpath2
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-inline-vol
      hostPath:
        path: /tmp/dir1
        type: Directory

  1. create the pod with the spec above and check mount points
  2. kill kubelet
  3. force delete the pod
  4. start kubelet again
  5. check mount points and kubelet.log (a small helper for this check is sketched below)
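
For step 5, a small helper like this can spot the leftovers (a minimal sketch, assuming kubelet's default layout where subPath bind mounts land under /var/lib/kubelet/pods/<uid>/volume-subpaths/):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each /proc/mounts line: <source> <mountpoint> <fstype> <options> <dump> <pass>
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && strings.Contains(fields[1], "/volume-subpaths/") {
			fmt.Println("leftover subPath mount:", fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}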


