From Docker docs:
If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.
$ docker run -d \
  -it \
  --name devtest \
  -v myvol2:/app \
  nginx:latest
Use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:
"Mounts": [ { "Type": "volume", "Name": "myvol2", "Source": "/var/lib/docker/volumes/myvol2/_data", "Destination": "/app", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ],
This shows that the mount is a volume, it shows the correct source and destination, and that the mount is read-write.
If the Docker volume is not backed by an external volume driver, Docker creates a directory on the host machine to back the volume.
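For the default local driver, the backing directory can be checked directly; a quick sketch (reusing the myvol2 volume from the docs example above):

$ docker volume create myvol2
$ docker volume inspect --format '{{ .Mountpoint }}' myvol2
/var/lib/docker/volumes/myvol2/_data

So anything a container writes into such a volume lands on the host's root filesystem (or wherever /var/lib/docker lives), outside any per-container accounting.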
Kubernetes is implementing a mechanism to control the amount of host disk space a container can use. See the Local Ephemeral Storage Resource Management
design in kubernetes/community#991.
The purpose of this issue is to make sure that Kubernetes accounts for Docker volumes backed by host disk as part of ephemeral-storage.
/assign @jingxu97
CC @Random-Liu @kubernetes/sig-storage-bugs @kubernetes/sig-node-bugs
@saad-ali This is not a regression. Do we want this to be a 1.8 issue?
+1. Doesn't seem like a 1.8 issue.
My understanding is that the ephemeral-storage limit is being introduced in 1.8? @jingxu97?
Also ref kubernetes-incubator/cri-containerd#186 (comment).
We don't use Docker volumes in Kubernetes. However, Dockerfile does support VOLUME: a user could declare a VOLUME and write data into it in the Dockerfile. When Docker runs the container, it creates an anonymous volume for each declared path, mounts it there, and copies the image's existing content at that path into it.
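A minimal sketch of that behavior (image name and paths made up for illustration):

$ cat > Dockerfile <<'EOF'
FROM busybox
RUN mkdir -p /data && echo seed > /data/seed.txt
VOLUME /data
EOF
$ docker build -t volume-demo .
$ docker run -d --name demo volume-demo sleep 3600
$ docker inspect --format '{{ json .Mounts }}' demo
# shows an anonymous "volume" mount at /data, backed by a host directory under
# /var/lib/docker/volumes/<generated-id>/_data that now contains seed.txt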
I believe there are users relying on this behavior; at least mongo:latest and the microservices demo do rely on the VOLUME.
Actually, this is not only about Docker images. Volumes is also supported in the OCI image spec 1.0, which means that all OCI runtimes also need to support it.
Today kubelet is not aware of these host volumes and does not account for them. This means that an application could use host disk arbitrarily without being monitored.
Among the options for handling these Volumes in Kubernetes, option 2 seems like a better near-term solution, and option 4/5 seems like a better long-term solution.
Hostpath volumes are not supported because of the inability to identify sharing patterns. The Docker volumes described in this context are hostpath volumes to me. @jingxu97 correct me if I got it wrong.
k8s doesn't allow sharing of these types of volumes -- it doesn't pass -v when creating a container.
This means that so far these volumes are just private, per-container volumes created by docker and will be destroyed along with the container. This is similar to the emptyDir volumes in kubernetes. It's not that useful unless we explicitly support docker volumes.
The volume section in the OCI image spec is very vague. It seems like this should be the user's choice to decide what kind of volumes to create for the container instance at run time.
I like Option 4 but it seems less viable at this moment. How about a variant of Option 3?
After pulling down the image, kubelet can inspect the image to discover these "volumes". It can then create emptyDir or host volumes based on the default policy or pod spec. Next, it can pass these "special mounts" to the runtime via CRI and runtime would be responsible for copying the data if necessary.
We still have the problem of recording the volumes and surfacing them properly.
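The discovery step itself is cheap; for example (a sketch using the mongo:latest image cited earlier, output shown for illustration), the image-defined volumes are visible in the image config after a pull:

$ docker image inspect --format '{{ json .Config.Volumes }}' mongo:latest
{"/data/configdb":{},"/data/db":{}}

Kubelet (or the runtime) could map each of those paths to an emptyDir-style directory that it already knows how to account for.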
I also prefer letting kubelet make the decision, so that the user can define what the underlying storage should be for the image-defined volumes. By default we could use EmptyDir, and we could define a policy for image-defined Volumes. For other images which don't care about this, the returned list could just be empty.
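Note that a user can already approximate this today by explicitly mounting an emptyDir at the image's VOLUME paths, which takes precedence over the image-defined volumes and keeps the usage inside the ephemeral-storage accounting; a sketch (pod name is made up):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mongo-emptydir
spec:
  containers:
  - name: mongo
    image: mongo:latest
    volumeMounts:
    - name: db
      mountPath: /data/db        # covers the image-defined VOLUME /data/db
    - name: configdb
      mountPath: /data/configdb  # covers the image-defined VOLUME /data/configdb
  volumes:
  - name: db
    emptyDir: {}
  - name: configdb
    emptyDir: {}
EOF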
My understanding is that the ephemeral-storage limit is being introduced in 1.8? @jingxu97?
Is this feature alpha or beta in 1.8? I think it's ok not to account these volumes for an alpha feature even if we do decide to do so.
I'd suggest punting this to the next release or later given that the volumes are not well-supported in kubernetes.
It is alpha so I'm ok with that. Removing from 1.8 milestone.
hostpath is by definition a persistent volume, which has a different lifecycle from the pod. The Docker volumes also exist outside the lifecycle of a given container, so I think they are like hostpath volumes. For our current local ephemeral storage isolation feature, we do not manage hostpath persistent volumes.
For persistent volumes, we suggest using the new local volume feature, which is normally set up on a separate partition for easy management. If the user chooses to share a local volume on the root file system, we cannot manage the usage of the local volumes, at least for now.
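For reference, a local volume on a dedicated partition looks roughly like the sketch below (names and paths are made up; the API shape shown is the one the feature eventually stabilized on):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1   # a separately mounted partition, isolated from the root filesystem
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
EOF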
The Docker volumes also exist outside the lifecycle of a given container, so I think they are like hostpath volumes. For our current local ephemeral storage isolation feature, we do not manage hostpath persistent volumes.
Today we practically treat them as private volumes of the container -- there is no sharing and the volume gets removed along with the container. As mentioned in #52032 (comment), there is no way for pods/containers to share these volumes using the Kubernetes API.
Whether we want to support these volumes and how to support them is a separate question.
hostpath is by definition a persistent volume, which has a different lifecycle from the pod. The Docker volumes also exist outside the lifecycle of a given container, so I think they are like hostpath volumes.
That is not the case for image-specified volumes. For an image-specified volume, Docker creates it when creating the container and deletes it when deleting the container (kubelet chooses to do that by specifying RemoveVolumes).
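On the Docker CLI this corresponds to the -v/--volumes flag of docker rm; a quick sketch reusing the hypothetical volume-demo image from above:

$ docker run -d --name demo volume-demo sleep 3600
$ docker inspect --format '{{ (index .Mounts 0).Name }}' demo   # generated name of the anonymous volume
$ docker rm -f -v demo                                          # -v removes that anonymous volume along with the container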
So I believe the image volumes are still in the scope of Local Ephemeral Storage Resource Management. However, we cannot manage them before we change CRI and kubelet, so I don't think we can fix this in 1.8.
However, we may want to revisit this in 1.9.
fwiw, this has been a discussion for us recently.
for clusters where we needed to control this, we can optionally run the following:
https://github.com/projectatomic/docker-novolume-plugin
but it's aggressive in that it denies the ability to start a container that uses an image volume.
for cri-o, we want the runtime to support "ignore" image volumes, which i think is the right long term behavior for kubernetes.
see the --image-volumes parameter for configuring the cri-o daemon.
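For example (a sketch; the exact spelling of the value may vary by cri-o version):

$ crio --image-volumes ignore
# cri-o then skips creating any mount for paths declared in the image config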
The problem is that there are images relying on the image volume, e.g. mongo:latest and the microservices demo mentioned above.
Given that, we may still need to support this unless we can tell users not to use it.
Since the image volume is part of the OCI image spec now, we may want to properly support it. @yujuhong's comment in #52032 (comment) sounds like a good option to me.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #52032.
@Random-Liu @saad-ali
It seems this feature is still not included in the ephemeral-storage feature. Any plan to enhance this?
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-ephemeral-storage-limits-run
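For context, the ephemeral-storage requests/limits described on that page look like the sketch below (names made up); image-defined Docker volumes are currently not charged against these limits, which is the gap this issue tracks:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        ephemeral-storage: "1Gi"
      limits:
        ephemeral-storage: "2Gi"
EOF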
/reopen
/remove-lifecycle rotten
@Random-Liu: Reopened this issue.
Reopened #52032.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Closed #52032.
/reopen
@wu0407: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@Random-Liu any updates?