While not 1.10 specific: if this is deemed not just a docs issue and not "working as intended" but rather a data consistency issue, @kubernetes/sig-storage-bugs please mark this ASAP if you deem it a release blocker for 1.10.
I'm very curious about this because the docs mention nodes, but don't mention pods at all. So it seems like this is working as intended to me, but I'd really like to know if the behavior is going to change on us.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
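For context, the access mode is requested on the claim spec; below is a minimal sketch of a PVC asking for ReadWriteOnce (the claim name is made up, not from this issue):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwo-claim        # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce              # per the docs above, a node-level restriction
  resources:
    requests:
      storage: 1Gi

Nothing in this claim, as the thread goes on to discuss, stops several pods from mounting it once it is bound.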
ReplicaSet can't ensure the at-most-one guarantee we want.
In order to preserve the availability of a ReplicaSet's pods, the ReplicaSet controller will create a replacement pod when it finds an old pod in the terminating/deleting process.
But all of these operations are asynchronous: a pod in the terminating/deleting process can continue to run for an arbitrary amount of time. As a result, it is quite likely that multiple copies of the same pod end up running concurrently.
Deployment, building atop ReplicaSet, can't offer the at-most-one guarantee either.
AccessModes, as defined today, only describe node attach (not pod mount) semantics, and don't enforce anything.
@orainxiong StatefulSet pods may be better in this regard. Because the pods are recreated with the same stable name, a replacement pod cannot be created until the old one is deleted. However, you could still do things like pod force delete, which won't wait for volumes to gracefully unmount from a node.
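To illustrate the stable-identity point, here is a sketch only (all names are assumptions, not from this thread): with volumeClaimTemplates each StatefulSet replica gets its own PVC, and a replacement for web-0 reuses data-web-0 only after the old web-0 pod is gone.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                      # hypothetical name
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx           # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:          # creates one RWO claim per pod: data-web-0, data-web-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi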
Thanks for the discussion here, I recently thought there was a bug in the documentation as well since it talks about nodes instead of pods for Access Modes.
What concerns me the most here is that, from a user perspective, it is all presented in an abstracted way across the Pod spec, PVC, and PV that makes it look like it's all pod level, but it's not.
In this scenario, a PV has an accessMode: ReadWriteOnce, it is bound to a PVC, and there is currently one Pod bound to the PVC. When a second pod tries to bind to the PVC, I would expect that pod creation to fail/error at that point, even before it may get scheduled to the same node.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #60903.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/remove-lifecycle rotten
@kubanczyk: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Had a similar question which I have posted here https://stackoverflow.com/questions/56592929/how-pods-are-able-to-mount-the-same-pvc-with-readwriteonce-access-mode-when-stor
Had the same question. I can create different pods on different nodes with the same PVC, but I thought this should be an error because I had set the access mode to ReadWriteOnce. Can somebody tell me if this is a bug or if I have a misunderstanding?
[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat mongodb-pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs02
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.46.178.117
    path: "/home/lzx/nfs/nfs01"

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat mongodb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  resources:
    requests:
      storage: 200Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ kubectl get pvc
NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-pvc   Bound    nfs01    200Mi      RWO                           1h

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat busybox-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
        - name: busybox
          image: 10.46.177.91:5000/busybox:latest
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/busybox
          command:
            - sleep
            - "3600"
      volumes:
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-pvc

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE
busybox-bzlkj   1/1     Running   0          59m   172.22.22.216    10.46.179.213
busybox-hqnpn   1/1     Running   1          1h    172.22.55.217    10.46.179.218
busybox-n78tm   1/1     Running   1          1h    172.22.22.193    10.46.179.213
busybox-ncx2z   1/1     Running   0          59m   172.22.154.134   10.46.179.214
busybox-v6tck   1/1     Running   1          1h    172.22.154.133   10.46.179.214
busybox-wzrtf   1/1     Running   0          59m   172.22.55.194    10.46.179.218
/reopen
@zouyee: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Reopened #60903.
So the question is: with RWO, is it that two nodes can't access the same PV, or that two pods can't? Or both? I ran into the same question.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Is there a way to prevent two pods from mounting the same PVC even if they are scheduled to run on the same node?
The answer to my own question is pod anti-affinity. It is not the same as preventing one volume from being mounted by two pods scheduled on the same node, but anti-affinity can be used to ask the scheduler not to run two pods on the same node, and therefore it prevents one volume from being mounted by two pods.
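For example, a required anti-affinity rule in the pod template (the label values here are assumptions) keeps two replicas off the same node, which in turn keeps two pods from mounting the volume on one node:

# Pod template fragment, sketch only; assumes the pods are labelled app=my-app.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname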
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #60903.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
I don't think we reached a conclusion here. If it's indeed intended that multiple pods be able to write to a PV with ReadWriteOnce, we should at least update the docs: add an explanation of the PV lifecycle, and emphasize that the binding process is at the node level and that multiple pods which land on the same node can indeed write to a PV with the ReadWriteOnce access mode.
@0xMH: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
I don't think we reached a conclusion here. If it's indeed intended that multiple pods be able to write to a PV with ReadWriteOnce, we should at least update the docs: add an explanation of the PV lifecycle, and emphasize that the binding process is at the node level and that multiple pods which land on the same node can indeed write to a PV with the ReadWriteOnce access mode.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
These ticket bots are really annoying.
Just ran into this on our infra. I have some deployments with pods which are using RWO claims. I'd like to attach a pod to the same PVC to "load it up with data"... The language in the documentation makes this seem possible...
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
The most common reason to run into RWO issues is always around provisioning data into them. The "delete a PVC and ask questions later" model of volumes, mixed with the common inability to "reattach" dynamic PV claims, means people are often looking to restore data in a hurry.
In theory, it should be possible to run multiple pods against a single PVC on the same node, Deployment or otherwise.
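As a sketch of that idea (the label and claim names are assumptions, not from this thread): a one-off loader pod can use pod affinity to land on the node already running the workload, and then mount the same RWO claim.

# Hypothetical loader pod; assumes the running workload is labelled app=my-app
# and uses a claim named my-data. This only works because both pods share a node.
apiVersion: v1
kind: Pod
metadata:
  name: data-loader
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
  containers:
    - name: loader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /restore
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data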
Ongoing work to improve access modes: take a look at https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/
See the feature issue with a link to the KEP here: kubernetes/enhancements#2485
It's still in alpha, so if there are changes/improvements you want to see, provide feedback there, or better yet, come join SIG Storage and help us move it to beta/GA.
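For reference, the new mode is requested on the claim like any other access mode (sketch; the claim name is made up), and it restricts access to a single pod rather than a single node:

# Sketch of a claim using ReadWriteOncePod, which was alpha at the time of the
# linked blog post and needs the feature gate plus CSI driver support.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim      # hypothetical name
spec:
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi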
@0xMH wrote
I don't think we reached a conclusion in here. If it's indeed intended that Multiple pods be able to write to a PV with ReadWriteOnce
The blog linked by @saad-ali above, i.e. https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/, clearly states:
The ReadWriteOnce access mode restricts volume access to a single node, which means it is possible for multiple pods on the same node to read from and write to the same volume.
However, ...
We should at least update the docs (...) if multiple pods which land on the same node can indeed write to a PV with
ReadWriteOnce Access Mode.
The current documentation on ReadWriteOnce still leaves room for misinterpretation in the "multiple pods to access the volume" bit.
It happens that Kubernetes content creators make mistakes and share incorrect or inaccurate information.