Re: [kubernetes/kubernetes] Multiple pods able to write to PV with accessMode of ReadWriteOnce (#60903)


Tim Pepper

unread,
Mar 9, 2018, 1:26:53 PM3/9/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

While not 1.10 specific: if this is deemed not just a docs issue and not "working as intended", but rather a data consistency issue, @kubernetes/sig-storage-bugs please mark this ASAP if you consider it a release blocker for 1.10.



Justin

unread,
Mar 22, 2018, 2:14:23 AM3/22/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

I'm very curious about this because the docs mention nodes, but don't mention pods at all. So it seems like this is working as intended to me, but I'd really like to know if the behavior is going to change on us.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

    ReadWriteOnce – the volume can be mounted as read-write by a single node
    ReadOnlyMany – the volume can be mounted read-only by many nodes
    ReadWriteMany – the volume can be mounted as read-write by many nodes

orain

unread,
Mar 28, 2018, 7:44:49 PM3/28/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

ReplicaSet can't ensure the at-most-one guarantee we want.

To preserve the availability of its pods, a ReplicaSet will create a replacement pod as soon as it finds an old pod in the process of terminating or being deleted.

But all of these operations are asynchronous: a pod that is being terminated can continue to run for an arbitrary amount of time. As a result, it is quite likely that multiple copies of the same pod end up running concurrently.

Deployment, which builds on top of ReplicaSet, can't offer the at-most-one guarantee either.

@ksexton

Michelle Au

unread,
Mar 28, 2018, 9:00:02 PM3/28/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

AccessModes, as defined today, only describe node attach (not pod mount) semantics, and don't enforce anything.

Michelle Au

unread,
Mar 28, 2018, 9:01:07 PM3/28/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@orainxiong StatefulSet pods may be better in this regard. Because the pods are recreated with the same stable name, a replacement pod cannot be created until the old one is deleted. However, you could still do things like pod force delete, which won't wait for volumes to gracefully unmount from a node.
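
As a minimal sketch of that pattern (the names and image are hypothetical, not from this issue), a StatefulSet with a volumeClaimTemplate gives each replica a stable identity and its own PVC, so a replacement pod reuses the same name and claim rather than running alongside the old one:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db               # hypothetical name
spec:
  serviceName: example-db
  replicas: 1
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: mongo:4.2         # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Because the replacement pod must also be named example-db-0, it cannot be created while the old example-db-0 still exists (force delete excepted, as noted above).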

orain

unread,
Mar 28, 2018, 11:26:53 PM3/28/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@msau42 Thanks for your suggestion.

But even with a StatefulSet, it is still quite possible to end up with two copies of the same pod during a cluster partition.

There should be a fencing policy that can reconcile a master-node partition and protect against data corruption.

More details: #61832

Ryan Schneider

unread,
Jun 1, 2018, 12:31:08 PM6/1/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Thanks for the discussion here. I also recently thought there was a bug in the documentation, since it talks about nodes instead of pods for access modes.

What concerns me most is that, from a user perspective, the abstraction across the Pod spec, PVC, and PV makes it all look like it operates at the pod level, but it doesn't.

In this scenario, a PV has accessMode: ReadWriteOnce, it is bound to a PVC, and there is currently one Pod using that PVC. When a second pod tries to use the PVC, I would expect that pod's creation to fail/error at that point, even before it might get scheduled to the same node.

fejta-bot

unread,
Aug 30, 2018, 1:04:03 PM8/30/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

unread,
Sep 29, 2018, 1:26:31 PM9/29/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

unread,
Oct 29, 2018, 2:19:53 PM10/29/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot

unread,
Oct 29, 2018, 2:19:59 PM10/29/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Closed #60903.

k8s-ci-robot

unread,
Oct 29, 2018, 2:19:59 PM10/29/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

kubanczyk

unread,
Dec 21, 2018, 3:12:13 PM12/21/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/reopen
/remove-lifecycle rotten

Kubernetes Prow Robot

unread,
Dec 21, 2018, 3:12:28 PM12/21/18
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@kubanczyk: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

melwynjensen

unread,
Jun 14, 2019, 5:49:03 AM6/14/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Luzhenxing

unread,
Jun 25, 2019, 11:17:11 PM6/25/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Had the same question. I can create different pods on different nodes with the same PVC, but I thought this should be an error because I had set the accessMode to ReadWriteOnce. Can somebody tell me if this is a bug or a misunderstanding on my part?

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat mongodb-pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs02
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.46.178.117
    path: "/home/lzx/nfs/nfs01"
[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat mongodb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  resources:
    requests:
      storage: 200Mi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
[root@controller213:~/kubernetes-in-action-master/Chapter06]$ kubectl get pvc
NAME          STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-pvc   Bound     nfs01     200Mi      RWO                           1h
[root@controller213:~/kubernetes-in-action-master/Chapter06]$ cat busybox-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: 10.46.177.91:5000/busybox:latest
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/busybox
        command:
         - sleep
         - "3600"
      volumes:
      - name: mongodb-data
        persistentVolumeClaim:
          claimName: mongodb-pvc

[root@controller213:~/kubernetes-in-action-master/Chapter06]$ kubectl get pods -o wide
NAME            READY     STATUS    RESTARTS   AGE       IP               NODE
busybox-bzlkj   1/1       Running   0          59m       172.22.22.216    10.46.179.213
busybox-hqnpn   1/1       Running   1          1h        172.22.55.217    10.46.179.218
busybox-n78tm   1/1       Running   1          1h        172.22.22.193    10.46.179.213
busybox-ncx2z   1/1       Running   0          59m       172.22.154.134   10.46.179.214
busybox-v6tck   1/1       Running   1          1h        172.22.154.133   10.46.179.214
busybox-wzrtf   1/1       Running   0          59m       172.22.55.194    10.46.179.218

Zou Nengren

unread,
Jul 15, 2019, 2:50:58 AM7/15/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/reopen

Kubernetes Prow Robot

unread,
Jul 15, 2019, 2:50:59 AM7/15/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@zouyee: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

unread,
Jul 15, 2019, 2:51:05 AM7/15/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Reopened #60903.

TsubasaEX

unread,
Jul 19, 2019, 5:46:54 AM7/19/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

So is the question that two nodes can't access the same PV with RWO, or that two pods can't? Or both? I ran into the same question.

fejta-bot

unread,
Oct 17, 2019, 5:48:18 AM10/17/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale



Joshua Kugler

unread,
Oct 17, 2019, 12:15:29 PM10/17/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/remove-lifecycle stale

Viktor Bogdanov

unread,
Dec 6, 2019, 3:28:12 AM12/6/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Is there a way to prevent two pods from mounting the same PVC even if they are scheduled to run on the same node?

Viktor Bogdanov

unread,
Dec 8, 2019, 11:36:20 PM12/8/19
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

The answer to my own question is pod anti-affinity. It is not exactly the same as preventing one volume from being mounted into two pods on the same node, but anti-affinity can be used to ask the scheduler not to run two such pods on the same node, which in turn prevents one volume from being mounted into two pods.
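
For example, a minimal sketch (the example-app label and names are hypothetical, not from this thread): a required pod anti-affinity rule keyed on the node hostname keeps two pods carrying the same label off the same node, and the RWO node-level attach limit then keeps a second pod from mounting the volume.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      affinity:
        podAntiAffinity:
          # never co-schedule two pods labeled app=example-app on one node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: example-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]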

fejta-bot

unread,
Mar 8, 2020, 12:29:35 AM3/8/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Joshua Kugler

unread,
Mar 8, 2020, 4:01:48 PM3/8/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/remove-lifecycle stale

fejta-bot

unread,
Jun 6, 2020, 4:54:00 PM6/6/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

unread,
Jul 6, 2020, 5:36:16 PM7/6/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

unread,
Aug 5, 2020, 6:17:15 PM8/5/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

unread,
Aug 5, 2020, 6:17:27 PM8/5/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Closed #60903.

Kubernetes Prow Robot

unread,
Aug 5, 2020, 6:17:28 PM8/5/20
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.


Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

NoNE

unread,
Jan 1, 2021, 12:08:17 PM1/1/21
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/reopen

I don't think we reached a conclusion here. If it's indeed intended that multiple pods be able to write to a PV with ReadWriteOnce, we should at least update the docs with an explanation of the PV lifecycle, emphasizing that the binding is enforced at the node level and that multiple pods which land on the same node can indeed write to a PV with the ReadWriteOnce access mode.

Kubernetes Prow Robot

unread,
Jan 1, 2021, 12:08:30 PM1/1/21
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@0xMH: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

I don't think we reached a conclusion here. If it's indeed intended that multiple pods be able to write to a PV with ReadWriteOnce, we should at least update the docs with an explanation of the PV lifecycle, emphasizing that the binding is enforced at the node level and that multiple pods which land on the same node can indeed write to a PV with the ReadWriteOnce access mode.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Alex von Gluck IV

unread,
Feb 18, 2022, 12:36:11 PM2/18/22
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

These ticket bots are really annoying.

Just ran into this on our infra. I have some deployments with pods which are using RWO claims. I'd like to attach a pod to the same PVC to "load it up with data"... The language in the documentation makes this seem possible...

    ReadWriteOnce – the volume can be mounted as read-write by a single node

    ReadOnlyMany – the volume can be mounted read-only by many nodes

    ReadWriteMany – the volume can be mounted as read-write by many nodes

The most common reason to run into RWO issues is always around provisioning data into them. The "delete a PVC and ask questions later" model of volumes, mixed with the common inability to "reattach" dynamic PV claims, means people are often looking to restore data in a hurry.

In theory, it should be possible to run multiple pods against a single PVC on the same node, Deployment or otherwise.
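
As a rough sketch of that "load it up with data" workflow (all names here are hypothetical, not from this thread): a one-off helper pod can reference the same claim and use pod affinity to land on whatever node is already running the workload, which is what RWO actually requires.

apiVersion: v1
kind: Pod
metadata:
  name: data-loader                # hypothetical one-off helper pod
spec:
  affinity:
    podAffinity:
      # co-schedule with the existing workload so the RWO volume can attach
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: example-app       # assumed label on the Deployment's pods
        topologyKey: kubernetes.io/hostname
  containers:
  - name: loader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc       # assumed name of the existing RWO claim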



Saad Ali

unread,
Feb 22, 2022, 12:06:10 AM2/22/22
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Ongoing work to improve access modes: take a look at https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/

See the feature issue, with a link to the KEP, here: kubernetes/enhancements#2485

It's still in alpha, so if there are changes/improvements you want to see, provide feedback there, or better yet, come join SIG Storage and help us move it to beta/GA.
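
For reference, the new mode is requested on the PVC itself; a minimal sketch based on that blog post (the claim name and size are placeholders), which at the time required a CSI driver and the alpha ReadWriteOncePod feature gate:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-pvc          # placeholder name
spec:
  accessModes:
  - ReadWriteOncePod               # only a single pod across the cluster may use this volume
  resources:
    requests:
      storage: 1Gi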



Mateusz Łoskot

unread,
Jan 15, 2025, 8:30:32 AMJan 15
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@0xMH wrote

I don't think we reached a conclusion here. If it's indeed intended that multiple pods be able to write to a PV with ReadWriteOnce

The blog post linked by @saad-ali above, i.e. https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/, clearly states:

The ReadWriteOnce access mode restricts volume access to a single node, which means it is possible for multiple pods on the same node to read from and write to the same volume.

However, ...

We should at least update the docs (...) if multiple pods which land on the same node can indeed write to a PV with ReadWriteOnce Access Mode.

the current documentation on ReadWriteOnce still leaves room for misinterpretation in the "multiple pods to access the volume" part.

It happens that Kubernetes content creators make mistakes and share incorrect or inaccurate information.


