Re: [kubernetes/kubernetes] How to create a kubernetes NFS volume on Google Container Engine (#44377)


Saad Ali

Apr 12, 2017, 8:15:24 PM

CC @kubernetes/sig-storage-bugs



mappedinn

Apr 13, 2017, 1:01:31 AM

I am using

  • gcloud 150.0.0
  • kubectl v1.6.0 for client
  • kubectl v1.5.6 for server

Andrei

Apr 13, 2017, 9:07:54 AM

@mappedinn if you want to configure NFSv4, you can follow the tutorial at https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/nfs/README.md

BUT:

Edit examples/volumes/nfs/nfs-pv.yaml and change the last line to path: "/".

Edit examples/volumes/nfs/nfs-server-rc.yaml and change the image to the one that enables NFSv4:
image: gcr.io/google_containers/volume-nfs:0.8
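
For reference, here is roughly what the edited nfs-pv.yaml ends up looking like (a sketch abridged from the upstream example; use your own NFS service IP):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.x.x.x
    path: "/"    # was "/exports"

In nfs-server-rc.yaml only the container image changes, to image: gcr.io/google_containers/volume-nfs:0.8 as above.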

It works fine with COS node images on Kubernetes 1.6.0.

mappedinn

Apr 13, 2017, 9:38:21 AM

Thanks @ToGoBananas.

I will try the proposed changes and report back...

Michelle Au

Apr 14, 2017, 7:17:14 PM

The NFSv3 issue will be fixed in GKE v1.5.7, which is not out yet.

Dmitry S. Vlasov

Apr 19, 2017, 5:39:17 PM

I have the same issue (and almost exactly the same configuration as https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs).

Changing the image to gcr.io/google_containers/volume-nfs:0.8 and the path to / does not help.

It looks like the "consumer" containers (NFS clients) work fine only if they are spawned on the same node where the first successful mount happened:

In my case I have a two-node cluster with 1 x nfs-server and 2 x php-nginx. Every time I bring up both php-nginx pods, one of them gets stuck in "ContainerCreating" with this error message:

Failed to attach volume "pvc-4cbf3696-24f4-11e7-b79b-42010a800230" on node "gke-team-cluster-default-pool-392d2245-0qwc" with: googleapi: Error 400: The disk resource 'projects/team/zones/us-central1-a/disks/gke-team-cluster--pvc-4cbf3696-24f4-11e7-b79b-42010a800230' is already being used by 'projects/team/zones/us-central1-a/instances/gke-team-cluster-default-pool-392d2245-f7ps'
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"frontend-1178593038-bcpjk". list of unattached/unmounted volumes=[nfs-share]

If I delete the failed pod and it is rescheduled onto the "right" node, everything is OK.

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
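
A quick check (not from this thread, just a suggestion) to see whether the claim really bound the hand-made NFS PV, or whether a GCE persistent disk was dynamically provisioned instead:

$ kubectl get pvc
$ kubectl get pv
# A dynamically provisioned GCE PD shows up as a PV named pvc-<uid> with a
# gcePersistentDisk source, instead of the hand-made PV with an nfs: source.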

mappedinn

Apr 20, 2017, 2:05:41 PM

I tried version 0.8 of the nfs-server image, but that tag is not available, as can be seen below:

$ kubectl describe pods nfs-server-3780251807-qv0rg 0s
Name:		nfs-server-3780251807-qv0rg
Namespace:	default
Node:		gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb/10.240.0.2
Start Time:	Thu, 20 Apr 2017 21:52:26 +0400
Labels:		pod-template-hash=3780251807
		role=nfs-server
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nfs-server-3780251807","uid":"1d34e3d1-25f2-11e7-8708-42010a8e01...
		kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nfs-server
Status:		Pending
IP:		10.244.2.3
Controllers:	ReplicaSet/nfs-server-3780251807
Containers:
  nfs-server:
    Container ID:	
    Image:		gcr.io/google-samples/nfs-server:0.8
    Image ID:		
    Ports:		2049/TCP, 20048/TCP, 111/TCP
    State:		Waiting
      Reason:		ImagePullBackOff
    Ready:		False
    Restart Count:	0
    Requests:
      cpu:		100m
    Environment:	<none>
    Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lgzv2 (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  mypvc:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc
    ReadOnly:	false
  default-token-lgzv2:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-lgzv2
    Optional:	false
QoS Class:	Burstable
Node-Selectors:	<none>
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From								SubObjectPath			Type		Reason		Message
  ---------	--------	-----	----								-------------			--------	------		-------
  7m		7m		1	default-scheduler										Normal		Scheduled	Successfully assigned nfs-server-3780251807-qv0rg to gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb
  7m		1m		6	kubelet, gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb	spec.containers{nfs-server}	Normal		Pulling		pulling image "gcr.io/google-samples/nfs-server:0.8"
  7m		1m		6	kubelet, gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb	spec.containers{nfs-server}	Warning		Failed		Failed to pull image "gcr.io/google-samples/nfs-server:0.8": Tag 0.8 not found in repository gcr.io/google-samples/nfs-server
  7m		1m		6	kubelet, gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb					Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "nfs-server" with ErrImagePull: "Tag 0.8 not found in repository gcr.io/google-samples/nfs-server"

  7m	6s	29	kubelet, gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb	spec.containers{nfs-server}	Normal	BackOff		Back-off pulling image "gcr.io/google-samples/nfs-server:0.8"
  7m	6s	29	kubelet, gke-mappedinn-cluster-default-pool-d69e6f8b-wpvb					Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "nfs-server" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google-samples/nfs-server:0.8\""

Error from server (NotFound): pods "0s" not found

mappedinn

Apr 20, 2017, 3:17:40 PM

Closed #44377.

mappedinn

Apr 20, 2017, 3:17:58 PM

Problem solved...

I just used the image gcr.io/google_containers/volume-nfs:0.8 and created the following PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.247.249.98
    path: "/"     # it was "/exports"

Thanks for being helpful...

Jonathan Cope

Apr 24, 2017, 4:03:48 PM

@mappedinn Do you know why exporting / works instead of /exports?

mappedinn

May 1, 2017, 8:26:47 PM

Sorry @copejon for the late reply.

Honestly, I don't know why. This is the error I get if I use /exports:

MountVolume.SetUp failed for volume "kubernetes.io/nfs/2736fdad-2ecc-11e7-99e0-42010af00037-nfs" (spec.Name: "nfs") pod "2736fdad-2ecc-11e7-99e0-42010af00037" (UID: "2736fdad-2ecc-11e7-99e0-42010af00037") with: mount failed: exit status 32 Mounting command: /home/kubernetes/bin/mounter Mounting arguments: 10.247.247.158:/exports /var/lib/kubelet/pods/2736fdad-2ecc-11e7-99e0-42010af00037/volumes/kubernetes.io~nfs/nfs nfs [] Output: Running mount using a rkt fly container run: group "rkt" not found, will use default gid when rendering images mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use '-o nolock' to keep locks local, or start statd. mount.nfs: an incorrect mount option was specified
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-2762569073-5n4qp". list of unattached/unmounted volumes=[my-pvc-nfs]

If you have any explanation, I would be happy to hear it.
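
For what it's worth, the mount error above suggests '-o nolock' as a workaround. That is not what this thread ended up using (the fix here was the NFSv4 image and path "/"), but on a Kubernetes version that supports PV mount options it could be expressed like this (a sketch only):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nolock    # keeps locks local, so rpc.statd is not required
  nfs:
    server: 10.247.247.158
    path: "/exports"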

BA Aliou

May 17, 2017, 10:38:22 AM

+1

mappedinn

May 21, 2017, 12:15:22 AM

Hi @copejon

I understood how NFS works. All the content of the NFS volume is stored under /exports. NFS works this way:

  • content with path / on the PV ends up in /exports on the NFS server
  • content with path /opt/data/ on the PV ends up in /exports/opt/data/ on the NFS server
  • etc.

For the second example (path /opt/data), you have to connect to the NFS server pod and create those folders first:

$ kubectl exec -ti nfs-server-45sf75-4564 bash
$ mkdir -p /exports/opt
$ mkdir -p /exports/opt/data
Then use the matching path in the PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.247.250.208
    path: "/opt/data/"

Hope it helps.

Erik

Jul 6, 2017, 11:55:41 AM

The problem I am having is that GKE creates a new PV for the client-side PVC instead of using the NFS one I created. The result is that a new disk shows up in GCE, client pods not on the node the PV was attached to error out, and files created on the attached volumes do not show up on the NFS server.

The example seems to expect that the PVC created with nfs-pvc.yaml will automatically match the PV created in nfs-pv.yaml. But it does not appear to match, so the claim falls back to default dynamic provisioning.

Here are my nfs-pv and nfs-pvc entries:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
#    server: nfs-server
    server: 10.59.248.162
    path: "/"    # it was "/exports"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

The 1Gi disk shows up in GCE when the PVC is created.

Jonathan Cope

Jul 10, 2017, 10:30:48 AM

The example PVC is missing an important line:

...
"metadata": {
   ...
    "annotations": {
        "volume.beta.kubernetes.io/storage-class": ""  // empty string value signals the controller to not dynamicall provision
    }
  },

Since 1.6(?), Kubernetes has had dynamic provisioning turned on by default. So when a PVC is created without the volume.beta.kubernetes.io/storage-class key, it assumes the default storage class and dynamically provisions and binds a new volume.

To disable this behavior, set the key value as "volume.beta.kubernetes.io/storage-class": "".
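
Applied to the example claim, that would look roughly like this (a sketch; the names follow the upstream nfs-pvc.yaml):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  annotations:
    # empty string disables dynamic provisioning for this claim
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi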

Erik Sundell

Dec 28, 2017, 3:07:44 PM

UPDATE: Use storageClassName instead of volume.beta.kubernetes.io/storage-class.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation still works; however, it will be fully deprecated in a future Kubernetes release.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
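
So the claim above becomes, with the field instead of the annotation (same sketch as before):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: ""    # empty string: bind to a pre-created PV, do not dynamically provision
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi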

Daniel

May 22, 2019, 3:43:47 PM

@z1nkum did you ever get to the bottom of why the NFS consumers must be on the same node?

I am having a similar issue, but with more nodes/pools; as you've explained, with an NFS-backed PV/PVC only pods scheduled to the same node as the "nfs-server" pod are able to mount correctly.

Charles Thayer

Oct 23, 2019, 7:36:13 PM

Hmm, setting storageClassName to the empty string didn't work for me on GKE.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-pvc-staging
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi

:::: fang 04:26:48 (cgt-publish-recovery) 0 twilio; kubectl describe pvc shared-pvc-staging
Name:          shared-pvc-staging
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type       Reason         Age               From                         Message
  ----       ------         ----              ----                         -------
  Normal     FailedBinding  8s (x4 over 40s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:  <none>



Charles Thayer

Oct 24, 2019, 11:44:38 AM

Ok, now in Oct 2019, it appears that for GKE the right approach is

  1. create a filestore volume
  2. create a k8s PersistentVolume
  3. finally create the k8s PersistentVolumeClaim

Details are here: https://cloud.google.com/filestore/docs/accessing-fileshares
For my purposes this doesn't work because the minimum charge is around US$200/month for 1 TB, and I just need a tiny 10 GB backup for some small shared ReadWriteMany data :-/

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: [STORAGE]
  accessModes:
  - ReadWriteMany
  nfs:
    path: /[FILESHARE]
    server: [IP_ADDRESS]

and

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: [STORAGE]


Michelle Au

Oct 24, 2019, 12:13:59 PM

@cgthayer, you should still be able to bring your own NFS on GKE, but you need to create the PVs beforehand, or use something like the nfs-client provisioner. If you're still having issues, can you open a new issue, since this one has been resolved?
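
For example, a pre-created NFS PV that a claim with storageClassName: "" can bind to might look like this (a sketch only; the name, size, and server IP are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 20Gi          # must be at least the claim's request
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # matches a claim that also sets storageClassName: ""
  nfs:
    server: 10.0.0.2       # placeholder: your NFS server or service IP
    path: "/"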

nikhilbhalwankar

Nov 12, 2020, 4:06:22 AM

(Quoting Erik's earlier comment above: GKE creates a new PV for the client-side PVC instead of using the NFS one, because the example PVC does not match the hand-made PV and falls back to dynamic provisioning. The quote also notes that the nfs-server itself looked fine with a 10 Gi volume attached to /exports, that switching to gcr.io/google_containers/volume-nfs:0.8 had no effect, and that forcing a match by adding a label to the NFS PV and using matchLabels on the PVC left the PVC stuck in "pending" even though the NFS PV was "available", on GKE 1.6.4.)

I tried this out on my GKE environment. However, I attached two different persistent disks as follows:

mkdir /exports/data1 -> mounted Disk 1 on this folder
mkdir /exports/data2 -> mounted Disk 2 on this folder

The contents are correctly visible after mounting inside the NFS pod, but when these paths are mounted in the worker pods, the contents show up as blank. What can be done to resolve this issue?

nikhilbhalwankar

Nov 12, 2020, 4:34:51 AM

(Quoting mappedinn's earlier explanation above of how PV paths map onto /exports inside the NFS server, and repeating the question about the two mounted persistent disks.)
