Re: [kubernetes/kubernetes] volumes/nfs example: service name instead hardcoded IP (#44528)

Saad Ali

unread,
Apr 21, 2017, 6:35:26 PM4/21/17
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

CC @kubernetes/sig-storage-feature-requests



fejta-bot

unread,
Dec 24, 2017, 12:32:32 PM12/24/17
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot

unread,
Jan 23, 2018, 1:20:19 PM1/23/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.

/lifecycle rotten
/remove-lifecycle stale

June Tate-Gans

unread,
Feb 4, 2018, 2:49:36 AM2/4/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Ran into this myself while attempting to configure my StorageClasses to talk to a Heketi GlusterFS pod by service name. When I rebooted my cluster, the cluster IP changed, which broke my GlusterFS storage setup.

Michelle Au

unread,
Feb 4, 2018, 12:41:38 PM2/4/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I believe the issue is the node's resolv.conf needs to be configured to point to Kubernetes' dns service.

This is a host configuration that needs to be done per deployment. I believe we do it automatically for GCE/GKE but I'm unsure about other environments. cc @jingxu97
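
As an illustration of that host-level configuration, here is a sketch of what a node's /etc/resolv.conf might look like. The 10.96.0.10 address is a common default for the kube-dns/CoreDNS ClusterIP and is an assumption here, not a value taken from this thread; substitute your cluster's actual DNS service IP (e.g. from `kubectl -n kube-system get svc kube-dns`).

```
# /etc/resolv.conf on each node (sketch)
# Cluster DNS first, so *.svc.cluster.local names resolve on the host:
nameserver 10.96.0.10
search svc.cluster.local cluster.local
# Fall back to an upstream resolver for everything else:
nameserver 8.8.8.8
```

Note that, as discussed below, this only works if the node's network can actually reach the cluster DNS service IP, which depends on the CNI setup.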

June Tate-Gans

unread,
Feb 4, 2018, 1:42:45 PM2/4/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention
So how would I set up the resolv.conf files on each node to resolve
against kube-dns? The service for it is set up as a ClusterIP service, and in a CNI
environment like flannel, that becomes inaccessible from outside of a
container.

Can I edit the service to set it up as a NodePort and then point
resolv.conf at every node in the cluster?



Michelle Au

unread,
Feb 5, 2018, 2:22:39 PM2/5/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hm maybe @kubernetes/sig-network-misc knows something that can be done here.

The problem is that volume mounts are done by the kubelet, so the NFS server's IP/hostname needs to be resolvable and reachable from the kubelet's network.

June Tate-Gans

unread,
Feb 5, 2018, 2:32:54 PM2/5/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Which on a baremetal deployment should be the node's/host's network, correct?

Michelle Au

unread,
Feb 5, 2018, 2:35:13 PM2/5/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Correct. So if your NFS server is being provided by a Pod, then you need the node/host network to be able to reach the Pod's network, which, as you pointed out, can be tricky depending on how you've configured your networking.

June Tate-Gans

unread,
Feb 5, 2018, 2:46:46 PM2/5/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Okay, so I see two solutions to this problem, then:

  1. Adjust the Service for kube-dns from ClusterIP to NodePort and adjust the node's /etc/resolv.conf to point to the local IPs to get name resolution working.
  2. Adjust the Service for NFS or GlusterFS (in my case) from ClusterIP to NodePort, and then change the StorageClass to point to one of the node's static IPs in the node's subnet.

Of the two, the first seems like the more generic solution for getting name resolution working across the cluster, but it may have unintended side effects if anything expects kube-dns to remain a ClusterIP service. The second solves this specific problem. I'll try the second option when I get home tonight and report back.

We may want to update the public-facing docs to mention that StorageClass definitions resolve against the kubelet's network so that others don't run aground when trying to set this up.
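
Option 2 above could be sketched roughly like this (names and port numbers are illustrative, not taken from this thread; NFSv3 setups may additionally need the rpcbind/mountd ports, e.g. 111, exposed the same way):

```yaml
# Sketch: expose the NFS service on a NodePort so the kubelet
# (which mounts from the host network) can reach it via any node IP.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: NodePort
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
      nodePort: 32049   # must fall in the NodePort range (30000-32767 by default)
```

The PV would then point at <some-node-ip>:32049 instead of the ClusterIP.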

yaumeg

unread,
Feb 20, 2018, 6:09:48 PM2/20/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Did you have a chance to experiment a bit? For solution 2, NodePort allocates ports in the 30000-32767 range, so it seems you'd also have to change the NFS PV's default ports (2049 and 111)?

I have a baremetal setup with flannel, and editing resolv.conf doesn't work, because like you pointed out, my nodes don't have access to container's network.

June Tate-Gans

unread,
Feb 24, 2018, 10:10:35 PM2/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Unfortunately, I can't /edit/ the restUrl for my storageclass because "updating parameters is illegal"... I may end up losing data by deleting the storage class and recreating it with the right url.

June Tate-Gans

unread,
Feb 24, 2018, 10:18:14 PM2/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Fortunately my analysis was wrong -- didn't lose any data at all, thankfully.

So what I've done is update my Services to be NodePorts, exposed them on port 32708, and set my resturls in the storageclasses to http://<random-node-ip-from-cluster>:32708. This allows things to continue to work, but there are two major downsides now:

  1. My REST endpoint is available from outside of the cluster.
  2. If the node that I chose in the rest URL falls over, my storage provisioner becomes unreachable, because I can't edit the resturls once I've set them.

But things work at least.

Zihong Zheng

unread,
Feb 24, 2018, 10:42:42 PM2/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

When I rebooted my cluster, the cluster IP changed, which broke my glusterfs storage solution.

@jtgans Sorry if I'm missing the point, but why does rebooting the cluster change the cluster IP? Did the NFS service get deleted and recreated? If so, what about giving the NFS service a fixed IP in the manifest and keeping it as type=ClusterIP?
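
The fixed-IP suggestion might look like the sketch below. The service name and IP are illustrative; the pinned clusterIP must fall inside your cluster's service CIDR, or the Service will be rejected.

```yaml
# Sketch: pin the Service's ClusterIP in the manifest so it survives
# a delete/recreate cycle and can be safely hardcoded in a PV.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: ClusterIP
  clusterIP: 10.96.100.50   # must be a free address within the service CIDR
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
```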

June Tate-Gans

unread,
Feb 25, 2018, 11:19:11 AM2/25/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I actually didn't realize that ClusterIP services could specify their IPs in the definition. I'll give this a try today.

Pretty sure I didn't recreate the service post reboot, but it's been a while since I restarted the cluster.

mtricolici

unread,
Mar 14, 2018, 9:18:10 AM3/14/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Use the full service name; it works fine:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-zuzu
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.build.svc.cluster.local
    path: "/"

fejta-bot

unread,
Apr 13, 2018, 9:29:43 AM4/13/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot

unread,
Apr 13, 2018, 9:30:00 AM4/13/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Closed #44528.

朱聖黎

unread,
Jun 15, 2018, 5:02:37 AM6/15/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mtricolici Tried. Didn't work.

Sjoerd Wenker

unread,
Aug 6, 2018, 5:06:15 AM8/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mtricolici @digglife 'build' should be replaced by the namespace.
E.g. when using no namespace (default), the value for server should be nfs-server.default.svc.cluster.local

Paul Mazzuca

unread,
Aug 22, 2018, 11:41:29 AM8/22/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I have also tried using the service name, and that did not work. Can you clarify the kubectl command that gets the "full" service name that resolves?

Paul Mazzuca

unread,
Aug 22, 2018, 11:53:27 AM8/22/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Take that back. I just got it to work using "{service-name}.{namespace}.svc.cluster.local". I did not realize that svc.cluster.local was always the same.

wsourdin

unread,
May 15, 2019, 1:01:27 PM5/15/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I just tried {service-name}.{namespace}.svc.cluster.local on EKS and it doesn't work.

e.g. service name nfs-service on default namespace

I got

mount.nfs: Failed to resolve server nfs-service.default.svc.cluster.local: Name or service not known

Is there still no fix or clean workaround available?

Bruno Ferreira

unread,
May 21, 2019, 9:30:40 AM5/21/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention


I'm also having this issue on EKS.

Mike Daniel

unread,
May 25, 2019, 6:17:34 AM5/25/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Also having this issue on Digital Ocean.

rjohnson3

unread,
Jul 2, 2019, 1:10:58 PM7/2/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

/reopen

Kubernetes Prow Robot

unread,
Jul 2, 2019, 1:11:15 PM7/2/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@rjohnson3: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Andrew Meredith

unread,
Jul 10, 2019, 5:33:22 PM7/10/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Also having this issue on GKE

noeliajimenezg

unread,
Jul 30, 2019, 4:59:26 AM7/30/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Same issue in OpenStack.

Lin Zhiqiang

unread,
Sep 12, 2019, 4:35:52 AM9/12/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Same issue on NFS

Wang Weiming

unread,
Sep 17, 2019, 4:21:27 AM9/17/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention


I'm also having this issue on AKS.

Kubernetes Prow Robot

unread,
Sep 17, 2019, 4:21:42 AM9/17/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@will-beta: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Wang Weiming

unread,
Sep 17, 2019, 4:21:53 AM9/17/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

/reopen

cl4u2

unread,
Nov 4, 2019, 10:27:54 AM11/4/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Same issue on CDK.



Goran

unread,
Feb 17, 2020, 4:18:33 PM2/17/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

From what I just learned, this is an issue only on non-GKE Kubernetes. Can't wait for an upstream fix so we can get proper service DNS name resolution on all providers.

Any progress perhaps?

Cole Arendt

unread,
Feb 23, 2020, 5:55:09 AM2/23/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Goran

unread,
Feb 23, 2020, 4:09:03 PM2/23/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Yeah, regardless, the question still applies, as there was no upstream solution at the time the docs were written.

What does GKE do that allows for this different behavior from upstream Kubernetes?

Marek00Malik

unread,
Nov 23, 2020, 5:33:18 AM11/23/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Has there been any work on this?

h0jeZvgoxFepBQ2C

unread,
Dec 15, 2020, 4:16:07 AM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@saad-ali @thockin Could you reopen this issue please?

It's not possible for non-maintainers to reopen it.

h0jeZvgoxFepBQ2C

unread,
Dec 15, 2020, 4:26:04 AM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Did anyone find a solution for this? It really surprises me that this (IMO big) issue hasn't been resolved in 3 years.
How do people cope with this situation if they want to run multiple NFS servers? You can't always hardcode the IPs.

Any suggestions on how to work around this?
Specifying nfs-service.default.svc.cluster.local didn't work out for us.

Goran

unread,
Dec 15, 2020, 4:51:30 AM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I'm not sure, but could using an ExternalName service be a viable solution?

I was under the impression that this particular object was created to solve these issues. I haven't tried it yet, but I would welcome feedback from those who did, regardless of the outcome.
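
For reference, an ExternalName Service is just a DNS CNAME, along these lines (the names here are illustrative). One caveat worth noting: since it relies on cluster DNS to resolve, it wouldn't by itself help kubelet-side mounts on nodes that can't reach cluster DNS, which is the core problem in this thread.

```yaml
# Sketch: an ExternalName Service gives in-cluster clients a stable
# DNS name that resolves (via CNAME) to an NFS server outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: ExternalName
  externalName: nfs.example.com   # the real, externally resolvable hostname
```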

Cole Arendt

unread,
Dec 15, 2020, 6:05:53 AM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

FWIW I have worked around this by using the nfs-server-provisioner helm chart and moving on with my life. I will say, something has changed about Helm's website, and now there seem to be two (identical?) listings for this chart, which is a bit weird.

https://artifacthub.io/packages/helm/kvaps/nfs-server-provisioner

Hope that helps! It has worked well enough for me! It would definitely be nice to have a fix though!

A coworker dug into the source and suspected the bug was here, in case it is any help: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/nfs/nfs.go#L256

Michelle Au

unread,
Dec 15, 2020, 12:20:16 PM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

The issue is that in some environments, the kubelet's host network does not have access to the cluster DNS. Using https://github.com/kubernetes-csi/csi-driver-nfs should resolve this, because it runs as a Pod and so has access to cluster services.
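
With that driver, a statically provisioned volume can use a cluster-DNS name for the server, along these lines (a sketch based on the driver's static-provisioning docs; field values are illustrative, so double-check against the driver version you deploy):

```yaml
# Sketch: an NFS volume mounted via the CSI driver (nfs.csi.k8s.io).
# Because the driver runs as a Pod, the server field can be a
# cluster-DNS service name rather than a hardcoded IP.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-csi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-server.default.svc.cluster.local/share  # any cluster-unique ID
    volumeAttributes:
      server: nfs-server.default.svc.cluster.local
      share: /
```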

h0jeZvgoxFepBQ2C

unread,
Dec 15, 2020, 12:23:05 PM12/15/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This does not apply to us, as far as I understand it: when I connect via a shell I can ping the NFS server directly via nfs-server-service, and the IP resolution works fine. So the kube-proxy knows where our NFS server is -- it's just that the volume mount doesn't.

andre-lx

unread,
Jan 18, 2021, 10:59:59 AM1/18/21
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

As @msau42 mentioned, I solved this issue using the https://github.com/kubernetes-csi/csi-driver-nfs
