GKE Private cluster - accessing master


Vinita

May 8, 2018, 3:01:02 PM
to Kubernetes user discussion and Q&A
I have created a private cluster and a VM in the same network. I added the VM's internal IP to the private cluster's master authorized networks. From the VM, after obtaining cluster credentials, I am not able to execute kubectl commands. However, if I add the VM's external IP to the master authorized networks, I am able to execute kubectl commands. This behavior is not consistent with the documentation. Not sure if I am missing something here.

Alan Grosskurth

May 9, 2018, 3:03:19 PM
to kubernet...@googlegroups.com, vjo...@etouch.net
Hi Vinita,

I believe the problem is that currently "gcloud container clusters get-credentials" always writes the master's external IP address to ~/.kube/config. So kubectl always talks to that external IP address (and the traffic leaves via the external IP address of the VM it's running on).

You should be able to modify ~/.kube/config on your VM to tell kubectl to talk to the master's internal IP address.

First, find the endpoint resource containing the master's internal IP address. For example:

    $ kubectl get endpoints kubernetes
    NAME         ENDPOINTS        AGE
    kubernetes   172.16.0.1:443   1d

Then open ~/.kube/config and find the section for your cluster. For example:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://104.198.205.71
      name: gke_myproject_us-central1-c_mycluster

Replace the external address (https://104.198.205.71) with the internal address (https://172.16.0.1). The kubectl command should now work, provided Master Authorized Networks allows access from the VM's internal IP address. Note that all of these IP addresses will be different depending on your environment.
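Instead of hand-editing ~/.kube/config, the same change can be made with `kubectl config set-cluster` (a sketch; the cluster name and internal IP below are taken from the example above, so substitute your own):

```shell
# Point kubectl at the master's internal IP instead of the external one.
# Cluster name and address are from the example above; use your own values.
kubectl config set-cluster gke_myproject_us-central1-c_mycluster \
    --server=https://172.16.0.1

# Verify which server address kubectl will now talk to.
kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
```

Note that `set-cluster` matches the entry by name, so the name must exactly match what get-credentials wrote.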

Let me know if this helps. I agree this isn't very straightforward---I'm looking into potential ways this setup could be improved.

Thanks,

---Alan


--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-use...@googlegroups.com.
To post to this group, send email to kubernet...@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.

Vinita

May 9, 2018, 4:58:19 PM
to Kubernetes user discussion and Q&A
Hi Alan,

Thanks for your reply. I tried your workaround, but the certificate is not valid for the master's internal IP address. I get the error below:

    Unable to connect to the server: x509: certificate is valid for 35.224.109.130, 10.118.16.1, 172.16.0.2, not 172.16.0.3

Thanks,
Vinita

Mayur Nagekar

May 9, 2018, 5:36:40 PM
to kubernet...@googlegroups.com
What does `kubectl get endpoints kubernetes` show in your case?

-Mayur





Vinita

May 14, 2018, 4:48:38 PM
to Kubernetes user discussion and Q&A
Hi Mayur,

Now I have created a new private cluster. I tried 2 scenarios:

Scenario 1
Executing kubectl commands from a VM in the same project, within the same network.
I added the VM's internal IP to the master authorized networks.
I connected to the cluster:

    gcloud container clusters get-credentials <cluster-name> --zone us-central1-a --project <project-name>

    $ kubectl get endpoints kubernetes
    NAME         ENDPOINTS        AGE
    kubernetes   172.16.0.3:443   1d

    kubectl config set-cluster <my-cluster-name> --server=https://172.16.0.3

When I try kubectl get services, it gives this error:

    Unable to connect to the server: x509: certificate is valid for 35.224.109.130, 10.118.16.1, 172.16.0.2, not 172.16.0.3

I changed the server again:

    kubectl config set-cluster <my-cluster-name> --server=https://172.16.0.2

Then it worked.

Scenario 2
Executing kubectl commands from a VM in a different project on the same network (VPN-peered network).

I added the VM's internal IP to the master authorized networks.
I connected to the cluster:

    gcloud container clusters get-credentials <cluster-name> --zone us-central1-a --project <project-name>

    kubectl config set-cluster <my-cluster-name> --server=https://172.16.0.2

When I try kubectl get services, I get the error below:

    Unable to connect to the server: x509: certificate signed by unknown authority

My use case is scenario 2, where I am trying to access the private cluster's master from a CICD project.
Any help is appreciated.
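One thing worth checking (an assumption on my part, not something confirmed in this thread): `kubectl config set-cluster` matches the cluster entry by name, and if the name passed doesn't exactly match the entry that get-credentials wrote, it silently creates a new entry with no certificate-authority-data, which produces exactly this "unknown authority" error. A quick way to see whether each entry still carries its CA data:

```shell
# List each cluster entry in the kubeconfig together with (the start of) its
# certificate-authority-data. An entry created by a mistyped set-cluster
# name will show an empty CA field.
kubectl config view --raw \
  -o jsonpath='{range .clusters[*]}{.name}{": "}{.cluster.certificate-authority-data}{"\n"}{end}' \
  | cut -c1-80
```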

Thanks,
Vinita


nikunj r

Jul 25, 2018, 12:41:53 PM
to Kubernetes user discussion and Q&A
We are also trying to work through the same scenario; posting this because I don't see any follow-up response to the query.
In order to run "kubectl get endpoints kubernetes", we need to be able to access the cluster. We do not have pod access to the Internet Gateway, and hence it does not work.

Is there a way to get the internal master IP via a command for a private cluster?

Thanks,
Nikunj
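For what it's worth (this is an assumption about gcloud's cluster description fields, not something verified in this thread), the master's private endpoint should also be readable directly from the GKE API, which requires only GCP API access, not network access to the cluster:

```shell
# Read the master's internal (private) endpoint from the cluster description;
# no network path to the cluster itself is required.
gcloud container clusters describe <cluster-name> \
    --zone us-central1-a \
    --format='value(privateClusterConfig.privateEndpoint)'
```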

Mauricio Castro

Sep 7, 2018, 12:49:57 PM
to Kubernetes user discussion and Q&A
Hey guys, I was facing this issue and just posted a thread with a similar subject, but after I saw your problem I tried exactly what was suggested here, and I got:

[mscastro@instance-1 ~]$ kubectl get endpoints kubernetes
NAME         ENDPOINTS        AGE
kubernetes   172.16.4.4:443   7h


I edited the kube config...

[mscastro@instance-1 ~]$ kubectl cluster-info
Kubernetes master is running at https://172.16.4.4

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: x509: certificate is valid for 35.199.121.133, 172.16.8.1, 172.16.4.2, not 172.16.4.4

My master range is 172.16.4.0/28, and `kubectl get endpoints kubernetes` (shown above) reports 172.16.4.4, but the error says the certificate is valid for another address in that range, 172.16.4.2. Both have 443 running, so I replaced 172.16.4.4 with 172.16.4.2 and it worked!
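A way to see up front which addresses the master's serving certificate actually covers, so you can pick a verifiable one (the IP below is from the example above, and this assumes openssl is available on the VM):

```shell
# Print the Subject Alternative Names of the master's serving certificate;
# kubectl will only verify the connection against an address listed here.
echo | openssl s_client -connect 172.16.4.2:443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```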

Thank you all!

I can now close the public IP off to the world forever. I actually don't even need it anymore; can I get rid of it?