Re: [kubernetes/kubernetes] kubectl: Bad creds get cached in ~/.kube/config, causing user confusion until expiry (or manual file edit) (#38075)


Michail Kargakis

Apr 23, 2017, 12:18:27 PM

@kubernetes/sig-cli-bugs



Matthew Tyler

May 8, 2017, 7:25:57 AM

I can look into fixing this if no one else has already made a start on it?

Fabiano Franz

May 8, 2017, 11:55:07 AM

@matt-tyler feel free to take it, thanks!

Matthew Tyler

May 20, 2017, 4:28:53 AM

OK, I think I have an understanding of what is happening:

  1. The call to gcloud auth application-default login stores credentials in a JSON file in a known location.

  2. On a subsequent credential-requiring kubectl command, the GCP auth provider plugin is loaded and the application default credentials are exchanged for an access token. That token is then cached in the .kube/config file (a rough sketch of this step is below).
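
Roughly, step 2 boils down to something like this. It's a simplified sketch of the mechanism, not the actual gcp plugin code: persistToken just stands in for the kubeconfig persister that writes values back into users[].user.auth-provider.config.

package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/oauth2/google"
)

// persistToken stands in for the kubeconfig persister; in kubectl the values
// end up under users[].user.auth-provider.config in ~/.kube/config.
func persistToken(cfg map[string]string) {
    fmt.Println("cached:", cfg)
}

func main() {
    // Reads the JSON written by `gcloud auth application-default login`
    // (or GOOGLE_APPLICATION_CREDENTIALS), i.e. the "known location" above.
    ts, err := google.DefaultTokenSource(context.Background(),
        "https://www.googleapis.com/auth/cloud-platform")
    if err != nil {
        panic(err)
    }
    // Exchange the application default credentials for an access token.
    tok, err := ts.Token()
    if err != nil {
        panic(err)
    }
    persistToken(map[string]string{
        "access-token": tok.AccessToken,
        "expiry":       tok.Expiry.Format(time.RFC3339),
    })
}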

So where do things go wrong?

As far as I can tell, if I switch to cluster-2 without refreshing the application default credentials for its separate account, the existing credentials will still be exchanged successfully for an OAuth2 token and cached against cluster-2. Calls to cluster-2 will then fail with that token, because it was issued for cluster-1's account.

The only sure-fire fix I can think of would be to ensure the application default credentials' client ID is stored in users[].user.auth-provider. Doing this would at least make it possible to verify that the default credentials are valid for that particular cluster and, if not, prompt the user with a message to correctly acquire the application default credentials for it. Let's call this option 1.

Of course, there is a problem here with determining what the client ID is in the first place. I think this would require that gcloud container clusters get-credentials <cluster-name> store the client ID in the generated config, which in turn would require exposing the client ID from oauth2/google.

Otherwise, perhaps storing the client ID the first time around would be enough, assuming the user at least manages to authenticate successfully against the cluster in the first instance. Then only oauth2/google would require a change, and not the gcloud command.
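
To make option 1 concrete, the check could look roughly like this. This is a hypothetical sketch only: the client-id config key and the currentADCClientID helper don't exist today, they're just what option 1 would add.

package gcp

import "fmt"

// currentADCClientID is a hypothetical helper that would read client_id out of
// the application default credentials JSON; this is the piece that would need
// to be exposed via oauth2/google.
func currentADCClientID() (string, error) {
    // ... read the ADC file and return its client_id ...
    return "", fmt.Errorf("not implemented in this sketch")
}

// validateADC compares the client ID recorded in the kubeconfig user entry
// (hypothetically written by `gcloud container clusters get-credentials`)
// against the client ID of the current application default credentials.
func validateADC(cfg map[string]string) error {
    stored := cfg["client-id"] // hypothetical config key added by option 1
    current, err := currentADCClientID()
    if err != nil {
        return err
    }
    if stored != "" && stored != current {
        return fmt.Errorf("application default credentials (client %q) do not match the ones this cluster entry was configured with (client %q); re-run `gcloud auth application-default login` with the right account", current, stored)
    }
    return nil
}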

The other option (option 2) would be, upon failure, to check whether CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS is true and, if so, warn the user that their application default credentials may be invalid for the target cluster, and either prompt them or clear the currently cached credentials in .kube/config. This would probably mean extending the auth provider interface with a function that is called on failure, so the plugin can inspect its own state and give a reasonable message about what might be wrong with the credentials that were provided.
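
For option 2, the interface change might look something like this. Again just a sketch: rest.AuthProvider is the existing client-go plugin interface, but the OnAuthFailure method and its signature are made up for illustration.

package auth

import restclient "k8s.io/client-go/rest"

// AuthProviderWithFailureHint is a hypothetical extension of the existing
// auth provider plugin interface.
type AuthProviderWithFailureHint interface {
    restclient.AuthProvider

    // OnAuthFailure would be called when a request comes back 401/403, letting
    // the plugin inspect its cached state (for example, detect stale application
    // default credentials when CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS
    // is set), optionally clear the cached token, and return a hint to show the user.
    OnAuthFailure(statusCode int) (userHint string, clearedCache bool)
}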

I feel like option 2 is probably the better option in the long term. Different auth plugins are likely to have different failure modes. Providing a way to communicate back to the user about possible edge cases seems a little more flexible.

Any thoughts?

Kubernetes Submit Queue

Jun 21, 2017, 4:32:32 PM

Closed #38075 via #46694.

Fred Cox

Jul 11, 2019, 3:15:34 AM

Just came across this. It's particularly strange to have to edit the config file when there is already a cache folder inside the .kube folder.

Robert Jerzak

Aug 2, 2019, 11:24:17 AM

Is it really fixed?
I can still reproduce it with the description @jdanbrown provided in the initial comment.

Grant Zietsman

Mar 30, 2020, 4:13:17 AM

I also reproduced this, unfortunately. I tried to use kubectl to access a cluster from an unauthorised account, then switched to the authorised account, but the invalid credentials were still cached. To resolve it, I set users[*].user.auth-provider.config.expiry to a date in the past.



Alex Woods

Jul 2, 2020, 4:25:05 PM

I believe I'm hitting this issue as well.

Raman Gupta

Nov 6, 2020, 8:21:07 PM

I just ran into this as well, gcloud 311.0.0, kubectl 1.18.6.

Akash Agarwal

Dec 13, 2020, 5:09:53 AM

I've got the same issue. I had to set the expiry time back a year, as @grant-zietsman pointed out.

Pavel Evstigneev

Jun 29, 2021, 4:12:06 PM

A solution that usually works for me:

- name: user-name-alias
  user:
    auth-provider:
      config:
        access-token: xxx
        cmd-args: config config-helper --format=json --account=my-servic...@project-name.iam.gserviceaccount.com
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2020-11-06T19:45:22Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Because the account is pinned via --account in cmd-args, each user entry refreshes its own credentials, so switching contexts picks up the right account.

Alex Woods

Jul 27, 2021, 11:26:12 AM

Here's how to expire all of your users automatically:

yq eval -i '.users[].user.auth-provider.config.expiry = "2020-01-01T12:00"' ~/.kube/config

You'll need yq installed (brew install yq).
