I can look into fixing this if no-one else has already made a start on it?
@matt-tyler feel free to take it, thanks!
OK, I think I have an understanding of what is happening.
The call to gcloud auth application-default login stores credentials in a JSON file in a well-known location.
On a subsequent kubectl command that requires credentials, the GCP auth provider plugin is loaded and the application default credentials are exchanged for a token. That token is then cached in the .kube/config file.
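For reference, the exchange described above looks roughly like this, sketched with golang.org/x/oauth2/google (the standalone program is only illustrative, not the plugin's actual code):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()

	// Loads whatever credentials gcloud wrote to the well-known ADC
	// location (e.g. ~/.config/gcloud/application_default_credentials.json).
	ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatalf("loading application default credentials: %v", err)
	}

	// The resulting access token is what ends up cached in .kube/config as
	// access-token/expiry, regardless of which cluster it was requested for.
	tok, err := ts.Token()
	if err != nil {
		log.Fatalf("exchanging credentials for a token: %v", err)
	}
	fmt.Println("token expires at:", tok.Expiry)
}
```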
So where do things go wrong?
As far as I can tell, if I switch to cluster-2 without refreshing the application default credentials from its separate account, the existing credentials will still be exchanged successfully for an OAuth2 token and cached against cluster-2. Those tokens will then fail on calls to cluster-2, because they were intended for cluster-1.
The only sure-fire fix I can think of would be to ensure the application default credentials client ID is stored in users[].user.auth-provider. Doing this would at least make it possible to verify that the default credentials are valid for that particular cluster and, if not, prompt the user with a message explaining how to correctly acquire the application default credentials for it. Let's call this option 1.
Of course, there is a problem here with determining what the client ID is in the first place. I think this would require gcloud container clusters get-credentials <cluster-name> to store the client ID in the generated config, which in turn would require exposing the client ID from oauth2/google.
Otherwise, perhaps storing the client ID the first time around would be enough, assuming the user at least manages to authenticate successfully against the cluster in the first instance. Then only oauth2/google would require a change, and not the gcloud command.
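To sketch what the option 1 check might look like: compare the client ID persisted in the kubeconfig against the client_id in the ADC file on disk before reusing or refreshing a cached token. The function and the stored value are hypothetical; only the well-known ADC path and its client_id field are real:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// adcFile mirrors the one field we care about in
// ~/.config/gcloud/application_default_credentials.json.
type adcFile struct {
	ClientID string `json:"client_id"`
}

// verifyClientID is hypothetical; stored stands in for a new
// users[].user.auth-provider.config key written at get-credentials time.
func verifyClientID(stored string) error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	path := filepath.Join(home, ".config", "gcloud", "application_default_credentials.json")

	raw, err := os.ReadFile(path)
	if err != nil {
		return fmt.Errorf("reading application default credentials: %w", err)
	}
	var adc adcFile
	if err := json.Unmarshal(raw, &adc); err != nil {
		return err
	}
	if adc.ClientID != stored {
		return fmt.Errorf("application default credentials no longer match the ones this cluster entry was generated with; rerun gcloud auth application-default login with the correct account")
	}
	return nil
}

func main() {
	if err := verifyClientID("example-client-id.apps.googleusercontent.com"); err != nil {
		fmt.Println(err)
	}
}
```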
The other option (option 2) would be, upon failure, to check whether CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS is true and, if so, warn the user that their application default credentials may be invalid for the target cluster, and either prompt them to re-authenticate or clear the currently cached credentials in .kube/config. This would probably mean extending the auth provider interface so that a plugin can supply a function to be called upon failure, allowing it to inspect the state and give a reasonable message about what might be wrong with the credentials that were provided.
I feel like option 2 is probably the better option in the long term. Different auth plugins are likely to have different failure modes. Providing a way to communicate back to the user about possible edge cases seems a little more flexible.
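To make option 2 a bit more concrete, here is a rough sketch of the kind of hook I have in mind. The FailureReporter interface and OnAuthFailure method are made up; only the WrapTransport/Login shape of the existing auth provider interface and the CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS variable come from the existing code:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"os"
)

// AuthProvider approximates the existing client-go plugin interface.
type AuthProvider interface {
	WrapTransport(http.RoundTripper) http.RoundTripper
	Login() error
}

// FailureReporter is the hypothetical extension: called when a request made
// with the provider's credentials is rejected, so the plugin can inspect its
// own state and explain what is likely wrong.
type FailureReporter interface {
	OnAuthFailure(statusCode int) error
}

type gcpProvider struct{}

func (p *gcpProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper { return rt }
func (p *gcpProvider) Login() error                                         { return nil }

func (p *gcpProvider) OnAuthFailure(statusCode int) error {
	if statusCode != http.StatusUnauthorized {
		return nil
	}
	if os.Getenv("CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS") == "true" {
		return errors.New("the cached token came from application default credentials that may belong to a different account; " +
			"rerun gcloud auth application-default login or clear the cached token in .kube/config")
	}
	return nil
}

func main() {
	var p AuthProvider = &gcpProvider{}
	if fr, ok := p.(FailureReporter); ok {
		if err := fr.OnAuthFailure(http.StatusUnauthorized); err != nil {
			fmt.Println(err)
		}
	}
}
```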
Any thoughts?
Just came across this; it's particularly strange to have to edit the config file when there is a cache folder in the .kube folder.
Is it really fixed?
I can still reproduce it with the description @jdanbrown provided in the initial comment.
I also reproduced this, unfortunately. I tried to use kubectl to access a cluster from an unauthorised account, then switched to the authorised account, but the invalid credentials were cached. To resolve it, I set users[*].user.auth-provider.config.expiry to a date in the past.
Believe I'm hitting this issue as well
I just ran into this as well, gcloud 311.0.0, kubectl 1.18.6.
I've got the same issue. I need to set the expiry time back a year, as @grant-zietsman pointed out.
Solution that usually works for me:

```yaml
- name: user-name-alias
  user:
    auth-provider:
      config:
        access-token: xxx
        cmd-args: config config-helper --format=json --account=my-servic...@project-name.iam.gserviceaccount.com
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2020-11-06T19:45:22Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
So it will use different credentials when switching contexts, since each user entry pins its own account via cmd-args.
To force a token refresh, reset the cached expiry:

```sh
yq eval -i '.users[].user.auth-provider.config.expiry = "2020-01-01T12:00"' ~/.kube/config
```

You'll need yq installed (brew install yq).