cc @kubernetes/sig-cli-feature-requests @kubernetes/sig-api-machinery-feature-requests
@deads2k Is there a way to easily parse an existing context out of the kubeconfig so I can create individual kubeconfigs? Something like --minify, but with a --context parameter?
#64608 added code to honor `--context` in `kubectl config view --minify` in 1.11, so you can now do the following:

```sh
(umask 0077 && kubectl config view --minify --flatten --context=somecontext > local.kubeconfig)
export KUBECONFIG=local.kubeconfig
```
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Can the scope of this just be changed to remove context, since that is already covered by `KUBECONFIG`?

So only add `KUBECTL_NAMESPACE`.

This would alleviate the concern about competing environment configuration information.
> Can the scope of this just be changed to remove context, since that is already covered by `KUBECONFIG`?
>
> So only add `KUBECTL_NAMESPACE`.
>
> This would alleviate the concern about competing environment configuration information.
Namespace is also covered by kubeconfig, so that would still introduce competing values. The comment at #60044 (comment) still holds, just with one fewer envvar.
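To make the competing-values concern concrete, a hypothetical sketch (the envvar does not exist today; names are made up):

```sh
# The kubeconfig's current context already pins a namespace...
kubectl config set-context --current --namespace=team-a
# ...while a proposed KUBECTL_NAMESPACE, inherited from the environment,
# says otherwise:
export KUBECTL_NAMESPACE=team-b
kubectl get pods   # ambiguous: the two configuration sources now compete
```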
Tools like gcloud write new context entries to `$KUBECONFIG[0]` or `$HOME/.kube/config`, obscuring the multi-config solution and making it less likely that new users will be guided to do the suggested "right thing" of using multiple config files.

The workarounds suggested above using aliases are helpful but also broken, because they introduce unpredictable behaviour across command-line usage and script usage of `kubectl`. A user who's comfortably executing commands interactively in one context, leveraging the magic of their aliases that insert `--context="${KUBECTL_CONTEXT}"`, will eventually run a script that calls `kubectl` without `--context="${KUBECTL_CONTEXT}"` and experience deep pain.

By canonizing `KUBECTL_CONTEXT` and defining its behaviour in `kubectl`, we could have a solution that is simple, consistent, and defined for all users of `kubectl`. @liggitt's comments about other config file consumers still apply, but as someone who's recently been burned quite badly by the current behaviour, I'm hopeful that this proposal will be reconsidered.
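To make the alias hazard above concrete, a minimal sketch (context name and script are illustrative):

```sh
# Interactive shell: the alias quietly pins the context
export KUBECTL_CONTEXT=staging
alias kubectl='kubectl --context="${KUBECTL_CONTEXT}"'
kubectl get pods   # hits staging, as intended

# deploy.sh calls kubectl directly; aliases don't apply inside scripts,
# so it silently uses whatever current-context the kubeconfig holds
./deploy.sh
```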
The problem with aliases is that they read the value of variables like `KUBECTL_NAMESPACE` at the time the alias is assigned, not when it is called. For example:

```sh
export KUBECTL_NAMESPACE="before"
alias foo="echo $KUBECTL_NAMESPACE"
foo
> before
export KUBECTL_NAMESPACE="after"
foo
> before
alias foo="echo $KUBECTL_NAMESPACE"
foo
> after
```
So while this does work, it's still a bother. I've been working around this so far by wrapping my variable declaration into the alias command and just re-running that:

```sh
export KUBECTL_NAMESPACE=au10561 && alias k='kubectl --namespace="$KUBECTL_NAMESPACE"'
```
Anyway, I think I've found a better solution, which is to use a function (as suggested by @chancez waaaay back in 2016):

```sh
function k() { kubectl --namespace="$KUBECTL_NAMESPACE" --context="$KUBECTL_CONTEXT" "$@"; }
```

Using this, I can change `KUBECTL_NAMESPACE` or `KUBECTL_CONTEXT` at any time, and the function will read the variable as it is at the time the function is used, rather than declared.
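For example (context and namespace names are made up):

```sh
export KUBECTL_CONTEXT=cluster-a KUBECTL_NAMESPACE=dev
k get pods   # runs against cluster-a, namespace dev

export KUBECTL_NAMESPACE=prod
k get pods   # the same function now targets prod: the variable is
             # read when the function runs, not when it was defined
```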
Should also be possible to just escape `$` so it isn't directly evaluated.
@sbueringer is right, and this works even better, because it turns out the function doesn't support kubectl's autocompletions. Here's what I'm using now, and it seems quite successful:

```sh
alias k="kubectl --namespace=\${KUBECTL_NAMESPACE} --context=\${KUBECTL_CONTEXT}"
```
Hopefully that's helpful to others, too.
You can add something like this to your Bash profile to dynamically support environment variables:

```sh
function k() {
  args=()
  if ! [[ -z "$KUBE_NS" ]]; then
    args+=( "--namespace=$KUBE_NS" )
  fi
  if ! [[ -z "$KUBE_CTX" ]]; then
    args+=( "--context=$KUBE_CTX" )
  fi
  kubectl "${args[@]}" "${@}"
}
```

Then, even if you have `$KUBE_NS` set, you can still do:

```sh
k -n example get pods
```
To use different contexts in different terminals/shells and at the same time have this configuration picked up by other tools like helm, I use the following in my `.bash_profile`:

```sh
kc(){
  export KUBECONFIG=$(mktemp -t kubeconfig)
  cat ~/.kube/config >> $KUBECONFIG
  kubectl config use-context $1 > /dev/null && kn default
}

kn(){
  kubectl config set-context --current --namespace=$1 > /dev/null
}
```

It simply copies the default config to a temp one and sets KUBECONFIG to point to it.

Example usage:

- `kc dev` - switch to the dev context
- `kn kube-system` - switch to the kube-system namespace
And to get bash completion, these functions can be used (from @ahmetb's comment above):

```sh
_kube_contexts()
{
  local curr_arg;
  curr_arg=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "$(kubectl config get-contexts --output='name')" -- $curr_arg ) );
}
complete -F _kube_contexts kc

_kube_namespaces()
{
  local curr_arg;
  curr_arg=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "$(kubectl get namespaces -o=jsonpath='{range .items[*].metadata.name}{@}{"\n"}{end}')" -- $curr_arg ) );
}
complete -F _kube_namespaces kn
```
We have a few clusters we are trying to make easier for our users to use. The kubeconfig files are generic as no user secrets are involved. We're using krb5 spnego to do the authentication. So we should be able to share the kubeconfig files between multiple users.
The only wrinkle is letting the user easily change the namespace. Copying the entire kubeconfig to customize is pretty heavy.
@kfox1111 Will this be any help?

```sh
set-namespace() {
  kubectl config set-context $(kubectl config current-context) --namespace="${1:-default}"
}

_set_namespace() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  local namespaces=$(kubectl get ns -o json | jq .items[].metadata.name | xargs)
  COMPREPLY=( $(compgen -W "${namespaces}" -- $cur) )
}
complete -F _set_namespace set-namespace
```
The tricky bit is that setting the namespace in the context means the kubeconfig file needs to be edited. I was hoping to use one kubeconfig per cluster for all users. An environment variable would work for that.

From a quick glance at kubens, it looks like it edits the kubeconfig. So, same issue.

Our login nodes have various shells installed, and each user gets to pick their own shell. So while we could write functions/aliases, we'd have to write several of them for different shells, adding more complexity. If kubectl honoured an env variable, it would be pretty seamless.
Looks like after 3 years this is still not possible. Are PRs here still welcome? If so, I would look into it.
The request is still present, but it is still not clear that this would be a good thing to add.
/close
If there's a desire to add this functionality, a Kubernetes Enhancement Proposal (KEP) should be opened that describes how the new options would interact with existing scripts that use kubectl, as well as the impact on other client libraries that make use of the KUBECONFIG envvar to match client-go behavior across languages.
sig-cli (from a kubectl perspective) and sig-api-machinery (from a client-go/client library perspective) would be the SIGs most involved in reviewing that proposal. If there's interest in moving forward with this, I'd suggest raising this in those SIGs' meetings.
Closed #27308.
@liggitt: Closing this issue.
In response to this:

> The request is still present, but it is still not clear that this would be a good thing to add.
>
> /close
>
> If there's a desire to add this functionality, a Kubernetes Enhancement Proposal (KEP) should be opened that describes how the new options would interact with existing scripts that use kubectl, as well as the impact on other client libraries that make use of the KUBECONFIG envvar to match client-go behavior across languages.
>
> sig-cli (from a kubectl perspective) and sig-api-machinery (from a client-go/client library perspective) would be the SIGs most involved in reviewing that proposal. If there's interest in moving forward with this, I'd suggest raising this in those SIGs' meetings.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm still interested in this.
What impact do you envision? I'm not seeing how adding a new environment variable would affect existing clients using KUBECONFIG.
The comment at #60044 (comment) outlines the impact: a script that explicitly sets up the complete client config it wants (by specifying the KUBECONFIG envvar or by passing --kubeconfig) would pick up namespace or context changes from the environment.
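A sketch of the kind of script that would be affected (paths and file names are hypothetical):

```sh
#!/bin/sh
# This script pins its complete client config on purpose:
export KUBECONFIG=/etc/ci/prod.kubeconfig
kubectl apply -f release.yaml
# If kubectl honored KUBECTL_NAMESPACE/KUBECTL_CONTEXT, a stray value
# inherited from the caller's environment would silently redirect this
# apply to a different namespace or cluster.
```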
Ah, I understand. Thanks for the link. So the issue is, there's a mix of things going on right now, and fixing one might break another.
KUBECONFIG can currently be used to easily switch between clusters using a single config. This is good. You can only easily switch namespaces by writing user-wide to that KUBECONFIG (you can't easily share KUBECONFIGs) or by making a KUBECONFIG per cluster+namespace (hard to maintain; you get a lot of redundancy).
What about a 3rd option? An "include" feature in a kubeconfig, or some kind of include path? You could then come up with a common kubeconfig that had all of your clusters' settings in it and was still read-only, but the outermost kubeconfig would have to be writable, enabling namespace switching?

Maybe something like `KUBECONFIG=/shared/mycluster.conf:~/.kube/mycluster.conf`?
> Maybe something like `KUBECONFIG=/shared/mycluster.conf:~/.kube/mycluster.conf`?
Done, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#merging-kubeconfig-files :)
users.yaml:

```yaml
apiVersion: v1
kind: Config
users:
- name: me
  user:
    token: my-token
```

clusters.yaml:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dev
  cluster: { server: https://dev }
- name: prod
  cluster: { server: https://prod }
- name: test
  cluster: { server: https://test }
```

contexts.yaml:

```yaml
apiVersion: v1
kind: Config
contexts:
- name: dev
  context:
    cluster: dev
    user: me
- name: prod
  context:
    cluster: prod
    user: me
- name: test
  context:
    cluster: test
    user: me
```

current.yaml:

```yaml
apiVersion: v1
kind: Config
current-context: dev
```

Combine the files in the KUBECONFIG envvar:

```sh
export KUBECONFIG=current.yaml:clusters.yaml:users.yaml:contexts.yaml
```

See the effective config:

```sh
kubectl config view --minify
```

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://dev
  name: dev
contexts:
- context:
    cluster: dev
    user: me
  name: dev
current-context: dev
kind: Config
preferences: {}
users:
- name: me
  user:
    token: my-token
```

Credential modifications go to the file that defined the user:

```sh
kubectl config set-credentials me --token=changed-token
cat users.yaml
```

```yaml
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: me
  user:
    token: changed-token
```

Context modifications go to the first file that defined current-context:

```sh
kubectl config use-context test
cat current.yaml
```

```yaml
apiVersion: v1
clusters: []
contexts: []
current-context: test
kind: Config
preferences: {}
users: []
```

Newly added things go in the first file specified:

```sh
kubectl config set-credentials user2 --token=token2
cat current.yaml
```

```yaml
apiVersion: v1
clusters: []
contexts: []
current-context: test
kind: Config
preferences: {}
users:
- name: user2
  user:
    token: token2
```
Hmm... that's very close indeed. Thanks for the pointers. :)

It looks like in order to get this to work, I still need to copy a small context into the user's home directory so they can edit it successfully? Otherwise, though, I think it will all work.
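A sketch of that layout, reusing the file names from the example above (the /shared paths are hypothetical):

```sh
# Each user keeps only the small, writable current-context file...
cp /shared/kube/current.yaml ~/.kube/my-current.yaml

# ...and lists it first, ahead of the shared read-only pieces:
export KUBECONFIG=~/.kube/my-current.yaml:/shared/kube/clusters.yaml:/shared/kube/users.yaml:/shared/kube/contexts.yaml

# use-context writes to the first file that defines current-context,
# i.e. the user's own copy; the shared files stay untouched
kubectl config use-context test
```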
Thanks!
@liggitt Tried it, but it fails. Issue raised here: kubernetes/kubectl#708
New to Kubernetes... so should we be using kubectl or kubectx now?
https://ahmet.im/blog/kubectx/
The ability to choose context based on an environment variable would be incredibly helpful.
I juggle a couple of dozen clusters, some of them need to interact with each other. Most of my automation tooling expects a single kubeconfig file to contain clusters to work with, so splitting my kubeconfig into 25+ configs (some that come or go over time) is a major hassle. To perform actions on, say, ClusterAlpha in terminal1, and ClusterBeta in terminal2 simply doesn't work, as the current context is specified within the file.
If there's a better way of achieving this sort of workflow, I'd be very interested in hearing it.
I had similar problems. That's why I wrote: https://github.com/sbueringer/kubectx

Mostly built for my personal use cases, but maybe it helps.
I see some similar solutions in this thread, but this one allows you to still use `kubectl` as your command, instead of aliasing it to some shortcut. It's nearly identical to the others, except the sub-shell call doesn't end up calling the alias again, but rather the `kubectl` binary directly:

```sh
alias kubectl="kubectl --context \${KUBE_CONTEXT:-\$(command kubectl config current-context)}"
```

```console
$ alias kubectl="kubectl --context \${KUBE_CONTEXT:-\$(command kubectl config current-context)}"
$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   0d14h   v1.16.2
$ KUBE_CONTEXT=nonprod
$ kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
nonprod-core-nodes-xxx   Ready    <none>   2d4h   v1.12.8-gke.7
```
Did anyone write a Kubernetes Enhancement Proposal (KEP) for this?
> Done, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#merging-kubeconfig-files :)
One of the problems with this is that many tools that we (developers) use (e.g. minikube, minishift, ...) write directly to the first file in the KUBECONFIG env var, no matter whether that value already existed in some of the files in the KUBECONFIG list as described by @liggitt, so this is not helpful in many scenarios.
The K8S ecosystem really needs a clean, flexible way to manage and switch between contexts for multiple clusters/users.
@peterloron have a look at https://github.com/ahmetb/kubectx
kubectx does not allow for multiple environments. For example:
Terminal A in context A, Terminal B in context B.
When working with a lot of clusters or namespaces, this can cause issues where commands are accidentally run in the wrong place.
@kfox1111 Yup, that's exactly why I've written my own version of it which does that :)
https://github.com/sbueringer/kubectx
I also wrote my own solution to this problem, which I call kckn. It's two small, out-of-the-way shell scripts which enable you to use per-shell kube contexts, while letting you manage your users and contexts in your base kubeconfig. kckn doesn't ever modify your base kubeconfig, and it should be compatible with any tool that supports the `KUBECONFIG` env var.
my solution here
Add the following to your shell rc file (`~/.bashrc` or `~/.zshrc`, etc.):

```sh
function kctx() {
  BASE="kubectl --context=$1"
  # the following line is for the iTerm2 badge (https://www.iterm2.com/documentation-badges.html); comment it out if you don't need it
  # using an iTerm badge adds a nice visual hint for the current context and makes quick tab switching possible with Command+Shift+O
  printf "\e]1337;SetBadgeFormat=%s\a" $(echo -n "$1" | base64)
  alias k="$BASE"
  alias kg="$BASE get"
  alias ke="$BASE edit"
  alias kgp="$BASE get pod"
  alias kde="$BASE describe"
}
```
I think @liggitt provided the best solution yet, since it works with standard tools. The only problem is that he provided so many examples that the main point may have been obscured. All you need is a set of minimal config files to define the different contexts and an easy way to switch.

There is one thing required to make this practical, though: the context-defining config files should be read-only, so that they can't be accidentally overwritten by tools like `kubectx`, which would in turn lead to never-ending confusion.
To set these files up I suggest something like:

```sh
kubectl config view -o jsonpath='{range .contexts[*]}{.name}{"\n"}{end}' | while read context
do
  conf=$HOME/.kube/$context-context
  cat >$conf <<CONF
apiVersion: v1
kind: Config
current-context: $context
CONF
  chmod -w $conf
done
```
Then you can put this into your `~/.bashrc` to get an alias `kctx` for easy switching, including tab completion:

```sh
export KUBECONFIG
alias kctx='printf -v KUBECONFIG "$HOME/.kube/%s-context:$HOME/.kube/config"'
__kctx () {
  COMPREPLY=($(compgen -W "$(basename -s -context $HOME/.kube/*-context)" "${COMP_WORDS[1]}"));
}
complete -F __kctx kctx
```
This still works nicely together with tools like `kubectx`. As long as you don't invoke this alias, the standard configuration applies, and with it the (global) default context. After the first invocation of `kctx`, another call of `kubectx` would fail, because it can't overwrite the first config file listed in `KUBECONFIG`. I find this a rather elegant way to resolve the conflict between these use cases (i.e. global default context vs. shell-session-specific contexts).
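A sketch of that interplay (context names are illustrative; the exact error depends on the tool):

```sh
kctx dev                         # KUBECONFIG now lists the read-only dev-context file first
kubectl config current-context   # prints "dev"
kubectx prod                     # fails: it would have to overwrite the
                                 # read-only first file in KUBECONFIG
```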
The lack of env var loading interacts somewhat badly with the plugin interface's restriction that flags come before the plugin name: https://github.com/kubernetes/kubernetes/pull/92343/files

Currently, it is not possible to mix plugins with aliases that set global opts; being able to set the context with env vars instead would work around that. Using sh functions instead of aliases is possible, but the lack of variable closure in functions (for most shells) makes functions more of a headache than aliases. (Still doable, just... unclean and slower: `eval "$aliasname() { kubectl \"\$@\" --context \"$aliascontext\"; }"`)
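Expanded into a small generator, that eval workaround could look like this (function and variable names are made up):

```sh
# eval bakes the context value in at definition time, working around the
# lack of closures in shell functions. Appending --context after "$@"
# keeps it usable with plugin invocations, which reject leading global flags.
make_kctx_fn() {
  local aliasname=$1 aliascontext=$2
  eval "$aliasname() { kubectl \"\$@\" --context \"$aliascontext\"; }"
}

make_kctx_fn kprod prod-cluster   # illustrative context name
kprod get nodes                   # always runs against prod-cluster
```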
This would be very useful for me too...
In my mind the use case is covered. It's just that instead of additional `KUBECTL_CONTEXT` and `KUBECTL_NAMESPACE` env vars, only `KUBECONFIG` is honored, but that is flexible enough, since it allows you to combine configs. E.g. if you have a namespace `nsname` and a context `ctxname`, you can set up appropriate `nsname-ns` and `ctxname-context` configs and set `KUBECONFIG` to something like `/home/myuser/.kube/nsname-ns:/home/myuser/.kube/ctxname-context:/home/myuser/.kube/config`.

I describe above how to automate this and make it more failure-proof. That should also work seamlessly with plugins, since `KUBECONFIG` is always honored.
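A sketch of what such an override file could contain (all names are placeholders; this relies on the merge rule shown earlier, where the first file to define a key wins):

```sh
# The namespace lives on the context, so the override file redefines the
# context with the namespace set; listed first in KUBECONFIG, it wins the merge.
cat > ~/.kube/nsname-ns <<'CONF'
apiVersion: v1
kind: Config
current-context: ctxname
contexts:
- name: ctxname
  context:
    cluster: ctxname   # must match the cluster name in the base config
    user: me           # must match the user name in the base config
    namespace: nsname
CONF
```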
This works for me in Bash 5 and doesn't break if the environment variable isn't set:

```sh
export KUBECTL_CONTEXT="--context=ProdCluster"
alias kubectl='kubectl ${KUBECTL_CONTEXT}'
```

Now I can actually compare our 4 clusters from different terminals at the same time.
> [...] you can put this into your `~/.bashrc` to get an alias `kctx` for easy switching, including tab completion. [...]
I realized that this alias may not be the most practical solution for many, since it requires running the preparation script to get the cluster-specific configs in place (and if you get new cluster configurations you'd need to rerun it). Therefore I've now put this together in a self-contained script which creates the necessary files transparently and also avoids cluttering `~/.kube`: https://gist.github.com/bitti/183771a7308b030d933dbe4ea9c5cc9f. It doesn't support namespaces yet, but that could easily be added if there is a need.
During my research, however, I found that there are similar ready-to-use solutions available, i.e. https://github.com/aabouzaid/kubech and https://github.com/sbstp/kubie. Kubie does many other things as well, though.
So, what is the currently recommended way to control different clusters in different terminal windows?
@ahmetb's `kubectx` doesn't support it yet, but he is thinking about it: ahmetb/kubectx#12 (comment). All solutions work by setting `KUBECONFIG` to a separate config; what the tools help with is doing this transparently, so that you don't have to do it manually.
> So, what is the currently recommended way to control different clusters in different terminal windows?

* https://github.com/ahmetb/kubectx ?
* https://github.com/sbueringer/kubectx ?
* Having separate config files and choosing them by setting KUBECONFIG to their full path?
I'm using direnv in combination with `KUBECONFIG`: just switch to a specific folder/subfolder.
I don't know direnv; how does it work? You switch to a different directory and then call direnv to load a .env file or something?
@AndreKR direnv changes your shell environment based on the directory you're in. So you can define different settings for KUBECONFIG for different working directories, which then get activated as soon as you cd into those. Great for stuff that you're regularly working on, but not made for ad-hoc changes to KUBECONFIG.
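A minimal `.envrc` sketch (directory and file names are illustrative):

```sh
# ~/work/cluster-a/.envrc
export KUBECONFIG=$HOME/.kube/cluster-a.yaml
```

After a one-time `direnv allow` in that directory, direnv exports `KUBECONFIG` whenever you enter it and restores the previous environment when you leave.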
What do you mean it "changes my shell environment"? It's a replacement for cmd.exe?
@AndreKR, @bitti I've been using a shell script for over a year as a wrapper around kubectx but that had too many cumbersome manual steps that I had to deal with when I changed jobs.
So I wrote a program that does it really well!
Check out kubesess. It handles a per-shell config using `$KUBECONFIG`.
I'm still using a shell wrapper but the program is designed to work with it.
Would love some feedback if there are features needed or bugs present.
Another option here is to use Lens - the terminal window is automatically configured with the appropriate context.
> Another option here is to use Lens - the terminal window is automatically configured with the appropriate context.
I just tried that, but I ran into a bit of a catch-22. I can only get to the terminal window if I can connect to the cluster, but I cannot connect to the cluster without a terminal window, because it's an EKS cluster and I need to set the AWS credentials.
Still have to find a solution for this that is not awkward.
> @AndreKR direnv changes your shell environment based on the directory you're in. So you can define different settings for KUBECONFIG for different working directories, which then get activated as soon as you cd into those. Great for stuff that you're regularly working on, but not made for ad-hoc changes to KUBECONFIG.
Thank you for the hint. Direnv works fine.
> @AndreKR, @bitti I've been using a shell script for over a year as a wrapper around kubectx but that had too many cumbersome manual steps that I had to deal with when I changed jobs.
>
> So I wrote a program that does it really well!
I find it astonishing that this issue hasn't gotten any real attention in 9 years. It would be a really simple and reliable fix for `kubectl` and the other tools to use a `KUBE_CONTEXT` environment variable for per-shell kube-tool context switching. Things would just work so much more cleanly that way.

@AndreKR's kubesess seems like one of the best options to do this, using the path'd KUBECONFIG to have the context specified in the first (cached) config. It works reliably with direnv setups etc.

I know devs are overwhelmed, but there is so much interest in fixing this "at the source", and the fix is trivial...

I've read some of the other threads on this topic, and while I agree there needs to be consistency across the ecosystem, not introducing this just because other tools might not be aware of it holds everyone back. When the multi-config KUBECONFIG was introduced, I'm sure the same transition had to happen.
I use direnv. For every environment I usually have a directory and a `.envrc` file. `cd some-env` is enough to set the environment (kubectl config is just one of many things). This works fine.