Kubeconfig File Locking

Ross Peoples

Feb 9, 2022, 1:06:04 PM
to d...@kubernetes.io, kubernete...@googlegroups.com
Hello all,

During the latest SIG-CLI meeting, we discussed the following PR:

One of the issues brought up during discussions was the introduction of a new dependency in client-go. On the call, we determined that we could easily reimplement that functionality without the additional dependencies.

However, the question was also asked whether we want to remove file locking entirely, since the general guidance is to avoid concurrent changes to kubeconfig. The goal here is to get feedback from the community on whether we should continue to support file locking via a more robust and atomic mechanism, or phase out kubeconfig file locking entirely.
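For the record, the "more robust and atomic mechanism" under discussion could be something like the classic write-to-temp-then-rename pattern. A minimal sketch, assuming a POSIX filesystem; the package and function names here are hypothetical, not the PR's actual code:

```go
package kubeconfig // hypothetical package name

import (
	"os"
	"path/filepath"
)

// writeFileAtomic writes data to a temporary file in the same
// directory and then renames it over the target. On POSIX systems
// rename(2) is atomic within a filesystem, so readers never observe
// a half-written kubeconfig.
func writeFileAtomic(path string, data []byte, mode os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup on early failure

	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Chmod(mode); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}
```

Note that this only prevents torn writes; it does nothing about two writers racing through a read-modify-write cycle, which is the case file locking actually covers.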

Please let me know your thoughts. Thanks,

Ross

Davanum Srinivas

Feb 9, 2022, 4:59:08 PM
to rpeo...@redhat.com, d...@kubernetes.io, kubernete...@googlegroups.com
+1 to phase out file locking!

--
Davanum Srinivas :: https://twitter.com/dims

Benjamin Elder

Feb 9, 2022, 5:51:09 PM
to Davanum Srinivas, rpeo...@redhat.com, d...@kubernetes.io, kubernetes-sig-cli
FWIW, I think it would be easier to phase out locking if we had better support for multiple kubeconfig files and split out the current context (https://github.com/kubernetes/kubectl/issues/569#issuecomment-1033123302), but I don't think there has been any progress on this.

Right now, I'm not sure how software like sigs.k8s.io/kind should prevent kubeconfig-merge bugs without locking.
We could just write to many distinct files ourselves, but users expect clusters to "just work" and be configured without setting `KUBECONFIG=foo:bar:baz`, and a number of deployment tools are writing to kubeconfig with client-go today.
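For reference, the multi-file merge Ben describes is what client-go's clientcmd loading rules already implement; a minimal sketch (the file names are placeholders):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Merge several kubeconfig files the same way kubectl treats
	// KUBECONFIG=foo:bar:baz; the first file to set a value wins.
	rules := &clientcmd.ClientConfigLoadingRules{
		Precedence: []string{"foo", "bar", "baz"},
	}
	merged, err := rules.Load()
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", merged.CurrentContext)
}
```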

Jason DeTiberus

Feb 9, 2022, 5:55:28 PM
to dev, Benjamin Elder, rpeo...@redhat.com, d...@kubernetes.io, kubernetes-sig-cli, dav...@gmail.com
To echo what Ben said, there is a lot of tooling out there that interacts with '/etc/kubernetes/admin.kubeconfig', from home-grown Terraform/Ansible scripts to various other projects and products that do k8s cluster lifecycle management.

I think prematurely removing support for file locking, without some type of deprecation period and an alternative, would be a mistake for users who rely on it to keep external tooling (and local users) from stepping on each other.

--
Jason DeTiberus

Taahir Ahmed

Feb 9, 2022, 6:02:35 PM
to jdeti...@packet.com, dev, Benjamin Elder, rpeo...@redhat.com, kubernetes-sig-cli, dav...@gmail.com
> However, the question was also asked whether we want to remove file locking entirely, since the general guidance is to avoid concurrent changes to kubeconfig.

How are users supposed to prevent themselves from concurrently modifying kubeconfig, in the general case? Many tools will modify it as a side effect.
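For readers following along: the locking being debated is (roughly) a sidecar lock file that client-go creates next to the kubeconfig before modifying it. A minimal sketch of that style of coordination; the retry policy and timeout here are illustrative, not client-go's actual values:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// lockKubeconfig sketches sidecar-lock-file coordination: creating
// "<path>.lock" with O_CREATE|O_EXCL is atomic, so at most one
// process can hold the lock at a time.
func lockKubeconfig(path string) (unlock func() error, err error) {
	lockPath := path + ".lock"
	for i := 0; i < 50; i++ {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() error { return os.Remove(lockPath) }, nil
		}
		if !os.IsExist(err) {
			return nil, err
		}
		time.Sleep(100 * time.Millisecond) // another process holds the lock
	}
	return nil, fmt.Errorf("timed out waiting for %s", lockPath)
}

func main() {
	unlock, err := lockKubeconfig(os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		panic(err)
	}
	defer unlock()
	// ... read-modify-write the kubeconfig here ...
}
```

One known weakness of this scheme is that a crashed process leaves a stale .lock file behind, which is part of why OS-level advisory locks keep coming up as an alternative.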

Ken Sipe

Feb 9, 2022, 9:02:23 PM
to rpeo...@redhat.com, d...@kubernetes.io, kubernete...@googlegroups.com
It is great to get community feedback! Would it also make sense to gather data on projects that leverage client-go and/or the kubeconfig file? I would expect there are a significant number of projects that have expectations about kubeconfig. The lock isn't just in-project protection; it is also a cross-tool collaboration feature. Perhaps we could get feedback from CNCF projects (which may not find this email thread).

I am for a platform-specific solution. However, it is useful to note that there is a Golang design proposal to expose this functionality from Go (it is currently buried under /internal). Details were posted on the GH issue noted at the start of this email thread.
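To illustrate the platform-specific route: on Unix-like systems an advisory flock(2) lock is released by the kernel when the process exits, avoiding stale lock files. A minimal sketch using golang.org/x/sys/unix; the package and helper names are made up, and Windows would need LockFileEx instead:

```go
//go:build linux || darwin

package filelock // hypothetical package name

import (
	"os"

	"golang.org/x/sys/unix"
)

// flockExclusive takes an advisory exclusive lock on path and returns
// a release function. The kernel drops the lock automatically if the
// process dies, so there is no stale-lock-file problem.
func flockExclusive(path string) (release func() error, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		f.Close()
		return nil, err
	}
	return func() error {
		defer f.Close()
		return unix.Flock(int(f.Fd()), unix.LOCK_UN)
	}, nil
}
```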

Thanks for driving this!
Ken 

