- Dogfood kustomize by either:
  - moving one or more of our own (OSS Kubernetes) services to it.
  - getting user feedback from one or more mid or large application deployments using kustomize.
- Publish kustomize as a subcommand of kubectl.
Hi Matt,

I was saddened to read your email and sorry that this PR has come as a surprise to you or anyone else. I'd prefer not to continue this discussion on New Year's Eve, but do want to briefly address some of the concerns in your email. I'll hold off on sending my detailed response to your email until tomorrow.

Following is my understanding of the process we followed. Do correct me on anything where I have gotten a detail wrong.
- The PR for this change was sent Nov 7 and discussed on the PR for over a month before it was merged - this should have been plenty of time for discussion on the PR itself.
- It was reviewed both by (non-kustomize contributor) SIG CLI maintainers such as Maciej (@soltysh) and by respected community members that contribute outside SIG CLI (non-kustomize) such as Jordan (@liggitt) and Clayton (@smarterclayton) - it wasn't done by "friends" in a back room.
- The PR was merged December 19th after addressing the feedback given during its month-long review. This was the Tuesday of the week before the typical holiday break.
- The PR was cc'ed to the sig-cli mailing list on Nov 15 with the title "PR for enabling kustomize in kubectl" - a full month before it was merged - so that it didn't get lost in GitHub notification filters - so anyone subscribed to the sig cli mailing list should have had ample opportunity to weigh in.
- The PR was discussed at every sig-cli meeting taking place in November and December and recorded in the notes for anyone to look at.
- Deviations from what was proposed in the KEP were to address feedback from the PR review.
- The KEP proposal for this was merged in May (7 months ago). The most vocal skeptic of kustomize (Joe @jbeda) was directly solicited as an approver for this KEP to make sure their voice could be heard and the approach was reasonable.
- Kustomize being part of kubectl was mentioned at at least 1 community meeting (I forget which one).
- The motivation for kustomize was to address user-filed issues with using declarative config and kubectl apply. The fixes implemented in kustomize have also been implemented independently in other ecosystem tools such as Spinnaker, Kubepack and (as I understand it) Pulumi.
- kustomize was developed independently as an experiment, but designed to be integrated natively into kubectl
- Is the kustomize functionality meant for all the tools that operate over a collection of Kubernetes objects on disk, or just kubectl? kubectl is not the only tool that operates on collections of objects. I couldn't find this documented. If it's not documented, then the 99%+ of people who aren't looped in will have to come to their own conclusions. Because this came out of SIG CLI (a piece of insider knowledge, I know) I assume it's just kubectl.
- If I'm writing a tool in another language, such as python, what do I need to do to replicate the kustomize functionality over there?
- What do I need to know about cli-runtime when using it in my own custom tools when going forward because of this? Assume a lot of devs don't have the time or desire to read all the code but just want to know the quick and dirty of the API.
Assuming this is just meant for kubectl and not others, this means we now have an inconsistent experience among tools that operate on collections of Kubernetes objects. For example, a custom tool built using the Python client will produce different output from the same objects on disk than kubectl does. What do we think of that experience? How do we intend the experience to be for end users who use many ecosystem tools?
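For readers who haven't followed kustomize, here is a rough sketch of what "the kustomize functionality" means in practice (file, directory, and label names are invented for illustration): kubectl now treats a directory containing a kustomization.yaml as something to be built - bases, overlays, common labels, name prefixes, generated ConfigMaps/Secrets - before the resulting objects are used.

    # kustomization.yaml (illustrative only)
    namePrefix: staging-
    commonLabels:
      app: my-app
    resources:
    - deployment.yaml
    - service.yaml

    # build the directory into plain resource config, or apply it directly
    kubectl kustomize ./my-dir
    kubectl apply -k ./my-dir

A tool that reads the same directory without performing that build step will see different objects than kubectl does.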
Would you be willing to come to a SIG CLI meeting to talk about cli-runtime and plugins?
I would offer that IMHO kubectl is less mature than the rest of Kubernetes (e.g. only just getting a functional diff command; prior to kustomize, no sensible way of applying secrets; pruning is unsafe; etc.), and making quality #1 also means addressing some of the fundamental gaps in functionality.
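To make the secrets point concrete - a sketch only, with invented names and values - kustomize lets you declare a Secret alongside the rest of your config and generates it at build time (with a content-hash suffix on the name, so references roll when it changes):

    # kustomization.yaml (illustrative)
    secretGenerator:
    - name: db-credentials
      literals:
      - username=admin
      - password=not-a-real-password

    kubectl apply -k .    # or: kustomize build . | kubectl apply -f -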
I'd like to know more about your use cases for cli-runtime, and having a tighter feedback loop with its consumers would be fantastic.
Also glad to discuss graduation criteria at sig architecture. Kubectl is probably different from the APIs, as it doesn't have API versioning like Kubernetes APIs do - so it might need its own definition of alpha / beta / ga.
The meaning of graduation here was not well communicated in the KEP, and meant including the experiment in kubectl. I am curious to know more about the process for getting customer feedback on alpha APIs before enabling them by default. Do we require alpha APIs to have seen significant adoption before making them beta?
Maciej,
I do have a bunch of practical questions from my other email. If you have details on how y'all planned to address them I would appreciate knowing...
- Is the kustomize functionality meant for all the tools that operate over a collection of Kubernetes objects on disk, or just kubectl? kubectl is not the only tool that operates on collections of objects. I couldn't find this documented. If it's not documented, then the 99%+ of people who aren't looped in will have to come to their own conclusions. Because this came out of SIG CLI (a piece of insider knowledge, I know) I assume it's just kubectl.
- If I'm writing a tool in another language, such as python, what do I need to do to replicate the kustomize functionality over there?
- What do I need to know about cli-runtime when using it in my own custom tools when going forward because of this? Assume a lot of devs don't have the time or desire to read all the code but just want to know the quick and dirty of the API.
There was also an experience question you may have feedback on...

Assuming this is just meant for kubectl and not others, this means we now have an inconsistent experience among tools that operate on collections of Kubernetes objects. For example, a custom tool built using the Python client will produce different output from the same objects on disk than kubectl does. What do we think of that experience? How do we intend the experience to be for end users who use many ecosystem tools?

Any details on these would be helpful.
Certainly. Did you want to have this discussion in the meeting on the 16th?
1) How do you define maturity and relate it to the rest of Kubernetes? My guess is that this is how we feel about it. Is kubectl mature enough to be production worthy? Of course.
2) "addressing some of fundamental gaps in functionality" ... what gaps in nationality should be filled by kubectl and what gaps should be filled by the ecosystem? If kubectl fills a gap isn't that picking a winner? We don't want to pick too many process winners when so many people use different processes, right?
Kubernetes has been moving to the approach of making itself extensible rather than doing all the things or picking the ones that should be in. For example, that means moving to CRDs and custom controllers rather than new built-in objects and core controllers. Why wouldn't kubectl follow the same conceptual model?
3) When it comes to secrets specifically, many have solved this already in the ecosystem. Is this saying that those methods weren't sensible? I bet that's not what this was saying. Were those ecosystem methods surveyed? It can be difficult to get outside our own heads and methods to look at what others are doing and why.
For example, I've worked on projects where devs and even operators, without special privileges, did not have access to secret information. Instead automation kept it stored in an encrypted manner and injected it at the right points. This was done in the name of security. If devs and ops could not access the information they could not leak it. How can tools like this work in a flow with kubectl or even a declarative method? Some would say the only sensible way to secure secrets is to make sure people cannot access them.
While I can't remember what we did with the workloads API to go from alpha to beta, I do remember that for beta to GA the betas were being used widely in production and were well tested. People were treating them as production quality, were generally happy with that, and they were stable enough.
I am of the opinion that building a solution and picking a winner may be different things.
Matt, do you have a reference to the motivation for extensibility?
For example, I've worked on projects where devs and even operators, without special privileges, did not have access to secret information. Instead automation kept it stored in an encrypted manner and injected it at the right points. This was done in the name of security. If devs and ops could not access the information they could not leak it. How can tools like this work in a flow with kubectl or even a declarative method? Some would say the only sensible way to secure secrets is to make sure people cannot access them.

One possibility is a GitOps driven workflow where kubectl was run by a bot. In that case the devs and operators wouldn't need write permission to the cluster at all.
I think it's worth separating out a few different things...
- For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done)? This is for KEPs, and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomize is just an example, as is the recent Windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think.
- kubectl has been there to exercise the Kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator, exercising the Kubernetes APIs, or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes' relationship to it.
- Then there are the improvements to kubectl and cli-runtime for those who are consuming it. This is squarely a SIG CLI thing and something I chatted with Phil about in a call today. I look forward to talking more with SIG CLI about that and, hopefully, looping in other consumers to give their feedback.
I am of the opinion that building a solution and picking a winner may be different things.

Features beyond flexing the Kubernetes APIs get into opinions. And people have differing ones. If kubectl is the lowest common denominator for flexing the Kubernetes APIs (which it was until recently), then adding features to it is adding someone's opinion on a solution. If we use the Unix philosophy of small things that do one thing well (and we can pipe them together), it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the Unix philosophy anymore, and the features going in will often be one person's or group's view on what is right (vim vs emacs, anyone?). That is where we get the idea of picking a winner. A solution to exercise the Kubernetes API is not a solution to do something else.

What solution is kubectl aiming to be?

Matt, do you have a reference to the motivation for extensibility?

Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.
- For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done)? This is for KEPs, and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomize is just an example, as is the recent Windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think.
- kubectl has been there to exercise the Kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator, exercising the Kubernetes APIs, or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes' relationship to it.
- Then there are the improvements to kubectl and cli-runtime for those who are consuming it. This is squarely a SIG CLI thing and something I chatted with Phil about in a call today. I look forward to talking more with SIG CLI about that and, hopefully, looping in other consumers to give their feedback.
I am of the opinion that building a solution and picking a winner may be different things.

Features beyond flexing the Kubernetes APIs get into opinions. And people have differing ones. If kubectl is the lowest common denominator for flexing the Kubernetes APIs (which it was until recently), then adding features to it is adding someone's opinion on a solution. If we use the Unix philosophy of small things that do one thing well (and we can pipe them together), it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the Unix philosophy anymore, and the features going in will often be one person's or group's view on what is right (vim vs emacs, anyone?). That is where we get the idea of picking a winner. A solution to exercise the Kubernetes API is not a solution to do something else.
...
Matt, do you have a reference to the motivation for extensibility?

Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.
Yes, Brian wrote a comprehensive document exploring the ecosystem.

I'm familiar with this work from Brian. In reference to this conversation, it holds opinions (seen right in the name) on how to do things. When that was being discussed it was apparent that not everyone shared those opinions. How many of whose opinions do we bake in going forward, and how many of these opinions do we leave to the ecosystem?
This is a high-level existential question. For Kubernetes we seem to be deciding that by saying new features happen via extensions. Many things have been told to go the CRD/controller route recently, and things being baked in are exceptions to this with reasons (e.g., additions to existing core objects). Why doesn't this apply to other parts of the Kubernetes project?
...
One possibility is a GitOps driven workflow where kubectl was run by a bot. In that case the devs and operators wouldn't need write permission to the cluster at all.
Why would the bot use kubectl at all? Why wouldn't it just talk with the Kubernetes API directly using a client?
I've chatted with SIG Service Catalog and the reason they went with their own CLI back then was because kubectl and k8s did not allow them to express what they needed back then. Now that we have both server-side printing and plugins, both of the problems they've raised are solved, and from what I've been talking with Carolyn during KubeCon they'll be slowly moving to plugins with svcat. Just to clarify ;-)
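For reference, a sketch of the two mechanisms (the group, kind, and column names here are invented for illustration, not Service Catalog's actual types): server-side printing lets a CRD author declare the columns kubectl get should show, and a plugin is simply an executable named kubectl-<something> on the user's PATH.

    # CRD with additionalPrinterColumns (apiextensions v1beta1 form)
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: widgets
        kind: Widget
      additionalPrinterColumns:
      - name: Phase
        type: string
        JSONPath: .status.phase

    # A plugin is any kubectl-foo executable on PATH; e.g. a kubectl-svcat
    # binary would surface svcat as "kubectl svcat".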
- For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done)? This is for KEPs, and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomize is just an example, as is the recent Windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think.
I look forward to chatting about this. What this should look like as kubectl moves out of kubernetes/kubernetes and onto a separate release cycle will, I think, be interesting.
- kubectl has been there to exercise the Kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator, exercising the Kubernetes APIs, or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes' relationship to it.
Note: kubectl has always been opinionated about how it invokes APIs and provides functionality for manipulating + generating resource config. (See more on this later). This is still the case, it is just a bit more capable in how it is able to manipulate resource config and generate resources.
I am of the opinion that building a solution and picking a winner may be different things.

Features beyond flexing the Kubernetes APIs get into opinions. And people have differing ones. If kubectl is the lowest common denominator for flexing the Kubernetes APIs (which it was until recently), then adding features to it is adding someone's opinion on a solution. If we use the Unix philosophy of small things that do one thing well (and we can pipe them together), it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the Unix philosophy anymore, and the features going in will often be one person's or group's view on what is right (vim vs emacs, anyone?). That is where we get the idea of picking a winner. A solution to exercise the Kubernetes API is not a solution to do something else.

1. This same argument could be made against all of the workload APIs - Deployments, DaemonSets, StatefulSets, CronJobs are all "opinions" around how to create Pods, and we "picked a winner" when we implemented them. ReplicaSet could probably be seen as the "un-opinionated" approach for creating Pods. Providing powerful abstractions in the APIs was the right decision, and we shouldn't be afraid of providing powerful abstractions in our tooling for working with the APIs.
2. I am also curious where you got the notion that kubectl doesn't have opinions - by and large most kubectl commands are opinionated, and this is what differentiates it from a simple CRUD client: kubectl get is opinionated about how it displays objects, kubectl describe is opinionated about how it displays objects, kubectl logs is opinionated about how it allows users to query objects, kubectl edit is an opinionated way of updating an object, kubectl apply is another opinionated way of updating objects, etc. (a small illustration follows point 3).
3. FWIW: We've talked about breaking out a subset of kubectl into another tool that is more focussed on only printing and fetching resources. Is this something you are interested in?
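To illustrate the point in 2 (pod name invented):

    kubectl get pod my-pod             # opinionated, human-oriented columns: READY, STATUS, RESTARTS, AGE
    kubectl get pod my-pod -o json     # the plain CRUD view: the full API object as stored
    kubectl describe pod my-pod        # another opinion: related events folded into the output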
...

Matt, do you have a reference to the motivation for extensibility?

Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.

This supports my early position that kubectl should be focussed on supporting APIs written as extensions (e.g. CRDs) rather than on implementing kubectl itself through a plugin mechanism. This has already been a large part of our focus - e.g. server-side printing, data-driven commands and plugins are all examples of this, as is work being done to fix built-in commands that don't work with extension APIs (e.g. kubectl rollout status). That the Service Catalog issues have already been addressed speaks to SIG CLI having the right focus for kubectl's priorities.
Yes, Brian wrote a comprehensive document exploring the ecosystem.

I'm familiar with this work from Brian. In reference to this conversation, it holds opinions (seen right in the name) on how to do things. When that was being discussed it was apparent that not everyone shared those opinions. How many of whose opinions do we bake in going forward, and how many of these opinions do we leave to the ecosystem?

I am not quite sure how to quantify an answer to this question. As I noted earlier - philosophically, baking in opinions is consistent with how kubectl has been developed, and the opinions we bake into our APIs are a large part of their value.
This is a high-level existential question. For Kubernetes we seem to be deciding that by saying new features happen via extensions. Many things have been told to go the CRD/controller route recently, and things being baked in are exceptions to this with reasons (e.g., additions to existing core objects). Why doesn't this apply to other parts of the Kubernetes project?

I am not sure I understand your question. Would you explain what you mean by "Why doesn't this apply to other parts of the Kubernetes project?"?
One possibility is a GitOps driven workflow where kubectl was run by a bot. In that case the devs and operators wouldn't need write permission to the cluster at all.

Why would the bot use kubectl at all? Why wouldn't it just talk with the Kubernetes API directly using a client?
I don't have a philosophical issue with using the client directly, but it probably wouldn't work well in a workflow where the declared state is checked into some version control system and pushed to the cluster - e.g. today updating resources from declarative config without apply is non-trivial. Server-side apply may alleviate some but not all of the issues here.
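A small illustration of why (resource and file names invented): apply records what it last applied on the object and does a three-way merge between that record, the live object, and your new config, so it can remove fields you dropped from your files without clobbering fields owned by controllers or other clients.

    kubectl apply -f deployment.yaml
    # the record apply keeps for future merges:
    kubectl get deployment my-app -o yaml | grep last-applied-configuration

A bot doing plain create/update calls through a client library would have to rebuild that bookkeeping itself.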
Hi SIG CLI folks. I see that kustomize landed in kubectl.

From this I have three comments/questions I was hoping you could help me with...
2) How did the graduation criteria get decided and how were they leveraged? I looked at KEP 8 in addition to the KEP for this part. But, I have questions about these things.
For example, one graduation criterion is "Publish kustomize as a subcommand of kubectl" and the KEP says it was implemented. But it doesn't appear this happened.
And, anything that implements the cli-runtime package now needs to handle kustomize situations. How do I go about doing that in my own applications that implement the cli-runtime? I couldn't find the docs for this.
For reference, in SIG Architecture we just went through a kerfuffle when dealing with Windows GA. Things like testing and docs were a big deal. Out of that we want to be slow to add new features and be very clear on graduation criteria, including many common things that are on the list of things to figure out. This is why I'm asking about how graduation criteria were chosen and followed through on - to help inform that process.
3) How was the usage graduation criteria chosen for kustomize? For reference, it is documented as:

Dogfood kustomize by either:
- moving one or more of our own (OSS Kubernetes) services to it.
- getting user feedback from one or more mid or large application deployments using kustomize.
From this I could see that the bar is low enough that shifting one experimental k8s project, or one non-production mid-size application, to it would be enough. That seems fairly low to me. Is there any discussion on how this level was chosen?
I'm curious because in previous meetings I was at (and I believe so was Joe Beda) we had talked about the market deciding what is popular and useful to be merged into the core of Kubernetes for features. This is a different direction from those conversations and I am curious how this criteria came about.
4) As someone who will need to implement tools that deal with this via cli-runtime and am not just reading directories from the filesystem, where is it documented what I need to do? I imagine I am not the only one who will have this question.

Note, these 4 items are either to clean up the KEP per the process, questions to inform the GA conversation we're having in SIG Arch, or to help me as an implementer. Please don't feel the need to defend your position in your response to any of these. My only goal is practical moving forward.

I do have an observation as well...

If it has not been communicated, k8s is about adding extension points rather than new features. More often than not, if there is an extension point then the direction is to use that and not to add the feature to k8s. Instead, the goal has been to add clean extension points to enable the ecosystem.
This appears to go against that and I have not been able to find a documented, traceable justification. If one exists I would be curious to read it. Something other than one person's or a small group's opinion.
For example, to follow the extension points concept I might have implemented an internal events mechanism allowing an external plugin to intercept events and take action. kustomize could have been one of many plugins to implement that and this would have then enabled the ecosystem.

Because of the way this was approached and the timing (merged over the holidays), people have had two reactions we should take note of:
- They are upset. One thing was prioritized over competing things, and without a nice documented reason. This kind of upset, coupled with the next point, can breed bad behavior.
- The feeling that this was a back room deal (e.g., the KEP process was not followed through on but "friends" merged things in a holiday season) and that this is the way others should do things, too.
This second reaction is not one I thought of on my own. Someone said it to me and that concerns me.
If you've made it this far, thanks for taking the time to read all these words. My concern here is simply process, culture, and keeping a system that can be healthy and long lived. Sometimes that means thinking things through and talking about hard uncomfortable topics.

- Matt
The API is part of the project. Sadly, arguments about Storage and Networking are not likely to hold in parallel for CLIs. :)
I believe there is a subtext here worth pointing out. Isn't this argument really about helm (i.e., declarative vs. imperative management)?
On Mon, Dec 31, 2018 at 12:23 PM Matt Farina <matt....@gmail.com> wrote:

Hi SIG CLI folks. I see that kustomize landed in kubectl. From this I have three comments/questions I was hoping you could help me with...

Thanks for taking a look.

Yes, that should be fixed. It was missed during review. A KEP linting tool would be useful. It would also be useful to update the Implementation History section.

2) How did the graduation criteria get decided and how were they leveraged? I looked at KEP 8 in addition to the KEP for this part. But, I have questions about these things.

Actually, I'm not sure what "graduation" was intended to mean in this context. I think this KEP was responding to the questions in the template, such as "How will we know that this has succeeded?", rather than "graduation" in the sense of API/implementation maturity. The KEP template probably needs to be clarified.

SIG CLI folks: Is the expected kubectl feature lifecycle documented anywhere? Some features, such as server-side printing, the client-side apply overhaul, and diff, have started as alpha subcommands and/or alpha-prefixed flags. kustomize started as a separate command in 2017 (before plugins existed?).