kustomize in kubectl

824 views
Skip to first unread message

Matt Farina

unread,
Dec 31, 2018, 3:23:49 PM12/31/18
to kubernetes-sig-architecture, kubernetes-sig-cli
Hi SIG CLI folks.  I see the kustomize landed in kubectl.

From this I have three comments/questions I was hoping you could help me with...

1) Can someone please update the KEP for this. In doing so can you please update the metadata to follow the proper values (details here). For example, pending is not a valid status

2) How did the graduation criteria get decided and how were they leveraged? I looked at KEP 8 in addition to the KEP for this part. But, I have questions about these things.

For example, a graduation criteria "Publish kustomize as a subcommand of kubectl" and the KEP says it was implemented. But, it doesn't appear this happened.

And, anything that implements the cli-runtime package now needs handle kustomize situations. How do I go about doing that in my own applications that implement the cli-runtime? I couldn't find the docs for this.

For reference, in SIG Architecture we just went through a kerfuffle when dealing with windows GA. Things like testing and docs were a big deal. Out of that we want to be slow to add new features and be very clear on graduation criteria including many common things that are on the list of things to figure out. This is why I'm asking about how graduation criteria were chosen and followed through on. To help inform that process.

3) How was the usage graduation criteria chosen for kustomize? For reference, it is documented as:

Dogfood kustomize by either:
  • moving one or more of our own (OSS Kubernetes) services to it.
  • getting user feedback from one or more mid or large application deployments using kustomize.
From this I could see that the bar is low enough we could shift one experimental k8s project or one non-production mid application to it as enough. That seems fairly low to me. Is there any discussion on how this level was chosen?

I'm curios because in previous meetings I was there (and I believe so was Joe Beda) and we had talked about the market deciding what is popular and useful to be merged into the core of Kubernetes for features. This is a different direction from those conversations and I am curious how this criteria came about.

4) As someone who will need to implement tools that deal with this via cli-runtime and am not just reading directories from the filesystem, where is it documented what I need to do? I imagine I am not the only one who will have this question.

Note, these 4 items are either to clean up the KEP per the process, questions to inform the GA conversation we're having in SIG Arch, or to help me as an implementer. Please don't feel the need to defend your position in your response to any of these. My only goal is practical moving forward.

I do have an observation as well...

If it has not been communicated, k8s is about adding extension points rather than new features. More often than not, if there is an extension point then the direction is to use that and not to add the feature to k8s. Instead, the goal has been to add clean extension points to enable the ecosystem.

This appears to go against that and I have not been able to find a documented traceable justification. If one exists I would be curios to read it. Something other than one or a small groups opinion.

For example, to follow the extension points concept I might have implemented an internal events mechanism allowing an external plugin to intercept events and take action. kustomize could have been one of many plugins to implement that and this would have then enabled the ecosystem.

Because of the way this was approached and the timing (merged over the holidays), people have had two reactions we should take note of:
  1. They are upset. One thing was prioritized over competing things and without a nice documented reason. The kind of upset coupled with the next point can breed bad behavior
  2. The feeling that this was a back room deal (e.g., the KEP process was not followed through on but "friends" merged things in a holiday season) and that this is the way others should do things, too
This second option is not one I thought of on my own. Someone said it to me and that concerns me.

If you've made it this far, thanks for taking the time to read all these words. My concern here is simply process, culture, and keeping a system that can be healthy and long lived. Sometimes that means thinking things through and talking about hard uncomfortable topics.

- Matt

Brendan Burns

unread,
Dec 31, 2018, 5:54:04 PM12/31/18
to kubernetes-sig-architecture, kubernetes-sig-cli, Matt Farina
Matt,
Many thanks for starting this thread. I think it's very important for the health of the project that we apply the same standards to all pieces, whether they are things like the windows GA or features in kubectl. So getting answers to the process questions you asked are crucial for ensuring transparency and consistency throughout the project.

Additionally, given that kubectl already has a built in plugin system I'm curious why the existing plugin mechanism wasn't used. There's a brief note in the KEP indicating that people thought that it would require multiple commands linked together with a pipe but I don't think that's true. And if it really is the case that it's not possible it seems like the plugin mechanism in kubectl is flawed.

This seems like an important discussion to have in the sig-arch meeting, can we get it in the schedule sometime soon?

Thanks! (And happy New year!)

--brendan


From: kubernetes-si...@googlegroups.com <kubernetes-si...@googlegroups.com> on behalf of Matt Farina <matt....@gmail.com>
Sent: Monday, December 31, 2018 12:23:36 PM
To: kubernetes-sig-architecture; kubernetes-sig-cli
Subject: kustomize in kubectl
 
--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-architecture" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-arch...@googlegroups.com.
To post to this group, send email to kubernetes-si...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-architecture/CAMPAG2pGkmWN361Ch0Rc%3DLpFpxDMxqEyXBymWkZLBU7dGBVLyQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Phillip Wittrock

unread,
Dec 31, 2018, 11:59:53 PM12/31/18
to Brendan Burns, kubernetes-sig-architecture, kubernetes-sig-cli, Matt Farina
(Sorry for the repost Matt and sig-cli - sig-architecture bounced me :P)

Hi Matt,

I was saddened to read your email and sorry that this PR has come as a surprise to you or anyone else.  I'd prefer not to continue this discussion on New Year's Eve, but do want to briefly address some of the concerns in your email.  I'll hold off on sending my detailed response to your email until tomorrow.

Following is my understanding of the process we followed.  Do correct me on anything where I have gotten a detail wrong.
  • The PR for this change was sent Nov 7 and discussed on the PR for over a month before it was merged - this should have been plenty of time for discussion on the PR itself.
  • It was reviewed both by (non-kustomize contributor) SIG CLI maintainers such as Maciej (@soltysh) and by respected community members that contribute outside SIG CLI (non-kustomize) such as Jordan (@liggitt) and Clayton (@smarterclayton) - it wasn't done by "friends" in a back room.
  • The PR was merged December 19th after having addressed the feedback given during its month long.  This was the Tuesday the week before the typical Holiday break.
  • The PR was cc'ed to the sig-cli mailing list on Nov 15 with the title "PR for enabling kustomize in kubectl" - a full month before it was merged - so that it didn't get lost in GitHub notification filters - so anyone subscribed to the sig cli mailing list should have had ample opportunity to weigh in.
  • The PR was discussed at every sig-cli meeting taking place in November and December and recorded in the notes for anyone to look at.
  • Deviations from what was proposed in the KEP were to address feedback from the PR review feedback.
  • The KEP proposal for this was merged in May (7 months ago).  The most vocal skeptic of kustomize (Joe @jbeda) was directly solicited as an approver for this KEP to make sure their voice could be heard and the approach was reasonable.
  • Kustomize being part of kubectl was mentioned at at least 1 community meeting (I forget which one).
  • The motivation for kustomize was to address user filed issues with using declarative config and kubectl apply.  The fixes implemented in kustomize have also been implemented independently in other ecosystem tools such as Spinnaker, Kubepack and (as I understand it) Pulumi.
  • kustomize was developed independently as an experiment, but designed to be integrated natively into kubectl
I would not be surprised if some additional step as part of some formal process was missed by accident, but every attempt was made to do this in the open and build consensus.

Summary:
  • sig-cli / kubectl apply maintainers came up with a solution to address various issues with kubectl apply (e.g. try rolling out a secret with kubectl apply - it doesn't work well)
  • this solution was built as an experiment in another repo with the intent of integrating it into apply after receiving some adoption and validation of the approach
  • 6 months of the experiment started, a KEP defining criteria to integrate it into kubectl was sent for review and approved.
  • more than a year after the start of the experiment a PR was sent to perform the integration
  • more than a month of the PR was sent it was merged
When kustomize was started the new architectural vision for Kubernetes was to build the project across multiple repos - and so the best effort was made to develop in that fashion so that the approach could be fully vetted before being integrated as part of the core.  It seems this is now causing confusion to some folks.

- Phil

You received this message because you are subscribed to the Google Groups "kubernetes-sig-cli" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-...@googlegroups.com.
To post to this group, send email to kubernete...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-cli/CY4PR21MB050411DD64C2D38B35A74E82DBB20%40CY4PR21MB0504.namprd21.prod.outlook.com.

Gerred Dillon

unread,
Jan 1, 2019, 12:14:45 AM1/1/19
to Phillip Wittrock, Brendan Burns, kubernetes-sig-architecture, kubernetes-sig-cli, Matt Farina
Hi all,

Sorry in advance to try to hook into this, but this seems like a good overlap between topicality and stakeholders.

In recognition and anticipation with what's happening with this thread and kustomize, I'd like to add some third-party opinion.

1. I and a few others are happening to be developing tools that use the same packages the kustomize cli tool uses. See:


2. I'm excited about the prospect of a base client side kustomize being directly integrated into kubectl.

What I am concerned about as a user of kustomize packages is the usage going forward. Between the server side apply use case and kustomize, I think there is a standard set of packages that should be available to third party developers for the composition of resources following the Kubernetes object model.

I'd really like it if/when we all talk about this to make sure we start pulling out that composition from the CLI implementation of kustomize. I strongly believe having primitives for composition available as an SDK/package/API will be valuable and kustomize has spearheaded that effort. Beyond that stake, I am neutral to any reference implementation.

Matt Farina

unread,
Jan 2, 2019, 10:52:06 AM1/2/19
to Gerred Dillon, Phillip Wittrock, Brendan Burns, kubernetes-sig-architecture, kubernetes-sig-cli
Happy New Year!

Brendan, can you add a line about the conversation you would like to have to the SIG Arch agenda? I would suggest no sooner than 1/10 so that everyone is back from their holiday breaks.

Phil, thanks for filling in more detail. Please realize that in sending my original email there was no thought of malice. I am only trying to poke at the process so we can get to more consistency and I'm concerned with the real world implications to people.

As for being surprised, I don't think we should be surprised when people are surprised. Less than a 10th of a percent of people involved in Kubernetes attend SIG CLI. Less than a single percent of the Kubernetes community attends the community meeting in a given week. We are lucky to have so many people using, building tools on top of Kubernetes, and contributing to Kubernetes. Because of this, communication is one of the problems being worked on.

This case, the recent windows discussion, and the conversations on criteria to land features in 1.14 (e.g., the potential for requiring upgrade and downgrade tests) highlight that we need a consistent process around KEPs (in this case Graduation Criteria), automation to aide in that, and to follow through on the process.

This case has raised some process questions for me. For example,

1) I read, "Deviations from what was proposed in the KEP were to address feedback from the PR review feedback." Why were these not folded back into an update to the KEP? KEPs should be updated regularly. This is a question (of which I like to ask a lot) rather than any kind of criticism.

2) How did the graduation criteria for kustomize, in the original KEP for it, end up with:
    • Dogfood kustomize by either:
      • moving one or more of our own (OSS Kubernetes) services to it.
      • getting user feedback from one or more mid or large application deployments using kustomize.
    • Publish kustomize as a subcommand of kubectl.
     I understand that the original KEP went in 3 quarters ago. That recent conversations on testing, docs, and other elements should not be retroactively expected.

    But, can anyone imagine something like StatefulSet having a graduation criteria of moving one single application to it?

    How can we be more consistent in our graduation criteria? If anyone can set the bar this low than why wouldn't they? If "Quality is Job 1", to refer to Brian Grants talk at last months contributor summit, how do we consistently have graduation criteria that reflects that while keeping the community honest in following through?

    I'm not saying that kustomize didn't meet a high bar. I'm just noting that the documented bar is very low. No one ever needed to measure a high bar or talk about what that needed to be. How do we change that?

    3) Why wasn't kustomize implemented as a plugin and it used to help expand and improve the plugin system? If we go back more than a year, all the way back to 2017, the "no" to new features was being thrown around and instead the conversation was on extension points to enable others, outside of Kubernetes core dev, to build them. Some notable examples that come to my mind are time zones on CronJob and the Application object as part of the workloads API. The idea here was that everyone would need to carry the weight of these and it was very important to prioritize extensibility rather than putting everything into Kubernetes.

    In 2018 a lot of work was put into extensibility. At an implementation level this typically meant CRDs and controllers. But, it also meant looking at things like the hooks we have (yeah, someone is digging into those).

    How do we choose when to add extensibility vs adding a feature directly?

    For example, back in the July to August timeframe in SIG CLI we talked about plugin extensibility with events. The idea was designed to help tools like the service catalog CLI but could have applied here. It was not picked up on.

    How do we decide when to make something extensible vs adding a feature? While I use this case as an example I'm more interested in having the discussion in general around the decision process. The ecosystem is growing and experimenting. Where do we let the ecosystem come up with multiple ideas, experiment, and let the market decide? Where and how do we decide to land another feature in Kubernetes?

    To move beyond the process elements, I do have practical questions...
    1. Is the kustomize functionality meant for all the tools that operate over a collection of Kubernetes objects on disk or just kubectl? kubectl is not the only tool that operates on collections of objects. I couldn't find this being documented. If it's not documented than the 99%+ of people who aren't looped in will have to come to their own conclusions. Because this came out of SIG CLI (a piece of insider knowledge I know) I assume it's just kubectl.
    2. If I'm writing a tool in another language, such as python, what do I need to do to replicate the kustomize functionality over there?
    3. What do I need to know about cli-runtime when using it in my own custom tools when going forward because of this? Assume a lot of devs don't have the time or desire to read all the code but just want to know the quick and dirty of the API.

    Then there are the experience and architectural questions...
    1. Assuming this is just meant for kubectl and not others, this means we now have an inconsistent experience among tools that operate on collections of kubernetes objects. For example, a custom tool built using the python client will have a different output of the objects on disk from kubectl. What do we think of that experience? How do we intend the experience to be for end users who use many ecosystem tools?
    2. I just spoke about the end users and experience but we don't really talk much about that. Do we know who our end users are and what they want or need? They aren't the same as the people on these mailing lists. How do we help close that gap?

    To come back around to the people problem. This particular case of a feature landing went against the grain of extensibility over features. That leaves me with seeing two things:
    1. People upset that a feature like this landed against the grain. This is an emotional response, especially for those working on competing ideas and processes where kubectl just picked a winner, from people realizing we aren't always letting the market decide.
    2. We are a project with a lot of people from a lot of vendors. This just showed a fast path to landing a feature outside the direction of extensibility. One of the first reactions I heard to this was someone pondering on the idea of how their company could do that, too. People want to game the system. Here we had a low bar on graduation criteria and a feature against the extensibility grain. Others are already eyeing how they can do that, too. How do we stop that?
    To close this out, can we take a moment to appreciate that the reason we are in this position is because kubernetes is popular. This is all a problem of success. Yay to that!

    - Matt


    --
    Matt Farina

    Go in Practice - A book of Recipes for the Go programming language.

    Code Engineered - A blog on cloud, web, and software development.

    Maciej Szulik

    unread,
    Jan 2, 2019, 12:07:04 PM1/2/19
    to Phillip Wittrock, Matt Farina, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    On Tue, Jan 1, 2019 at 5:47 AM Phillip Wittrock <pwit...@google.com> wrote:
    Hi Matt,

    I was saddened to read your email and sorry that this PR has come as a surprise to you or anyone else.  I'd prefer not to continue this discussion on New Year's Eve, but do want to briefly address some of the concerns in your email.  I'll hold off on sending my detailed response to your email until tomorrow.

    Following is my understanding of the process we followed.  Do correct me on anything where I have gotten a detail wrong.
    • The PR for this change was sent Nov 7 and discussed on the PR for over a month before it was merged - this should have been plenty of time for discussion on the PR itself.
    • It was reviewed both by (non-kustomize contributor) SIG CLI maintainers such as Maciej (@soltysh) and by respected community members that contribute outside SIG CLI (non-kustomize) such as Jordan (@liggitt) and Clayton (@smarterclayton) - it wasn't done by "friends" in a back room.
    • The PR was merged December 19th after having addressed the feedback given during its month long.  This was the Tuesday the week before the typical Holiday break.
    • The PR was cc'ed to the sig-cli mailing list on Nov 15 with the title "PR for enabling kustomize in kubectl" - a full month before it was merged - so that it didn't get lost in GitHub notification filters - so anyone subscribed to the sig cli mailing list should have had ample opportunity to weigh in.
    • The PR was discussed at every sig-cli meeting taking place in November and December and recorded in the notes for anyone to look at.
    • Deviations from what was proposed in the KEP were to address feedback from the PR review feedback.
    • The KEP proposal for this was merged in May (7 months ago).  The most vocal skeptic of kustomize (Joe @jbeda) was directly solicited as an approver for this KEP to make sure their voice could be heard and the approach was reasonable.
    • Kustomize being part of kubectl was mentioned at at least 1 community meeting (I forget which one).
    • The motivation for kustomize was to address user filed issues with using declarative config and kubectl apply.  The fixes implemented in kustomize have also been implemented independently in other ecosystem tools such as Spinnaker, Kubepack and (as I understand it) Pulumi.
    • kustomize was developed independently as an experiment, but designed to be integrated natively into kubectl
    Actually the whole process started almost a month earlier. The initial PR [1] implemented the approach described
    in KEP 8 [2], but myself amongst others complained loudly that including this functionality as a subcommand will
    completely destroy the current layout of commands by adding commands such as 'kubectl kustomize edit add resource',
    not to mention it will confuse users by adding another edit, for example.
    That's why original PR was redesigned and we've started very long process of re-evaluating how to integrate kustomize
    into kubectl (what Phil mentioned earlier), including the fact that currently it is explicit opt-in, versus the original idea.
    Also, from my understanding kustomize was meant to be part of kubectl from the beginning, the only question was when,
    that's how I read both KEPs, still.

    Finally, I hear your concerns about both updating KEP and documentation. The process of integrating kustomize into
    kubectl is taking longer than we initially anticipated, and that's why we reserved entire 1.14 release cycle to fully solve
    that. This will include updating KEP, updating cli-runtime documentation explaining how users can benefit from using
    kustomize, as well as updating kubectl docs itself.

    I'll be more than happy to provide further information during sig-arch call not to make this email even longer ;-)

    Matt Farina

    unread,
    Jan 2, 2019, 12:49:14 PM1/2/19
    to Maciej Szulik, Phillip Wittrock, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli

    Maciej,

    I do have a bunch of practical questions from my other email. If you have details on how y'all planned to address them I would appreciate knowing...

    1. Is the kustomize functionality meant for all the tools that operate over a collection of Kubernetes objects on disk or just kubectl? kubectl is not the only tool that operates on collections of objects. I couldn't find this being documented. If it's not documented than the 99%+ of people who aren't looped in will have to come to their own conclusions. Because this came out of SIG CLI (a piece of insider knowledge I know) I assume it's just kubectl.
    2. If I'm writing a tool in another language, such as python, what do I need to do to replicate the kustomize functionality over there?
    3. What do I need to know about cli-runtime when using it in my own custom tools when going forward because of this? Assume a lot of devs don't have the time or desire to read all the code but just want to know the quick and dirty of the API.

    There was also an experience question you may have feedback on...

    Assuming this is just meant for kubectl and not others, this means we now have an inconsistent experience among tools that operate on collections of kubernetes objects. For example, a custom tool built using the python client will have a different output of the objects on disk from kubectl. What do we think of that experience? How do we intend the experience to be for end users who use many ecosystem tools?

     Any details on these would be helpful.

    Thanks,
    Matt

    Phillip Wittrock

    unread,
    Jan 2, 2019, 5:24:02 PM1/2/19
    to Matt Farina, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    Hey Matt,

    I definitely appreciate you reaching out and making sure this discussion happens.  Discussing further F2F / VC SGTM.  

    Would you be will to come to sig cli meeting to talk about cli-runtime and plugins?  Both would probably be easier than going back and forth on this thread.  We have a lot of stuff we are excited about w.r.t. extensibility and scaling the project (outside plugins).  There also a number of challenges to building kubectl itself as plugins (though I agree with this general direction).  I would offer that IMHO kubectl is less mature than the rest of Kubernetes (e.g. only just getting a function diff command, prior to kustomize - no sensible way of applying secrets, pruning is unsafe, etc), and making quality #1 also means addressing some of fundamental gaps in functionality.  I'd like to know more about your use cases for sig-runtime, and having a tighter feedback loop with its consumers would be fantastic.

    Also glad to discuss graduation criteria at sig architecture.  Kubectl is probably different from the APIs, as it doesn't have API versioning like Kubernetes APIs do - so it might need its own definition of alpha / beta / ga.  The meaning of graduation here was not well communicated in the KEP, and meant including the experiment in kubectl.  I am curious to know more about the process for getting customer feedback on alpha APIs before enabling them by default.  Do we require alpha APIs to have seen significant adoption before making them beta?

    See you soon,
    Phil

    Matt Farina

    unread,
    Jan 3, 2019, 10:18:05 AM1/3/19
    to Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    Would you be will to come to sig cli meeting to talk about cli-runtime and plugins? 

    Certainly. Did you want to have this discussion in the meeting on the 16th?

    I would offer that IMHO kubectl is less mature than the rest of Kubernetes (e.g. only just getting a function diff command, prior to kustomize - no sensible way of applying secrets, pruning is unsafe, etc), and making quality #1 also means addressing some of fundamental gaps in functionality. 

    This raises a few questions:

    1) How do you define maturity and relate it to the rest of Kubernetes? My guess is that this is how we feel about it. Is kubectl mature enough to be production worthy? Of course.

    2) "addressing some of fundamental gaps in functionality" ... what gaps in nationality should be filled by kubectl and what gaps should be filled by the ecosystem? If kubectl fills a gap isn't that picking a winner? We don't want to pick too many process winners when so many people use different processes, right?

    Kubernetes has been moving to the approach of making itself extensible rather than doing all the things or picking the ones that should be in. For example, that means moving to CRDs and custom controllers rather than new built-in objects and core controllers. Why wouldn't kubectl follow the same conceptual model?

    3) When it comes to secrets specifically, many have solved this already in the ecosystem. Is this saying that those methods weren't sensible? I bet that's not what this was saying. Were those ecosystem methods surveyed? It can be difficult to get outside our own heads and methods to look at what others are doing and why.

    For example, I've worked on projects where devs and even operators, without special privileges, did not have access to secret information. Instead automation kept it stored in an encrypted manner and injected it at the right points. This was done in the name of security. If devs and ops could not access the information they could not leak it. How can tools like this work in a flow with kubectl or even a declarative method? Some would say the only sensible way to secure secrets is to make sure people cannot access them.

    I just use this example to show another perspective. One that kustomize in kubectl does not solve for by itself.
     
    I'd like to know more about your use cases for sig-runtime, and having a tighter feedback loop with its consumers would be fantastic.

    I would suggest talking with others more than me. Gerrad pointed to maestro and ship projects earlier in the thread. The devs who work on those would be good people to talk with as well. While I am happy to share, I think it's important to go beyond those of us who are loud and long winded.

    Also glad to discuss graduation criteria at sig architecture.  Kubectl is probably different from the APIs, as it doesn't have API versioning like Kubernetes APIs do - so it might need its own definition of alpha / beta / ga.

    I remember when the docker cli changed some flags, some years ago, and devs who build on top of it were furious. I've heard many people complain at the rate we iterate on the public APIs to things in staging because of all the extra work it causes the ecosystem every quarter. Many people build on top of kubectl meaning the APIs (e.g., the cli flags and commands) are important to them. When those change it impacts them. We should take the API versioning seriously  and consider what an API is for whom.

    The meaning of graduation here was not well communicated in the KEP, and meant including the experiment in kubectl.  I am curious to know more about the process for getting customer feedback on alpha APIs before enabling them by default.  Do we require alpha APIs to have seen significant adoption before making them beta?

    I completely agree on the graduation criteria issues. You aren't the first person to point this out which is part of the reason I'm hammering on it. We can do better.

    While I can't remember what we did with the workloads API to go from alpha to beta I do remember that for beta to GA the betas were being used widely for production and well tested. People were treating them as production quality and generally happy with that and they were stable enough.

    Cheers,
    Matt

    Maciej Szulik

    unread,
    Jan 3, 2019, 11:31:10 AM1/3/19
    to Matt Farina, Phillip Wittrock, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    On Wed, Jan 2, 2019 at 6:49 PM Matt Farina <matt....@gmail.com> wrote:

    Maciej,

    I do have a bunch of practical questions from my other email. If you have details on how y'all planned to address them I would appreciate knowing...

    1. Is the kustomize functionality meant for all the tools that operate over a collection of Kubernetes objects on disk or just kubectl? kubectl is not the only tool that operates on collections of objects. I couldn't find this being documented. If it's not documented than the 99%+ of people who aren't looped in will have to come to their own conclusions. Because this came out of SIG CLI (a piece of insider knowledge I know) I assume it's just kubectl.
    Since kustomize will be part of cli-runtime it's meant for consumption by all cli-runtime consumers. Don't forget that kustomize functionality
    is an optional functionality, for backwards compatibility reasons (sort of API stability) if you don't point to kustomize file explicitly the
    resource builder where kustomize is injected will behave as it was before.
     
    1. If I'm writing a tool in another language, such as python, what do I need to do to replicate the kustomize functionality over there?
    That is a broader problem with  what is supported in different programming languages. cli-runtime from start was and is a building block for kubectl
    thus it's in go, we don't and most probably won't provide similar functionality in other languages, sorry :/ This is similar to client libraries,
    although for client there's a bigger interest in supporting multiple languages. I don't want to go into detail with that, that topic itself deserves
    a separate thread.
     
    1. What do I need to know about cli-runtime when using it in my own custom tools when going forward because of this? Assume a lot of devs don't have the time or desire to read all the code but just want to know the quick and dirty of the API.
    cli-runtime lacks docs in general, even though we've moved it to a separate repository we never said it's production ready,
    but rather a alpha library, until we'll be happy with the general tooling it provides. kubectl itself still is going through a lot
    of internal refactorings which hopefully should allow us to answer the question what is useful for other cli implementators
    as well as to extract it to its own repository.
     
    There was also an experience question you may have feedback on...

    Assuming this is just meant for kubectl and not others, this means we now have an inconsistent experience among tools that operate on collections of kubernetes objects. For example, a custom tool built using the python client will have a different output of the objects on disk from kubectl. What do we think of that experience? How do we intend the experience to be for end users who use many ecosystem tools?

     Any details on these would be helpful.

    HTH,
    Maciej

    Phillip Wittrock

    unread,
    Jan 3, 2019, 1:28:54 PM1/3/19
    to Matt Farina, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli

    Certainly. Did you want to have this discussion in the meeting on the 16th?

    SGTM
     
    1) How do you define maturity and relate it to the rest of Kubernetes? My guess is that this is how we feel about it. Is kubectl mature enough to be production worthy? Of course.

    I think it needs to be defined separately.  Kubectl has made a lot of progress towards moving out of k/k.  I expect within the next few releases it will be able to be released independently, and so may require its own definitions of maturity.
     
    2) "addressing some of fundamental gaps in functionality" ... what gaps in nationality should be filled by kubectl and what gaps should be filled by the ecosystem? If kubectl fills a gap isn't that picking a winner? We don't want to pick too many process winners when so many people use different processes, right?

    I am of the opinion that building a solution and picking a winner may be different things.  I think we need to be careful about saying we won't solve a particular tooling problem because an external project has already tried to solve it.
     
    Kubernetes has been moving to the approach of making itself extensible rather than doing all the things or picking the ones that should be in. For example, that means moving to CRDs and custom controllers rather than new built-in objects and core controllers. Why wouldn't kubectl follow the same conceptual model?

    Matt do you have a reference to the motivation for extensibility?  The focus for kubectl has been to 1) support extension apis / version skewed apis and 2) to move kubectl out of kubernetes/kubernetes and onto a separate release cycle.  These may meet the goals of extensibility more than building kubectl itself as a pluggable system.
     
    3) When it comes to secrets specifically, many have solved this already in the ecosystem. Is this saying that those methods weren't sensible? I bet that's not what this was saying. Were those ecosystem methods surveyed? It can be difficult to get outside our own heads and methods to look at what others are doing and why.

    Yes, Brian wrote a comprehensive document exploring the ecosystem.
     
    For example, I've worked on projects where devs and even operators, without special privileges, did not have access to secret information. Instead automation kept it stored in an encrypted manner and injected it at the right points. This was done in the name of security. If devs and ops could not access the information they could not leak it. How can tools like this work in a flow with kubectl or even a declarative method? Some would say the only sensible way to secure secrets is to make sure people cannot access them.

    One possibility is a GitOps driven workflow where kubectl was run by a bot.  In that case the devs and operators wouldn't need write permission to the cluster at all.
     
    While I can't remember what we did with the workloads API to go from alpha to beta I do remember that for beta to GA the betas were being used widely for production and well tested. People were treating them as production quality and generally happy with that and they were stable enough.

    This makes sense.  +1 to providing stronger guidance on this.  Guidance on alpha -> beta would also be great.
     

    Matt Farina

    unread,
    Jan 3, 2019, 3:32:15 PM1/3/19
    to Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    I think it's worth separating out a few different things...
    1. For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done). This is for KEPs and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomaize is just an example as is the recent windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think
    2. kubectl has been there to exercise the kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator by exercising the kubernetes APIs or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes relationship to it
    3. Then there are the improvements to kubectl and cli-runtime for those who are consuming it. This is squarely a SIG CLI thing and something I chatted with Phil about in a call today. I look forward to talking more with SIG CLI about that and, hopefully, looping in other consumers to give their feedback.

    I am of the opinion that building a solution and picking a winner may be different things.

    Features beyond flexing the kubernetes APIs get into opinions. And, people have differing ones. If kubectl is the lowest common denominator for flexing the kubernetes APIs (which is was until recently) than adding features to it is adding someone's opinion on a solution. If we use the unix philosophy of small things that do one thing well (and we can pipe them together) it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the unix philosophy anymore and the features going in will often be one person or groups view on what is right (vim vs emacs anyone?). That is where we get the idea of picking a winner. A solution to exercise the kubernetes api is not a solution to do something else.

    What solution is kubectl aiming to be?

    Matt do you have a reference to the motivation for extensibility?

    Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.

    Yes, Brian wrote a comprehensive document exploring the ecosystem.

    I'm familiar with this work from Brian. In reference to this conversation, it holds opinions (seen right in the name) on how to do things. When that was being discussed it was apparent that not everyone shared those opinions. How many of whose opinions do we bake in going forward and how many of these opinions to we leave to the ecosystem?

    This is a high level existential question. For Kubernetes we seem to be deciding that by saying new features happen via extensions. Many things have been told to go the CRD/controller route recently and things being backed in are exceptions to this with reasons (e.g., additions to existing core objects). Why doesn't this apply to other parts of the Kubernetes project?

    When we were scoping SIG Apps (including with the charter) we tried to grapple with this. SIG Apps works for existing things (e.g., workloads APIs) and then areas about ecosystem tool interoperability around apps. We need to identify interoperability issue and they go after solutions with a focus on enabling the ecosystem.

    For example, I've worked on projects where devs and even operators, without special privileges, did not have access to secret information. Instead automation kept it stored in an encrypted manner and injected it at the right points. This was done in the name of security. If devs and ops could not access the information they could not leak it. How can tools like this work in a flow with kubectl or even a declarative method? Some would say the only sensible way to secure secrets is to make sure people cannot access them.

    One possibility is a GitOps driven workflow where kubectl was run by a bot.  In that case the devs and operators wouldn't need write permission to the cluster at all.

    Why would the bot use kubectl at all? Why wouldn't it just talk with the Kubernetes API directly using a client?

    In this model the bot isn't declaring something. There is no object written to disk that's getting patched, is there? It's likely getting something out of a data store (the secret and information about where to put it) then talking with the API.

    Just giving another example to illustrate what people can do. I've seen this model in action for infrastructure in the past.

    - Matt

    Maciej Szulik

    unread,
    Jan 4, 2019, 6:58:58 AM1/4/19
    to Matt Farina, Phillip Wittrock, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    On Thu, Jan 3, 2019 at 9:32 PM Matt Farina <matt....@gmail.com> wrote:
    I think it's worth separating out a few different things...
    1. For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done). This is for KEPs and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomaize is just an example as is the recent windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think
    2. kubectl has been there to exercise the kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator by exercising the kubernetes APIs or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes relationship to it
    3. Then there are the improvements to kubectl and cli-runtime for those who are consuming it. This is squarely a SIG CLI thing and something I chatted with Phil about in a call today. I look forward to talking more with SIG CLI about that and, hopefully, looping in other consumers to give their feedback.

    I am of the opinion that building a solution and picking a winner may be different things.

    Features beyond flexing the kubernetes APIs get into opinions. And, people have differing ones. If kubectl is the lowest common denominator for flexing the kubernetes APIs (which is was until recently) than adding features to it is adding someone's opinion on a solution. If we use the unix philosophy of small things that do one thing well (and we can pipe them together) it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the unix philosophy anymore and the features going in will often be one person or groups view on what is right (vim vs emacs anyone?). That is where we get the idea of picking a winner. A solution to exercise the kubernetes api is not a solution to do something else.

    What solution is kubectl aiming to be?

    Matt do you have a reference to the motivation for extensibility?

    Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.

    I've chatted with SIG Service Catalog and the reason they went with their own CLI back then was because kubectl and k8s
    did not allow them to express what they needed back then. Now, that we have both server-side printing and plugins both of
    the problems they've raised are solved and from what I've been talking with Carolyn during KubeCon they'll be slowly moving
    to plugins with svcat. Just to clarify ;-)

    Phillip Wittrock

    unread,
    Jan 5, 2019, 1:16:07 AM1/5/19
    to Matt Farina, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    1. For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done). This is for KEPs and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomaize is just an example as is the recent windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think
    I look forward to chatting about this.  What this should look like as kubectl moves out of kubernetes/kubernetes and on to a separate release cycle I think will be interesting.
    1. kubectl has been there to exercise the kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator by exercising the kubernetes APIs or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes relationship to it
    Note: kubectl has always been opinionated about how it invokes APIs and provides functionality for manipulating + generating resource config.  (See more on this later).  This is still the case, it is just a bit more capable in how it is able to manipulate resource config and generate resources.
     
    1. Then there are the improvements to kubectl and cli-runtime for those who are consuming it. This is squarely a SIG CLI thing and something I chatted with Phil about in a call today. I look forward to talking more with SIG CLI about that and, hopefully, looping in other consumers to give their feedback.
    +1
     

    I am of the opinion that building a solution and picking a winner may be different things.

    Features beyond flexing the kubernetes APIs get into opinions. And, people have differing ones. If kubectl is the lowest common denominator for flexing the kubernetes APIs (which is was until recently) than adding features to it is adding someone's opinion on a solution. If we use the unix philosophy of small things that do one thing well (and we can pipe them together) it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the unix philosophy anymore and the features going in will often be one person or groups view on what is right (vim vs emacs anyone?). That is where we get the idea of picking a winner. A solution to exercise the kubernetes api is not a solution to do something else.

    1. This same argument could be made against all of the workload APIs - Deployments , DaemonSets, StatefulSets, CronJobs - are all "opinions" around how to create Pods and we "picked a winner" when we implemented them.  ReplicaSet could probably be seen as the "un-opinionated" approach for creating Pods.  Providing powerful abstractions in the APIs was the right decision, and we shouldn't be afraid of providing powerful abstractions in our tooling for working with the APIs.

    2. I am also curious where you got the notion that kubectl doesn't have opinions - by and large most kubectl commands are opinionated and this is what differentiates it from a simple CRUD client - kubectl get is opinionated about how it displays objects, kubectl describe is opinionated about how it displays objects, kubectl logs is opinionated about how it allows users to query objects, kubectl edit is an opinionated way of updating an object, kubectl apply is another opinionated way of updating objects, etc.

    3.  FWIW: We've talked about breaking out a subset of kubectl into another tool that is more focussed on only printing and fetching resources.  Is this something you are interested in?
     
    ...
    Matt do you have a reference to the motivation for extensibility?

    Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.

    This supports my early position that kubectl should be focussed on supporting APIs written as extensions (e.g. CRDs) rather than on implementing kubectl itself through a plugin mechanism.  This has already been a large part of our focus - e.g. server-side printing, data-driven commands and plugins are all examples of this; as is work being done to fix built-in commands that don't work with extensions APIs (e.g. kubectl rollout status).  That the Service Catalog issues have already been addressed speaks to SIG CLI having the right focus for kubectl's priorities.
     
    Yes, Brian wrote a comprehensive document exploring the ecosystem.

    I'm familiar with this work from Brian. In reference to this conversation, it holds opinions (seen right in the name) on how to do things. When that was being discussed it was apparent that not everyone shared those opinions. How many of whose opinions do we bake in going forward and how many of these opinions to we leave to the ecosystem?

    I am not quite sure how to quantify an answer to this question.  As I noted earlier - philosophically, baking in opinions is consistent with how kubectl has been developed and the opinions we bake into our APIs are a large part of their value.
     
    This is a high level existential question. For Kubernetes we seem to be deciding that by saying new features happen via extensions. Many things have been told to go the CRD/controller route recently and things being backed in are exceptions to this with reasons (e.g., additions to existing core objects). Why doesn't this apply to other parts of the Kubernetes project?

    I am not sure I understand your question.  Would you explain what you mean by this "Why doesn't this apply to other parts of the Kubernetes project?"?
     
    ...

    One possibility is a GitOps driven workflow where kubectl was run by a bot.  In that case the devs and operators wouldn't need write permission to the cluster at all.

    Why would the bot use kubectl at all? Why wouldn't it just talk with the Kubernetes API directly using a client?

    I don't have a philosophical issue with using the client directly, but it probably wouldn't work well in a workflow where the declared state was checked into some version controller system and pushed to the cluster - e.g. today updating resources from declarative config without apply is non-trivial.  Server-side apply may alleviate some but not all of the issues here.
     

    Matt Farina

    unread,
    Jan 7, 2019, 1:37:32 PM1/7/19
    to Maciej Szulik, Phillip Wittrock, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli

    I've chatted with SIG Service Catalog and the reason they went with their own CLI back then was because kubectl and k8s
    did not allow them to express what they needed back then. Now, that we have both server-side printing and plugins both of
    the problems they've raised are solved and from what I've been talking with Carolyn during KubeCon they'll be slowly moving
    to plugins with svcat. Just to clarify ;-)
     

    Thanks fantastic to hear. Thanks for clarifying.

    Matt Farina

    unread,
    Jan 7, 2019, 2:09:38 PM1/7/19
    to Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    1. For Kubernetes, what is the Graduation Criteria (a.k.a., the definition of done). This is for KEPs and Aaron has already added something to the SIG Arch calendar to talk more about this in relation to releases. I think we can continue to talk about that there. kubectl + kustomaize is just an example as is the recent windows conversation. This is on SIG Arch to dig into rather than SIG CLI or SIG Windows, I think
    I look forward to chatting about this.  What this should look like as kubectl moves out of kubernetes/kubernetes and on to a separate release cycle I think will be interesting.

    I had not thought about things on different release cycles. Do things with different release cycles from kubernetes/kubernetes have different needs for graduation criteria? I'll be curious to see what we hash out.
    1. kubectl has been there to exercise the kubernetes APIs. Now we are looking at it being more. As an existential question, should kubectl be the simplest common denominator by exercising the kubernetes APIs or should it be more? I would like to see SIG Arch, SIG PM, and SIG CLI engage on this with the ecosystem. I find that I'm asking to dig into this existential question because it has an impact on the ecosystem and Kubernetes relationship to it
    Note: kubectl has always been opinionated about how it invokes APIs and provides functionality for manipulating + generating resource config.  (See more on this later).  This is still the case, it is just a bit more capable in how it is able to manipulate resource config and generate resources.

    Here we can get into a slippery slope. When it comes to opinions, whose opinions go in? Can anyone show up and start contributing opinions? Are the choices of opinions metrics driven or personal preference driven?

    Oh, and should we leave the opinions to the ecosystem? How many more opinions should K8s decide on or should we push to have those in the ecosystem so people have have opinions that don't always agree?
     
     

    I am of the opinion that building a solution and picking a winner may be different things.

    Features beyond flexing the kubernetes APIs get into opinions. And, people have differing ones. If kubectl is the lowest common denominator for flexing the kubernetes APIs (which is was until recently) than adding features to it is adding someone's opinion on a solution. If we use the unix philosophy of small things that do one thing well (and we can pipe them together) it would be different tools, that can come from different places, to handle these different features. Baking more into kubectl than API exercising means it's not the unix philosophy anymore and the features going in will often be one person or groups view on what is right (vim vs emacs anyone?). That is where we get the idea of picking a winner. A solution to exercise the kubernetes api is not a solution to do something else.

    1. This same argument could be made against all of the workload APIs - Deployments , DaemonSets, StatefulSets, CronJobs - are all "opinions" around how to create Pods and we "picked a winner" when we implemented them.  ReplicaSet could probably be seen as the "un-opinionated" approach for creating Pods.  Providing powerful abstractions in the APIs was the right decision, and we shouldn't be afraid of providing powerful abstractions in our tooling for working with the APIs.

    The scope for new things is not what it used to be. There was a shift from adding new features in Kubernetes to adding extensibility and new features happening in the ecosystem. Where we need a common API SIGs can tend to work on it. For example, the storage volumes API was implemented as a CRD/controller that's an add-on rather than a core object.

    This is different from how we used to approach things and different from how things were for the workloads API.

    In the past year there were a lot of "not in core, go do it in the ecosystem" responses to new features. These would have gone in the core the years before that.

    2. I am also curious where you got the notion that kubectl doesn't have opinions - by and large most kubectl commands are opinionated and this is what differentiates it from a simple CRUD client - kubectl get is opinionated about how it displays objects, kubectl describe is opinionated about how it displays objects, kubectl logs is opinionated about how it allows users to query objects, kubectl edit is an opinionated way of updating an object, kubectl apply is another opinionated way of updating objects, etc.

    You are very right that kubectl has opinions. Those opinions deal with the way it flexes the k8s APIs. Aren't all interactions with an API somewhat opinionated, even if they are simple CRUD operations?

    I'm sorry I didn't go into enough detail on what I meant about opinions. Here is a little more details.

    We often store the document passed over the k8s APIs in other places (e.g., git). Sometimes this is the source of truth. Other times these outside documents are generated. In these cases they may not be the source of truth. How these documents are created, managed, and so forth can be opinionated and people don't all agree. That's OK. In fact, by enabling different opinions we open the door to opportunities to innovate.

    These documents are passed over the API.

    When it comes to opinions on these documents passed over the API, kubectl has been very light on opinions. Something like Helm has been more opinionated.

    3.  FWIW: We've talked about breaking out a subset of kubectl into another tool that is more focussed on only printing and fetching resources.  Is this something you are interested in?
     
    I and a lot of other people would be interested in this.

    ...
    Matt do you have a reference to the motivation for extensibility?

    Here is another example that came from SIG Service Catalog. They have their own CLI so they can display information with meaning to it. To print things. In their case they would want kubectl to print their custom objects in a way that represents their meaning. This may hold true for things that implement custom API servers (like service catalog) or CRs/CRDs. I use this as an example because in 2018 SIG Service Catalog asked for this of SIG CLI.

    This supports my early position that kubectl should be focussed on supporting APIs written as extensions (e.g. CRDs) rather than on implementing kubectl itself through a plugin mechanism.  This has already been a large part of our focus - e.g. server-side printing, data-driven commands and plugins are all examples of this; as is work being done to fix built-in commands that don't work with extensions APIs (e.g. kubectl rollout status).  That the Service Catalog issues have already been addressed speaks to SIG CLI having the right focus for kubectl's priorities.
     
    As long as the extensibility is there and real world problem are being solved it sounds good to me.

    For example, server side printing handling i18n printing. Because we have many people in other counties (e.g., Korea and China) doing k8s.

    Yes, Brian wrote a comprehensive document exploring the ecosystem.

    I'm familiar with this work from Brian. In reference to this conversation, it holds opinions (seen right in the name) on how to do things. When that was being discussed it was apparent that not everyone shared those opinions. How many of whose opinions do we bake in going forward and how many of these opinions to we leave to the ecosystem?

    I am not quite sure how to quantify an answer to this question.  As I noted earlier - philosophically, baking in opinions is consistent with how kubectl has been developed and the opinions we bake into our APIs are a large part of their value.
     
    Modifying documents passed over APIs is different from interacting with the APIs. Interacting with the APIs is in scope for SIG CLI and kubectl. I don't see workflows for working with the documents outside of k8s being in scope.

    Since people have differing opinions and the ecosystem is already diverging on different ways to handle this, why would it be a good idea for SIG CLI to take on new scope and add those features to compete with the ecosystem?

    While I ask this of SIG CLI it is pertinent to the state of Kubernetes which has shifted to focus on extensibility to enable others to add features rather than doing so itself. As a project, we should be consistent and communicate it with the outside world so they know what to expect.

    This is a high level existential question. For Kubernetes we seem to be deciding that by saying new features happen via extensions. Many things have been told to go the CRD/controller route recently and things being backed in are exceptions to this with reasons (e.g., additions to existing core objects). Why doesn't this apply to other parts of the Kubernetes project?

    I am not sure I understand your question.  Would you explain what you mean by this "Why doesn't this apply to other parts of the Kubernetes project?"?
     
    I'm saying that new features to Kubernetes have had to come through CRDs, controllers, and other extension points. This is the consistent answer in the past year. Why would other things, like kubectl or minikube, that fall in the kubernetes project have different guidance? Shouldn't they too add features through extension points rather than baking them in by default? That the answer is as an extension unless it's an exception with a well documented and justified reason it HAS to be built in?

    One possibility is a GitOps driven workflow where kubectl was run by a bot.  In that case the devs and operators wouldn't need write permission to the cluster at all.

    Why would the bot use kubectl at all? Why wouldn't it just talk with the Kubernetes API directly using a client?

    I don't have a philosophical issue with using the client directly, but it probably wouldn't work well in a workflow where the declared state was checked into some version controller system and pushed to the cluster - e.g. today updating resources from declarative config without apply is non-trivial.  Server-side apply may alleviate some but not all of the issues here.
     
    Why is this a problem that is in the scope of SIG CLI, kubectl, or kubernetes as a whole to solve? Why is this not an ecosystem problem since it has to do with workflows outside of kubernetes that vary from organization to organization?


    --
    Matt Farina

    Tim Hockin

    unread,
    Jan 7, 2019, 2:54:01 PM1/7/19
    to Matt Farina, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    Without commenting on the rest of the thread, as I have not been
    involved in the design of kustomize, I need to clarify something:

    On Mon, Jan 7, 2019 at 11:09 AM Matt Farina <matt....@gmail.com> wrote:

    > The scope for new things is not what it used to be. There was a shift from adding new features in Kubernetes to adding extensibility and new features happening in the ecosystem. Where we need a common API SIGs can tend to work on it. For example, the storage volumes API was implemented as a CRD/controller that's an add-on rather than a core object.

    https://speakerdeck.com/thockin/crds-arent-just-for-add-ons

    I assume you mean the Snapshots API. It is using extension mechanisms
    (CRDs, etc) because that is the direction we want for all APIs. That
    doesn't make it any less part of the project. It is *the* Kubernetes
    API for volume snapshots (an opinion). There may be 3rd-party
    alternatives, but this one is "core".

    "Uses extension mechanisms" and "is part of the core project" are
    largely orthogonal.

    > This is different from how we used to approach things and different from how things were for the workloads API.
    >
    > In the past year there were a lot of "not in core, go do it in the ecosystem" responses to new features. These would have gone in the core the years before that.

    Disagree. Years before, a many of those things would have just been told "no".

    > I'm saying that new features to Kubernetes have had to come through CRDs, controllers, and other extension points. This is the consistent answer in the past year. Why would other things, like kubectl or minikube, that fall in the kubernetes project have different guidance? Shouldn't they too add features through extension points rather than baking them in by default? That the answer is as an extension unless it's an exception with a well documented and justified reason it HAS to be built in?

    There's no room for dogma here. IF the extension points are
    full-featured enough, using them is generally a good thing, IMO. Many
    things could not be CRDs because CRDs were not full-featured enough at
    the time. Some things still can't be CRDs (e.g. Lease).

    Again, I offer this as clarification without stating an opinion on the
    topic under consideration.

    Matt Farina

    unread,
    Jan 7, 2019, 3:37:54 PM1/7/19
    to Tim Hockin, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    Tim, thanks for jumping in to clarify.

    If I understand volume snapshots correctly you need to install the CRD (here's the repo for that). Then you need a CSI driver (out of tree) that implements reacting to the CRD (it's a 3 part system with a VolumeSnapshotClass similar to PV/PVC/SC). The CRD and VSC provides an opt-in API and different controllers can implement the interface. All of which are out of tree from k8s/k8s.

    This makes it opt-in, right? Not just being developed using CRDs even the way we want them for core things.

    I could be missing something and wrong on this. Please correct me if I am.

    This extension nature is what I was getting at. We can opt-in to the interface. Then multiple ecosystem projects can implement it. We can opt-in to those. Even if the CRDs are shipped installed by default that doesn't mean any controllers or VolumeSnapshotClass is there out of the box by what we ship.

    I'm not really all that up on storage so I could be missing some aspect.

    - Matt


    Brian Grant

    unread,
    Jan 7, 2019, 3:41:50 PM1/7/19
    to Matt Farina, kubernetes-sig-architecture, kubernetes-sig-cli
    On Mon, Dec 31, 2018 at 12:23 PM Matt Farina <matt....@gmail.com> wrote:
    Hi SIG CLI folks.  I see the kustomize landed in kubectl.

    From this I have three comments/questions I was hoping you could help me with...

    Thanks for taking a look. 

    1) Can someone please update the KEP for this. In doing so can you please update the metadata to follow the proper values (details here). For example, pending is not a valid status

    Yes, that should be fixed. It was missed during review. A KEP linting tool would be useful. 

    It would also be useful to update the Implementation History section.

    2) How did the graduation criteria get decided and how were they leveraged? I looked at KEP 8 in addition to the KEP for this part. But, I have questions about these things.

    Actually, I'm not sure what "graduation" was intended to mean in this context. I think this KEP was responding to the questions in the template, such as "How will we know that this has succeeded?," rather than "graduation" in the sense of API/implementation maturity. The KEP template probably needs to be clarified. 

    SIG CLI folks: Is the expected kubectl feature lifecycle documented anywhere? Some features, such as server-side printing, the client-side apply overhaul, and diff, have started as alpha subcommands and/or alpha-prefixed flags. kustomize started as a separate command in 2017 (before plugins existed?).

    The deprecation policy is the one place I'm aware of where it's discussed:

    The kubectl conventions doc is one possible place to document this:
    https://github.com/kubernetes/community/blob/master/contributors/devel/kubectl-conventions.md


    For example, a graduation criteria "Publish kustomize as a subcommand of kubectl" and the KEP says it was implemented. But, it doesn't appear this happened.

    As I think others commented, that bullet was obsolete and should be removed.


    And, anything that implements the cli-runtime package now needs handle kustomize situations. How do I go about doing that in my own applications that implement the cli-runtime? I couldn't find the docs for this.

    What do you mean by "implements the cli-runtime package"?

    Do you mean re-implements? Or just uses?

    The cli-runtime readme says:
    "[Do not] Expect compatibility. This repo is direct support of Kubernetes and the API isn't yet stable enough for API guarantees."

    I agree that in order to be ready for consumption it needs documentation, and a versioning scheme and specified maturity levels, ideally not totally different from our other client libraries.


    For reference, in SIG Architecture we just went through a kerfuffle when dealing with windows GA. Things like testing and docs were a big deal. Out of that we want to be slow to add new features and be very clear on graduation criteria including many common things that are on the list of things to figure out. This is why I'm asking about how graduation criteria were chosen and followed through on. To help inform that process.

    We don't have a tracking board for things that aren't KEPs, API reviews, or conformance tests, but defining a quality bar, both generally and for different maturity levels, is one of SIG Architecture's near-term priorities. 

    On the documentation topic specifically, I believe there is some discussion of integrating kustomize.io into kubernetes.io.

    3) How was the usage graduation criteria chosen for kustomize? For reference, it is documented as:

    Dogfood kustomize by either:
    • moving one or more of our own (OSS Kubernetes) services to it.
    • getting user feedback from one or more mid or large application deployments using kustomize.
    From this I could see that the bar is low enough we could shift one experimental k8s project or one non-production mid application to it as enough. That seems fairly low to me. Is there any discussion on how this level was chosen?

    As I mentioned above, I think that was a result of confusion caused by the template.

    The idea of dogfooding the mechanism arose out of discussions that led to the formation of the App Def WG:

    I'm curios because in previous meetings I was there (and I believe so was Joe Beda) and we had talked about the market deciding what is popular and useful to be merged into the core of Kubernetes for features. This is a different direction from those conversations and I am curious how this criteria came about.

    The original ideas for kustomize (back in 2017) were derived from several kubectl feature requests, largely to enable existing kubectl functionality to be invoked more declaratively and to fill gaps in kubectl's apply flow. Some of these were:

    We could dig into those more, but I think what is missing is the context around the bigger picture for the direction of kubectl and related tools. That's needed context for figuring out, for instance, whether command hooks fit or not.

    There was a SIG CLI discussion about this at Kubecon with about a dozen participants. I believe there are notes for people who weren't present, but I don't have a link handy. 


    4) As someone who will need to implement tools that deal with this via cli-runtime and am not just reading directories from the filesystem, where is it documented what I need to do? I imagine I am not the only one who will have this question.

    Note, these 4 items are either to clean up the KEP per the process, questions to inform the GA conversation we're having in SIG Arch, or to help me as an implementer. Please don't feel the need to defend your position in your response to any of these. My only goal is practical moving forward.

    I do have an observation as well...

    If it has not been communicated, k8s is about adding extension points rather than new features. More often than not, if there is an extension point then the direction is to use that and not to add the feature to k8s. Instead, the goal has been to add clean extension points to enable the ecosystem.

    Efforts have been underway for some time to make kubectl more extensible, more dynamic (e.g., no compiled-in API types), and more independent of the rest of kubernetes/kubernetes (e.g., separate repo, separate releases).

    This appears to go against that and I have not been able to find a documented traceable justification. If one exists I would be curios to read it. Something other than one or a small groups opinion.

    The reality is that SIG CLI is a fairly small group. Are you looking for something like a user survey?


    For example, to follow the extension points concept I might have implemented an internal events mechanism allowing an external plugin to intercept events and take action. kustomize could have been one of many plugins to implement that and this would have then enabled the ecosystem.

    Because of the way this was approached and the timing (merged over the holidays), people have had two reactions we should take note of:
    1. They are upset. One thing was prioritized over competing things
    By "prioritized over competing things", are you referring to competing priorities, competing approaches, or both? 

    FWIW, I don't agree that kustomize "competes" with other approaches to customization, and doesn't encroach into areas that have been explicitly out of scope of Kubernetes (and therefore kubectl) essentially forever. It's pretty low-level.

    I would like to use kustomize to drive fixing additional gaps in our API and its discovery data, such as consistent identification of object references.
    1. and without a nice documented reason.
    I interpret this to mean that you feel the motivation specified by the integration KEP (KEP 31) was not sufficient? 
    1. The kind of upset coupled with the next point can breed bad behavior
    2. The feeling that this was a back room deal (e.g., the KEP process was not followed through on but "friends" merged things in a holiday season) and that this is the way others should do things, too
    This second option is not one I thought of on my own. Someone said it to me and that concerns me.

    These are people who haven't been much involved in SIG CLI, I assume?

    I see SIG CLI folks responded to these points later in the thread, but I believe the KEP process was followed, though perhaps not perfectly.
     

    If you've made it this far, thanks for taking the time to read all these words. My concern here is simply process, culture, and keeping a system that can be healthy and long lived. Sometimes that means thinking things through and talking about hard uncomfortable topics.

    - Matt

    --
    You received this message because you are subscribed to the Google Groups "kubernetes-sig-cli" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-...@googlegroups.com.
    To post to this group, send email to kubernete...@googlegroups.com.

    David Emory Watson

    unread,
    Jan 7, 2019, 3:51:47 PM1/7/19
    to Tim Hockin, Matt Farina, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    I believe there is a subtext here worth pointing out. Isn't this argument really about helm (i.e declarative vs. imperative management)?

    David.

    --
    You received this message because you are subscribed to the Google Groups "kubernetes-sig-architecture" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-arch...@googlegroups.com.
    To post to this group, send email to kubernetes-si...@googlegroups.com.
    To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-architecture/CAO_RewY-RykFFuqW5j9XSinYHxsjCU0yQMzD5YuObrsVPs0rWQ%40mail.gmail.com.

    Tim Hockin

    unread,
    Jan 7, 2019, 4:03:07 PM1/7/19
    to Matt Farina, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    On Mon, Jan 7, 2019 at 12:37 PM Matt Farina <matt....@gmail.com> wrote:
    >
    > Tim, thanks for jumping in to clarify.
    >
    > If I understand volume snapshots correctly you need to install the CRD (here's the repo for that). Then you need a CSI driver (out of tree) that implements reacting to the CRD (it's a 3 part system with a VolumeSnapshotClass similar to PV/PVC/SC). The CRD and VSC provides an opt-in API and different controllers can implement the interface. All of which are out of tree from k8s/k8s.

    It's a CRD I expect to be part of every cluster. It may not be
    implemented, just as Ingress and NetworkPolicy might not be
    implemented. That doesn't make it an add-on. It is no less an add-on
    than PersistentVolumes or CSI.

    > This makes it opt-in, right? Not just being developed using CRDs even the way we want them for core things.
    >
    > I could be missing something and wrong on this. Please correct me if I am.
    >
    > This extension nature is what I was getting at. We can opt-in to the interface. Then multiple ecosystem projects can implement it. We can opt-in to those. Even if the CRDs are shipped installed by default that doesn't mean any controllers or VolumeSnapshotClass is there out of the box by what we ship.
    >
    > I'm not really all that up on storage so I could be missing some aspect.

    Your last point is what I would say is most accurate. The API is part
    of the project. Sadly, arguments about Storage and Networking are not
    likely to hold in parallel for CLIs. :)

    Matt Farina

    unread,
    Jan 7, 2019, 5:47:09 PM1/7/19
    to Tim Hockin, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    The API is part
    of the project.  Sadly, arguments about Storage and Networking are not
    likely to hold in parallel for CLIs. :)

    There is a really neat thing for CLIs. If they treat the Kubernetes objects as their shared API you can pipe one to another. The style is different, for sure. The API is there for them to work together, though.

    Of course, all of this is workflow stuff outside of Kubernetes itself. That place we all love to debate... anyone up for a vim vs vscode debate? ;)

    Matt Farina

    unread,
    Jan 7, 2019, 6:03:24 PM1/7/19
    to David Emory Watson, Tim Hockin, Phillip Wittrock, Maciej Szulik, Jingfang Liu, kubernetes-sig-architecture, kubernetes-sig-cli
    I believe there is a subtext here worth pointing out. Isn't this argument really about helm (i.e declarative vs. imperative management)?

    I'll take this one on. This isn't about Helm and I can see how some might think that.

    When we are crafting the objects outside of Kubernetes that we will pass to the API there are opportunities for many workflows. For example, I could picture visual editors (people love their UIs). We may declare to Kubernetes what we want to happen but how we come up with what to declare is a space we could use some innovation in and people already have many varying workflows.

    Consider this, everything is imperative at some point. Me pecking at keys to type or change elements of a file is imperative. When it comes to editing files outside of Kubernetes how much needs to be declarative vs imperative and at what point in the process? When we pass them to kubernetes through the API we are declaring. But, when we are on the outside of the API constructing things how much should kubernetes have an opinion on that? How much of that is the scope of the kubernetes project and how much of that is left to the ecosystem? In an ecosystem where people can have those vim vs vscode debates. I wonder what the emacs folks think of my vs.

    Personally, if there is a winner of a way to do things I would like it to be the thing that had such a fantastic experience the market chose it. Or them... because there is usually space for multiple winners unless your philosophy is winner take all.

    Just my 2 cents.

    Maciej Szulik

    unread,
    Jan 8, 2019, 10:21:33 AM1/8/19
    to Brian Grant, Matt Farina, kubernetes-sig-architecture, kubernetes-sig-cli
    On Mon, Jan 7, 2019 at 9:41 PM 'Brian Grant' via kubernetes-sig-cli <kubernete...@googlegroups.com> wrote:
    On Mon, Dec 31, 2018 at 12:23 PM Matt Farina <matt....@gmail.com> wrote:
    Hi SIG CLI folks.  I see the kustomize landed in kubectl.

    From this I have three comments/questions I was hoping you could help me with...

    Thanks for taking a look. 

    1) Can someone please update the KEP for this. In doing so can you please update the metadata to follow the proper values (details here). For example, pending is not a valid status

    Yes, that should be fixed. It was missed during review. A KEP linting tool would be useful. 

    It would also be useful to update the Implementation History section.

    2) How did the graduation criteria get decided and how were they leveraged? I looked at KEP 8 in addition to the KEP for this part. But, I have questions about these things.

    Actually, I'm not sure what "graduation" was intended to mean in this context. I think this KEP was responding to the questions in the template, such as "How will we know that this has succeeded?," rather than "graduation" in the sense of API/implementation maturity. The KEP template probably needs to be clarified. 

    SIG CLI folks: Is the expected kubectl feature lifecycle documented anywhere? Some features, such as server-side printing, the client-side apply overhaul, and diff, have started as alpha subcommands and/or alpha-prefixed flags. kustomize started as a separate command in 2017 (before plugins existed?).

    The below are the only ones we have and are following them so far either when removing functionality.
    Adding is different, like mentioned a lot of different approaches have been tried in the mean time to
    asses what's the best possible way, but we haven't picked any since each case is different,
    we usually discuss them during our bi-weekly meetings and make an agreement how to proceed.
    last December meeting.
     
    Reply all
    Reply to author
    Forward
    0 new messages