RFC: Gathering data for Kubectl direction 2019

Phillip Wittrock

Jan 17, 2019, 12:45:43 PM
to kubernetes-sig-cli, Matt Farina
Following up on yesterday's sig meeting.

The topic of how to engage the broader ecosystem and community of users has been brought up a couple times now, and it would be great to have a better radar.

Matt, I believe you have performed surveys in the past to gather this sort of information.  Does this seem like a good approach to start?  I'd be willing to create the survey in Google Surveys, SurveyMonkey, or whatever tool we have used in the past if you could help run it and make it successful.  Is there anyone else you would recommend that I include on the thread?

Matt, are there specific questions you have or are interested in?  This is probably too many questions, but I'd love the answers to all of them.  :P  In a couple of questions I listed some tools in the ecosystem (e.g. do you need kubectl to work with these); this could be an empty text field if we are concerned about implying winners through the survey.  Thoughts?

Who am I? (select all that apply)
  • Kubernetes user - DevOps
  • Kubernetes user - Dev
  • Ecosystem Dev
  • Kubernetes Contributor
For my job, Kubectl is
  • Critical
  • Necessary
  • Optional
I manually run kubectl
  • Daily
  • Weekly
  • Rarely
I use kubectl for (select all that apply)
  • Development
  • Routine production operations
  • Debugging production events
I want kubectl to focus on.  Score the following in importance (0-3):
  • Stability + Version Skew Support
  • Better Declarative App Management (e.g. apply, kustomize)
  • Better integration with CICD and git ops workflows
  • Better extension support
I use the following commands. Score the following in importance (0-3):
  • get
  • apply
  • describe
  • edit
  • run / expose / create <type> / set <field> / scale / autoscale
  • attach / exec / logs / top / port-forward / proxy / cp
  • cordon / uncordon / drain / taint
  • convert / replace
For stability I want kubectl to focus on. Score the following in importance (0-3):
  • Bug fixes
  • Consistency across commands
  • Better error messaging
  • Greater version skew support (client-server)
  • More frequent releases
  • More / better documentation
  • More opinionated documentation (e.g. CD workflows)
For declarative app management I want kubectl to focus on.  Score the following in importance (0-3):
  • Managing Resource Config for multiple environments
  • Managing Resource Config for multiple clusters
  • Managing Resource Config across multiple teams
  • Publishing Resource Config for others to consume (e.g. on GitHub)
  • Consuming Resource Config published (e.g. on GitHub)
  • Authoring Resource Config files
  • Customizing Resource Config files
For CICD and git ops I want kubectl to focus on.  Score the following in importance (0-3):
  • Works well with manual pushes
  • Works well with DIY CICD
  • Works well with Git ops
  • Works well with Jenkins
  • Works well with Spinnaker
  • Works well with Helm
  • Works well with Ksonnet
  • Works well with Skaffold
  • Other
For extension support I want kubectl to focus on.  Score the following in importance (0-3):
  • API agnostic command support for extensions APIs (e.g. apply, diff, rollout status, edit)
  • API specific command support for extension APIs (e.g. get, describe, create <your-type>, set <your-field>)
  • Better integration with tools developed in the ecosystem
  • Plugin support for arbitrary new commands
  • Plugin support for extending internal kubectl behavior (e.g. change what the -f flag does)
I want plugins to provide.  Score the following in importance (0-3):
  • Discoverability
  • Distribution
  • Consistency of command structure look-and-feel
  • Consistency of flag look-and-feel
  • Other <specify>
Which command should kubectl focus on making better:
  • <list of commands>

To help the kubectl maintainers develop the product, score your support of the following (0-3):
  • Add kubectl sub-commands and whitelisted flags (e.g. -o yaml) as request headers to requests sent to your apiserver (opt-out)
  • Send kubectl sub-commands and whitelisted flags to another service owned by the CNCF (opt-in)
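
For concreteness, here is a purely hypothetical sketch of what the opt-out variant might look like on the wire. kubectl does not send these headers today, and the header names below are invented for illustration:

```
# Hypothetical illustration only: these headers are not an existing feature.
# A "kubectl get pods -o yaml" could annotate its own request roughly like:
curl --header "X-Kubectl-Command: get" \
     --header "X-Kubectl-Flags: -o=yaml" \
     https://my-apiserver.example.com/api/v1/namespaces/default/pods
```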

Ahmet Alp Balkan

Jan 17, 2019, 1:41:37 PM
to Phillip Wittrock, kubernetes-sig-cli, Matt Farina
Would this survey be a good avenue to collect data on what other tools people use kubectl with? Tools like wrk/stern, kubectx, kube-ps1... come up in other surveys (like SIG Apps etc) frequently.

It might be a good indicator of what functionality people are looking to augment/add in kubectl.

Also, I recall there was some talk about kubectl analytics. Would this also be a good place to ask whether people are OK with it?

Phillip Wittrock

Jan 17, 2019, 7:46:06 PM
to Ahmet Alp Balkan, kubernetes-sig-cli, Matt Farina

Would this survey be a good avenue to collect data on what other tools people use kubectl with? Tools like wrk/stern, kubectx, kube-ps1... come up in other surveys (like SIG Apps etc) frequently.
 
It might be a good indicator of what functionality people are looking to augment/add in kubectl.

Yeah, or at least where the gaps are.  Doesn't necessarily mean it should be added to kubectl, but we should make sure kubectl works well with them at least.
 
Also, I recall there was some talk about kubectl analytics. Would this also be a good place to ask whether people are OK with it?

I added a question about this.  We need to improve the wording though.

Matt Farina

Jan 22, 2019, 11:03:02 AM
to Phillip Wittrock, kubernetes-sig-cli
Phil,

I'm happy to share since I've worked on a few surveys for Kubernetes.

One thing that is really important, and that we sometimes struggle with in surveys, is getting the survey out to our target audience. Quite often the people answering are the people who work on Kubernetes and are part of the crowd that shows up to meetings and to contribute. This crowd is a fraction of a percent of users.

To combat that I would partner with some organizations, like the CNCF end-user community, to get good answers from a larger group. I would reach out to Cheryl Hung at the CNCF on this.

Who am I? (select all that apply)
  • Kubernetes user - DevOps
  • Kubernetes user - Dev
  • Ecosystem Dev
  • Kubernetes Contributor
Might I suggest a list similar to:
  • Cluster operator - I make sure a cluster is up and running
  • Application operator - I operate applications in Kubernetes
  • Application developer - I develop applications that need to be operated in platforms like Kubernetes
  • Kubernetes ecosystem project developer - I develop tools to supplement Kubernetes in my use
  • Kubernetes contributor - I contribute to the Kubernetes project
  • CNCF contributor - I contribute to CNCF projects other than Kubernetes
I like the idea of having a select-all-that-apply question.
 
In addition to these it might be good to get demographic information about company size and other characteristics. In the last survey we did for SIG Apps we did not collect this, and it was asked about right away.

For my job, Kubectl is
  • Critical
  • Necessary
  • Optional
What is it you want to get out of this question?

I ask because there are many things people are required to do where kubectl is the only CLI tool. So, if someone uses the CLI they use kubectl. If they use a UI instead they might use something different. Is there a deeper question you're curious about?

It's also useful to realize that creating a competitor to something that is part of the Kubernetes project is often highly discouraged. In some past cases projects have been asked to stop work and point to the official thing instead of competing. I highlight this to note that this is an environment with no real CLI competition. For many, kubectl is critical because it's the only option. There are non-technical factors at play here.
 
I manually run kubectl
  • Daily
  • Weekly
  • Rarely
I use kubectl for (select all that apply)
  • Development
  • Routine production operations
  • Debugging production events
The production of cluster operations, or the production of application operations, or both?

 
I want kubectl to focus on.  Score the following in importance (0-3):
  • Stability + Version Skew Support
  • Better Declarative App Management (e.g. apply, kustomize)
  • Better integration with CICD and git ops workflows
  • Better extension support
Two things here to think about.

First, what is an extension? Phil brought up the idea recently of the unix philosophy and piping things. Funny timing, Jess Frazelle just blogged about that. For Kubernetes things there are two types of things that can be passed around as I/O. One is Kubernetes objects in YAML form. A second is the output of things like kubectl style lists of objects (generated from printers). These are used today. Another form is plugins that alter how applications work. For example, VIM extensions.

When "better extension support" is asked for, different people have different things in mind. What does this mean? Does this mean better for piping or better for extending functionality? That's not clear from the questions.

Second, I'm not sure what "Better Declarative App Management" means. Does this mean app management as the apps are inside of Kubernetes (e.g., their declared location is inside the API/in etcd) or better app management for working with objects outside of Kubernetes to pass to the API? or both? How many people will know what this means or the intent behind it?

When I think of App management in AWS, for example, I think of going to the AWS CLI or console and performing commands against the API. That's not exactly what this means.

The further out you go from the core contributors, the fewer people will know what this means. If everyone gets the current language it's because they are insiders. If they aren't insiders and they pick it, their meaning may be quite different from yours, producing problematic data.

Note, I'm commenting on the language and who it's communicated to rather than the topic.
 
I use the following commands. Score the following in importance (0-3):
  • get
  • apply
  • describe
  • edit
  • run / expose / create <type> / set <field> / scale / autoscale
  • attach / exec / logs / top / port-forward / proxy / cp
  • cordon / uncordon / drain / taint
  • convert / replace
For stability I want kubectl to focus on. Score the following in importance (0-3):
  • Bug fixes
  • Consistency across commands
  • Better error messaging
  • Greater version skew support (client-server)
  • More frequent releases
  • More / better documentation
  • More opinionated documentation (e.g. CD workflows)
What is "opinionated documentation" and who is it from?

The example there is CD but is that continuous delivery or continuous deployment? In that space there are many workflows. For example, if you are talking about deployment do we have canary, blue/green, or one of the many other deployment processes? Or, all of them? What about the cases where it's not continuous? For example, Netflix famously does not update many things during peak hours.

Then there are outside tools. Much of this is going to involve talking about outside tools; if you talk CD, how do you not talk about tools? If Jenkins is listed then the other projects are going to want to be listed, too. How does that get navigated?

All of this leads to opinions. Whose opinions get into the opinionated docs and how are those decided?

In SIG Apps we have had many demos on CI/CD. It's amazing how many different ways people do things. Sometimes it's startup experimentation. Sometimes it's people solving niche problems. Sometimes it's common workflows (and there are several on diverging paths). We have not documented how to do these things because there is no one right way.

So, who gets to decide whose opinion wins by going into the docs?
 
For declarative app management I want kubectl to focus on.  Score the following in importance (0-3):
  • Managing Resource Config for multiple environments
  • Managing Resource Config for multiple clusters
  • Managing Resource Config across multiple teams
  • Publishing Resource Config for others to consume (e.g. on GitHub)
  • Consuming Resource Config published (e.g. on GitHub)
  • Authoring Resource Config files
  • Customizing Resource Config files
Two things to consider...

Just so we're on the same page, this question is a topic currently out of SIG CLI scope. In the SIG CLI charter scope it says, "This group focuses on general purpose command line tools and libraries to interface with Kubernetes API's." This question is about moving beyond that.

This used to be an area of focus for SIG Apps but a few things happened...
  1. For each way SIG Apps approached things, there were multiple other ways people attacked the same problem. We wanted to let the ecosystem choose winners rather than SIG Apps. Just like vim and emacs, there can be multiple winners. There is no need for there to be only one way.
  2. The steering committee decided to take these opinions outside of the K8s API and treat them as ecosystem that is grandfathered in. That was to avoid anointing a winning process or tool set.
The questions here suggest taking back scope into the Kubernetes project. From a political standpoint (let's be honest that these happen) it will be framed as some projects were pushed to the ecosystem from one SIG so that another SIG/group could take over the scope and do it a different way. Whether this is true or not it will be discussed.

It would also undo the point from #1 about the different ways to do things and enabling all of them.

I have already talked about other ecosystem projects being asked to stop work after minikube came into k8s, but there are other examples. When Helm became the popular package manager for Kubernetes there were still others who wanted to innovate and try in that space. But, because Helm was part of k8s, some people said not to try. Not to compete against k8s. This proved to be a barrier and was a justification for moving Helm to be treated as ecosystem.

I could come up with other examples. My point is that folding these features into kubectl would be a scope addition that alters the direction ecosystem projects feel or are even told to go in. Something the K8s project has said it didn't want to do outside of the API. Changes to that should be discussed elsewhere.

Second, highlighting GitHub brings up a great point on innovation that is still happening here. Just last week there was an OCI meeting where CNAB, Docker apps, and Helm came together to talk about putting other objects, the ones that define apps (like Helm charts or CNAB bundles), in container registries. This is moving forward. Thinking in terms of VCS for storage and sharing of objects may be changing, and it's not going to happen all at once. Innovation is happening in the ecosystem and it needs to be encouraged, not baked into K8s where things go to become de facto processes.

How would this innovation fit into kubectl tools and communication?
 
I ran out of time to review the rest of the questions. I'll try to circle back later.

Cheers,
Matt

Phillip Wittrock

Jan 22, 2019, 12:03:36 PM
to Matt Farina, kubernetes-sig-cli
One thing that is really important, and that we sometimes struggle with in surveys, is getting the survey out to our target audience. Quite often the people answering are the people who work on Kubernetes and are part of the crowd that shows up to meetings and to contribute. This crowd is a fraction of a percent of users.

+1.  This + logistics were the biggest questions I had.
 
To combat that I would partner with some organizations, like the CNCF end-user community, to get good answers from a larger group. I would reach out to Cheryl Hung at the CNCF on this.

Thanks.  This is the sort of thing I was looking for.
 
Who am I? (select all that apply)
  • Kubernetes user - DevOps
  • Kubernetes user - Dev
  • Ecosystem Dev
  • Kubernetes Contributor
Might I suggest a list similar to:
  • Cluster operator - I make sure a cluster is up and running
  • Application operator - I operate applications in Kubernetes
  • Application developer - I develop applications that need to be operated in platforms like Kubernetes
  • Kubernetes ecosystem project developer - I develop tools to supplement Kubernetes in my use
  • Kubernetes contributor - I contribute to the Kubernetes project
  • CNCF contributor - I contribute to CNCF projects other than Kubernetes
I like the idea of having a select-all-that-apply question.

+1
 
 
In addition to these it might be good to get demographic information about company size and other characteristics. In the last survey we did for SIG Apps we did not collect this, and it was asked about right away.

For my job, Kubectl is
  • Critical
  • Necessary
  • Optional
What is it you want to get out of this question?

I ask because there are many things people are required to do where kubectl is the only CLI tool. So, if someone uses the CLI they use kubectl. If they use a UI instead they might use something different. Is there a deeper question you're curious about?

I was hoping to provide context for the other answers given.  E.g. if the user has an automated gitops workflow (e.g. kube applier), they may not need to use kubectl directly.  Or they primarily interact with tools that wrap kubectl.  Or maybe they just use it for break-glass scenarios, etc.
 
It's also useful to realize that creating a competitor to something that is part of the Kubernetes project is often highly discouraged. In some past cases projects have been asked to stop work and point to the official thing instead of competing.

I wasn't aware of this, and am not sure I fully grasp your meaning.  Is this documented as an official stance of the project somewhere?  E.g. questions I would have are: does this only apply to competing subprojects within Kubernetes itself, within the CNCF, or anywhere?  Since the word competition could mean a lot of different things to different folks, we should come up with a more concrete definition if we are going to use it for decision making or guidance.  If you think this is impacting many decisions in the project, let's move it to its own discussion where more folks can participate - e.g. a document in GitHub or something.

I use kubectl for (select all that apply)
  • Development
  • Routine production operations
  • Debugging production events
The production of cluster operations, or the production of application operations, or both?

Both / Either.
 
I want kubectl to focus on.  Score the following in importance (0-3):
  • Stability + Version Skew Support
  • Better Declarative App Management (e.g. apply, kustomize)
  • Better integration with CICD and git ops workflows
  • Better extension support
Two things here to think about.

First, what is an extension? Phil brought up the idea recently of the unix philosophy and piping things. Funny timing, Jess Frazelle just blogged about that. For Kubernetes things there are two types of things that can be passed around as I/O. One is Kubernetes objects in YAML form. A second is the output of things like kubectl style lists of objects (generated from printers). These are used today. Another form is plugins that alter how applications work. For example, VIM extensions.

When "better extension support" is asked for, different people have different things in mind. What does this mean? Does this mean better for piping or better for extending functionality? That's not clear from the questions.
 
Second, I'm not sure what "Better Declarative App Management" means. Does this mean app management as the apps are inside of Kubernetes (e.g., their declared location is inside the API/in etcd) or better app management for working with objects outside of Kubernetes to pass to the API? or both? How many people will know what this means or the intent behind it?

When I think of App management in AWS, for example, I think of going to the AWS CLI or console and performing commands against the API. That's not exactly what this means.

The further out you go from the core contributors, the fewer people will know what this means. If everyone gets the current language it's because they are insiders. If they aren't insiders and they pick it, their meaning may be quite different from yours, producing problematic data.

Note, I'm commenting on the language and who it's communicated to rather than the topic.

I'll work on the wording.  I think these are the questions I find most interesting, and I don't want to skew the data (e.g. results are impacted because folks know what some answers mean but not others).
 
I use the following commands. Score the following in importance (0-3):
  • get
  • apply
  • describe
  • edit
  • run / expose / create <type> / set <field> / scale / autoscale
  • attach / exec / logs / top / port-forward / proxy / cp
  • cordon / uncordon / drain / taint
  • convert / replace
For stability I want kubectl to focus on. Score the following in importance (0-3):
  • Bug fixes
  • Consistency across commands
  • Better error messaging
  • Greater version skew support (client-server)
  • More frequent releases
  • More / better documentation
  • More opinionated documentation (e.g. CD workflows)
What is "opinionated documentation" and who is it from?

The example there is CD but is that continuous delivery or continuous deployment? In that space there are many workflows. For example, if you are talking about deployment do we have canary, blue/green, or one of the many other deployment processes? Or, all of them? What about the cases where it's not continuous? For example, Netflix famously does not update many things during peak hours.

Then there are outside tools. Much of this is going to involve talking about outside tools; if you talk CD, how do you not talk about tools? If Jenkins is listed then the other projects are going to want to be listed, too. How does that get navigated?

All of this leads to opinions. Whose opinions get into the opinionated docs and how are those decided?

In SIG Apps we have had many demos on CI/CD. It's amazing how many different ways people do things. Sometimes it's startup experimentation. Sometimes it's people solving niche problems. Sometimes it's common workflows (and there are several on diverging paths). We have not documented how to do these things because there is no one right way.

So, who gets to decide whose opinion wins by going into the docs?

Definitely things we would need to discuss if we consider doing something in this area.  I am curious whether this is something folks even want (I've heard that it is, but only anecdotally).
 
 For declarative app management I want kubectl to focus on.  Score the following in importance (0-3):
  • Managing Resource Config for multiple environments
  • Managing Resource Config for multiple clusters
  • Managing Resource Config across multiple teams
  • Publishing Resource Config for others to consume (e.g. on GitHub)
  • Consuming Resource Config published (e.g. on GitHub)
  • Authoring Resource Config files
  • Customizing Resource Config files
Two things to consider...

Just so we're on the same page, this question is a topic currently out of SIG CLI scope. In the SIG CLI charter scope it says, "This group focuses on general purpose command line tools and libraries to interface with Kubernetes API's." This question is about moving beyond that.

This used to be an area of focus for SIG Apps but a few things happened...
  1. For each way SIG Apps approached things, there were multiple other ways people attacked the same problem. We wanted to let the ecosystem choose winners rather than SIG Apps. Just like vim and emacs, there can be multiple winners. There is no need for there to be only one way.
  2. The steering committee decided to take these opinions outside of the K8s API and treat them as ecosystem that is grandfathered in. That was to avoid anointing a winning process or tool set.
The questions here suggest taking back scope into the Kubernetes project. From a political standpoint (let's be honest that these happen) it will be framed as some projects were pushed to the ecosystem from one SIG so that another SIG/group could take over the scope and do it a different way. Whether this is true or not it will be discussed.

It would also undo the point from #1 about the different ways to do things and enabling all of them.

I have already talked about other ecosystem projects being asked to stop work after minikube came into k8s, but there are other examples. When Helm became the popular package manager for Kubernetes there were still others who wanted to innovate and try in that space. But, because Helm was part of k8s, some people said not to try. Not to compete against k8s. This proved to be a barrier and was a justification for moving Helm to be treated as ecosystem.

I could come up with other examples. My point is that folding these features into kubectl would be a scope addition that alters the direction ecosystem projects feel or are even told to go in. Something the K8s project has said it didn't want to do outside of the API. Changes to that should be discussed elsewhere.

Second, highlighting GitHub brings up a great point on innovation that is still happening here. Just last week there was an OCI meeting where CNAB, Docker apps, and Helm came together to talk about putting other objects, the ones that define apps (like Helm charts or CNAB bundles), in container registries. This is moving forward. Thinking in terms of VCS for storage and sharing of objects may be changing, and it's not going to happen all at once. Innovation is happening in the ecosystem and it needs to be encouraged, not baked into K8s where things go to become de facto processes.

How would this innovation fit into kubectl tools and communication?
 
I ran out of time to review the rest of the questions. I'll try to circle back later.

Perhaps the wording on these should be more targeted.  However, I am curious about how users would respond, even if we don't plan on doing anything different in these areas.  Kubectl already provides low-level tooling that can be used in these contexts, so a takeaway could be to better document how to use what is already there.  If no one is using it, that would be good to know.  If folks wish it were better, that would also be good to know.  For context: kubectl does provide the `--context` flag to allow users to switch back and forth between kubeconfig contexts (i.e. clusters).  It also has the `-n` flag for namespaces (i.e. environments).  `kubectl apply -f` works against URLs.  `kubectl create secret` authors Resource Config.  `kubectl scale` customizes Resource Config.
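
To make that concrete, here is a minimal sketch using only existing kubectl commands (the context, namespace, resource, and URL names are made up):

```
# Switch between clusters via kubeconfig contexts
kubectl --context=staging get pods
kubectl --context=prod get pods

# Target an environment via namespace
kubectl -n team-a get deployments

# Apply Resource Config published at a URL
kubectl apply -f https://example.com/manifests/app.yaml

# Author Resource Config imperatively
kubectl create secret generic db-creds --from-literal=password=s3cr3t

# Customize live Resource Config
kubectl scale deployment my-app --replicas=3
```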

- Phil

Matt Farina

Jan 23, 2019, 11:08:26 AM
to Phillip Wittrock, kubernetes-sig-cli

For my job, Kubectl is
  • Critical
  • Necessary
  • Optional
What is it you want to get out of this question?

I ask because there are many things people are required to do where kubectl is the only CLI tool. So, if someone uses the CLI they use kubectl. If they use a UI instead they might use something different. Is there a deeper question you're curious about?

I was hoping to provide context for the other answers given.  E.g. if the user has an automated gitops workflow (e.g. kube applier), they may not need to use kubectl directly.  Or they primarily interact with tools that wrap kubectl.  Or maybe they just use it for break-glass scenarios, etc.

This is good context to know. What I found useful was leveraging a description box on the question to provide some context to the answers.
 
 
It's also useful to realize that creating a competitor to something that is part of the Kubernetes project is often highly discouraged. In some past cases projects have been asked to stop work and point to the official thing instead of competing.

I wasn't aware of this, and am not sure I fully grasp your meaning.  Is this documented as an official stance of the project somewhere?  E.g. questions I would have are: does this only apply to competing subprojects within Kubernetes itself, within the CNCF, or anywhere?  Since the word competition could mean a lot of different things to different folks, we should come up with a more concrete definition if we are going to use it for decision making or guidance.  If you think this is impacting many decisions in the project, let's move it to its own discussion where more folks can participate - e.g. a document in GitHub or something.

I doubt it's documented but it is a practice (a tribal practice?). I'll provide you with two examples I'm personally aware of.

First, when minikube was brought into Kubernetes, other projects in this space were contacted and asked to stop work and point to minikube as the official Kubernetes project for this. The tool I used had a Mac-native UI (similar to what I have with Docker for Mac today) and allowed me to locally have either a single instance or a cluster. It was discontinued due to this direction. This is a practical example of this happening.

Second, I have been approached and asked about getting a project into kubernetes-sigs (a more recent example) so that their project could become the official way and to use it to win the market battle over competing projects in the ecosystem. They did express this as an explicit intent. I was rather surprised at the honesty but appreciated it.

Sometimes there is a perception that Kubernetes projects are THE way. Sometimes people explicitly state it. This is why I care about the way we interface with the ecosystem.
 
How would this innovation fit into kubectl tools and communication?
 
I ran out of time to review the rest of the questions. I'll try to circle back later.

Perhaps the wording on these should be more targeted.  However, I am curious about how users would respond, even if we don't plan on doing anything different in these areas.  Kubectl already provides low-level tooling that can be used in these contexts, so a takeaway could be to better document how to use what is already there.  If no one is using it, that would be good to know.  If folks wish it were better, that would also be good to know.  For context: kubectl does provide the `--context` flag to allow users to switch back and forth between kubeconfig contexts (i.e. clusters).  It also has the `-n` flag for namespaces (i.e. environments).  `kubectl apply -f` works against URLs.  `kubectl create secret` authors Resource Config.  `kubectl scale` customizes Resource Config.

Which folks would wish what were better? People doing DevOps for their apps? App devs writing Kubernetes-native apps (that leverage parts of k8s as part of their structure)? App operators in organizations that already have their own tools and need to integrate k8s along with them? Many people don't want to change their whole tool chain and process. Many people aren't in a position to do that, either. What about people going GitOps (which is different from CI-driven ops)?

There are several roles here and even variation in styles among people performing those roles. And, like vim vs emacs, they are happy to debate each other.

When we do the SIG Apps surveys our goal is to capture the need and what's going on in the space so that many people can apply the knowledge to their tools.

This survey is "leading the witness" along with being targeted at one tool. Should it be opened up to explore more information that could inform the path forward on things?

- Matt

Brian Grant

Jan 23, 2019, 11:13:06 PM
to Matt Farina, Phillip Wittrock, kubernetes-sig-cli
Focusing on just kubectl is reasonable for a SIG CLI survey. It's a positive development for SIGs to want to gather more information about how what they are building is used.

The high-level areas we need to explore are pretty basic:
  • What do users use kubectl for, and how?
  • Which areas do users prefer other tools instead?
  • Does kubectl work sufficiently well for those uses and with those other tools?
  • What would users like it to do better?
So, not overlapping with SIG Apps, but more about context regarding how kubectl fits with the bigger picture for users, including tools and scenarios discussed in SIG Apps.

The feedback from the SIG Apps survey results was somewhat useful (e.g. confirmation that several people like kubectx and kubens), but the open-ended nature of the questions made it fairly difficult to extract coherent signals.

Thanks for the suggestion that we reach out for help in distributing the survey.

Maciej Szulik

Jan 25, 2019, 8:41:17 AM
to Brian Grant, Matt Farina, Phillip Wittrock, kubernetes-sig-cli
I don't think this was mentioned before, but it would also be worth gathering information
about the version skew people are running (I mean kubectl vs apiserver): are they keeping
within the supported +/-1 window, or are they beyond it?

I realized it's something worth exploring while we're talking about version skew tests in another thread.

Maciej

Maciej Szulik

Jan 25, 2019, 11:59:44 AM
to Matt Farina, kubernetes-sig-cli
On Fri, Jan 25, 2019 at 5:50 PM Matt Farina <matt....@gmail.com> wrote:
I don't think this was mentioned before, but it would also be worth gathering information
about the version skew people are running (I mean kubectl vs apiserver): are they keeping
within the supported +/-1 window, or are they beyond it?

I realized it's something worth exploring while we're talking about version skew tests in another thread.

Great point. I have seen version skews (minor level) of > 3 in the wild, in both directions: kubectl older than the API server in CI that was just not updated, and a newer kubectl (latest release) against older clusters/API servers.

I wonder how common this is or if people even think about it much.

Yeah, I'm very curious about that one too, even though we strictly specify +/- 1 in our docs. If we could encourage
them to explain why they're doing so, that would be even better.

Maciej


Brian Grant

Jan 25, 2019, 12:40:45 PM
to Maciej Szulik, Matt Farina, kubernetes-sig-cli
kubectl is often distributed independently of the server binaries. For instance, if you're using a managed Kubernetes service, you still have to get kubectl from somewhere. Keeping it up to date in all the places you use it (e.g., laptop, desktop, cloud shell, cloud VM) is challenging. 

Also, almost everyone has multiple clusters, even if just a local one (e.g., minikube) and a "real" one. If some of those clusters are managed by others (e.g., production operations team) and/or run across multiple managed environments (e.g., multi-cloud), then the release versions probably aren't going to match. So you need a single release version of kubectl to work against all of them.

We really need to make version skew a non-issue in kubectl. Any client should continue to work against newer Kubernetes releases within the bounds of the deprecation policy for the APIs it exercises -- so at least a year for any GA, non-deprecated APIs. And we should make a newer release of kubectl work against all supported older releases -- so currently at least N-2. Given that it also needs to work with arbitrary resource types (CRDs, aggregated types), all general-purpose functionality (get, describe, apply, create, etc.) needs to be purely dynamic, with no compiled-in Go resource types. Resource-specific functionality (e.g., create secret, create configmap, drain) should be designed to be tolerant of new API fields.
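
As a rough illustration of what users deal with today (the version numbers and CRD name below are made up), checking the skew and exercising the dynamic path look like this:

```
# Compare client and server versions to see the skew
$ kubectl version --short
Client Version: v1.13.2
Server Version: v1.11.6

# General-purpose commands already handle resource types the binary has
# no compiled-in knowledge of, e.g. a CRD:
$ kubectl get widgets.example.com -o yaml
```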

Given that it's a client tool rather than a highly available service, there's no good reason why we shouldn't be able to have an independent release of kubectl, potentially much more frequently than the main Kubernetes releases. But it would need to be less coupled to the main codebase, less coupled to the Go resource types in particular, and to have better test coverage.


Matt Farina

Jan 25, 2019, 1:30:26 PM
to Brian Grant, Phillip Wittrock, kubernetes-sig-cli
Focusing on just kubectl is reasonable for a SIG CLI survey. It's a positive development for SIGs to want to gather more information about how what they are building is used.

Completely agree.

I think this survey could even test working with the end-user community through the newish CNCF person helping connect projects with them.
 

The high-level areas we need to explore are pretty basic:
  • What do users use kubectl for, and how?
  • Which areas do users prefer other tools instead?
  • Does kubectl work sufficiently well for those uses and with those other tools?
  • What would users like it to do better?
I like these high-level areas because they collect information, some of which we may not realize we need now.
 
So, not overlapping with SIG Apps, but more about context regarding how kubectl fits with the bigger picture for users, including tools and scenarios discussed in SIG Apps.

It may be useful to talk strategy in this space in SIG Apps. We could schedule time in the Feb 4th meeting. Thoughts?
 

The feedback from the SIG Apps survey results was somewhat useful (e.g. confirmation that several people like kubectx and kubens), but the open-ended nature of the questions made it fairly difficult to extract coherent signals.

Creating survey questions is hard. Especially when you want to collect data without leading the witness.

Do we have access to people with more and better experience doing this?

Looking at the Stack Overflow survey that is out right now does provide some insight into how to have a handful of open questions while asking many that have specific choices.
--
Matt Farina

Go in Practice - A book of Recipes for the Go programming language.

Code Engineered - A blog on cloud, web, and software development.

Phillip Wittrock

Jan 25, 2019, 2:04:53 PM
to Matt Farina, Brian Grant, kubernetes-sig-cli
Maciej - +1 to version skew semantics.  This is probably the biggest thing on my mind right now.  We really want everyone to be able to run the latest version of kubectl against whatever cluster they may be using.

Feb 4th works for me.  What sort of preparation do you think would make the discussion most effective?

