followup on declarative application management from today's SIG meeting


Brian Grant

Feb 27, 2017, 2:33:58 PM
to kubernetes-sig-apps
Starting a thread to catch post-meeting discussion.


Join this Google group (kubernetes-sig-apps) to access the meeting documents, as well as any past/future docs shared with the SIG. You can opt out of receiving email if you like. Joining will also put the weekly meeting on your calendar.

I didn't have time to show examples, but some things I mentioned:

Vertical (cpu, memory) pod autoscaling proposal: https://github.com/kubernetes/community/pull/338


Note the parameterization of resource names and storage capacity.


Note the parameterization of resource names, label values, and resource quantities.


Note the number of application parameters.

Also, compare the approaches to parameterization. The full list of available parameters, their descriptions, default values (if any), required vs. optional status, and any validation info (e.g., data type, range, or regexp) needs to be automation-friendly.
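To make that concrete, a machine-readable parameter declaration might look something like the following sketch (the field names are illustrative only, not taken from any existing tool):

```yaml
# Hypothetical automation-friendly parameter declaration.
# Field names are illustrative, not from any existing tool.
parameters:
- name: DATABASE_NAME
  description: Name of the database to create
  type: string
  required: true
  validation: "^[a-zA-Z][a-zA-Z0-9_]*$"   # regexp a tool could enforce
- name: STORAGE_CAPACITY
  description: Size of the persistent volume claim
  type: quantity
  default: 8Gi
  required: false
```

A tool could read such a declaration to generate input forms, validate values, and diff defaults across versions without having to parse the templates themselves.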

If you'd like to help fix bugs/issues with kubectl apply, see:

and reach out to SIG CLI.

Questions that came up:
  • Should forking be required? No, but it should be well supported, which at minimum means it needs to be easy to find the source repository + branch + tag + subdirectory for a chart / template / package. As I mentioned, I've seen inheritance from shared repositories (especially unversioned ones) go horribly wrong. It tends to push unnecessary complexity into shared configurations, since they are pressured to expand to support all possible use cases; it puts production deployments at risk, due to the need to fork in an emergency; and it increases the complexity of update flows, since both the configuration producer and consumer need to handle ~arbitrary configuration changes.
  • How could we better support creation of a template later? Export was mentioned, and that's one option, though it doesn't handle many cases, such as label templating. I think those cases would be better handled by applying stronger conventions up-front, encouraging a little pre-planning (e.g., forking), and providing tooling for manipulating common, stable API fields, both pre- and post-creation.
  • How could we unify some of the tools/approaches: helm, kpm, spread, Box's applier, Openshift templates, oc new-app, AppController, kubectl apply, etc.? That's one reason why I started this discussion. :-) I do think we could share more of the common components.
Thanks for listening. Hopefully it was useful. 

There have been literally dozens of configuration/deployment projects inside Google: numerous DSLs, dynamic configuration spreadsheets, configuration databases, application data push systems, overlay systems, rollout workflow orchestrators, ... I'd be happy to talk more about what we learned from them.

--Brian

Matt Farina

Feb 28, 2017, 10:26:15 AM
to Brian Grant, kubernetes-sig-apps
Brian,

Thanks for sharing and starting a follow-up.

I have a couple questions targeted at experience.

First, there is the notion of a starting point from an existing project. You noted the practice of forking, using your own, and then using that. From here you have to sync with the upstream to pull in changes to your repo as needed. And, any changes you want you can make in your own setup.

This makes sense for you given the vendor everything strategy Google has.

But, is it the kind of simple a new developer wants? Does it make things easy for the long tail of dev? Does this fit the dependency handling strategies of most companies?

I ask this because the config handling is pushing k8s itself more and more into a competitive space with PaaS systems. That means competing on the experience.

I'm going to posit that many want something simple: for example, specifying a parent (say, mysql), filling in their overrides, and then deploying that, instead of doing any forking.

How does one do that reproducibly? I think that's a how question and reproducibility doing builds with distributed assets is something folks do today.

I'd argue we need something that has that simple experience with distributed assets and you can use it in a vendoring workflow.

Also, what can't be done with Helm today? What would need to change and why? What could be done with supporting tools? For example, you brought up the forking workflow and large repo of charts. What could make "forking" that possible and syncing updates? Are there git tools for that? I'm just thinking out loud.

Thanks for your thoughts on this.

- Matt





--
Matt Farina

Go in Practice - A book of Recipes for the Go programming language.

Engineered Web - A blog on cloud computing and web technologies.

Brian Grant

Feb 28, 2017, 1:15:22 PM
to Matt Farina, kubernetes-sig-apps
On Tue, Feb 28, 2017 at 7:26 AM, Matt Farina <matt....@gmail.com> wrote:
Brian,

Thanks for sharing and starting a follow-up.

I have a couple questions targeted at experience.

First, there is the notion of a starting point from an existing project. You noted the practice of forking, using your own, and then using that. From here you have to sync with the upstream to pull in changes to your repo as needed. And, any changes you want you can make in your own setup.

This makes sense for you given the vendor everything strategy Google has.

Google's strategy is more build everything from scratch. :-)
 
But, is it the kind of simple a new developer wants?

My proposed template-modification tooling would make it unnecessary for new developers to manually write YAML resource manifests for a wide variety of common classes of applications, while helping them to learn the API and migrate to a declarative workflow, so that seems like a step forward from where we are today.
 
Does it make things easy for the long tail of dev?

Given sufficient tooling, I think so.
 
Does this fit the dependency handling strategies of most companies?

It's similar to what they'd do if they were using a configuration management system such as Chef, Puppet, Ansible, Salt, CFEngine, etc.

 

I ask this because the config handling is pushing k8s itself more and more into a competitive space with PaaS systems.

Yes and no. Kubernetes is intended to provide flexible infrastructure, not to provide a complete PaaS. Users want to run all classes of applications on the same infrastructure (one example).

That means competing on the experience.

As Brendan wrote, K8s makes it much easier to build very focused PaaSes, including DIY ones like Noel, so both general-purpose and custom tooling atop K8s (not unlike Helm) are ways to close the gap.
 

I'm going to posit that many want something simple: for example, specifying a parent (say, mysql), filling in their overrides, and then deploying that, instead of doing any forking.

The Service Catalog effort is intended to provide that. I expect there will be a Helm-backed Service Broker, as well as one for Openshift Templates.


How does one do that reproducibly? I think that's a how question and reproducibility doing builds with distributed assets is something folks do today.

I'd argue we need something that has that simple experience with distributed assets and you can use it in a vendoring workflow.

My proposed workflow was specifically targeted at "whitebox" configuration scenarios, where the user is responsible for managing the application.

The Service Catalog facilitates "blackbox" provisioning and binding, where the application management is sufficiently automated to not require user understanding and intervention, or is managed by an ops team. In the former case, this means that application lifecycle, including configuration, scaling, and updates, needs to be made worry-free, which is a high bar and non-trivial to implement. Vertical pod autoscaling, initContainers, and operators are steps in the right direction.
 
Also, what can't be done with Helm today? What would need to change and why?

I haven't had time to stay on top of recent Helm developments, but off the top of my head:
  • lack of explicit declaration of parameters
  • lack of support for a declarative workflow: helm install, upgrade, etc. are imperative
  • lack of support for bulk operations (AFAIK)
  • lack of support for a fork workflow
  • lack of support for pre-creation API-aware transformations
  • lack of examples of generic templates (e.g., a standard Ingress + Service + Deployment for a vanilla stateless app)
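To illustrate that last point, a generic template for a vanilla stateless app could be as small as the following sketch, where $(NAME), $(IMAGE), and $(PORT) stand in for whatever parameter syntax the tool defines:

```yaml
# Sketch of a generic "vanilla stateless app" template.
# $(NAME), $(IMAGE), $(PORT) are placeholder parameters.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: $(NAME)
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: $(NAME)
    spec:
      containers:
      - name: $(NAME)
        image: $(IMAGE)
        ports:
        - containerPort: $(PORT)
---
apiVersion: v1
kind: Service
metadata:
  name: $(NAME)
spec:
  selector:
    app: $(NAME)
  ports:
  - port: 80
    targetPort: $(PORT)
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: $(NAME)
spec:
  backend:
    serviceName: $(NAME)
    servicePort: 80
```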
What could be done with supporting tools? For example, you brought up the forking workflow and large repo of charts. What could make "forking" that possible and syncing updates? Are there git tools for that? I'm just thinking out loud.

Filtering a subdirectory and pushing it to a different repo is fairly tedious to do by hand, but could be automated:
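For example, git's contrib `subtree` command can split a subdirectory, along with its history, onto its own branch. The sketch below demonstrates it on a throwaway repo; the charts/mychart layout is hypothetical:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the big shared chart repo; layout is hypothetical.
git init -q upstream && cd upstream
git config user.email you@example.com
git config user.name you
mkdir -p charts/mychart
echo "name: mychart" > charts/mychart/Chart.yaml
git add -A && git commit -qm "add mychart"

# Split just charts/mychart, with its history, onto a new branch.
git subtree split --prefix=charts/mychart -b mychart-only

# The split branch has the chart's files at its root; push it to a
# fork with:  git push <fork-url> mychart-only:master
git ls-tree --name-only mychart-only
```

Re-running the split against a newer upstream and merging the result into the fork is one possible way to sync updates.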
 

Thanks for your thoughts on this.

- Matt


gus...@gmail.com

Mar 6, 2017, 6:01:38 PM
to kubernetes-sig-apps
I don't normally follow this SIG, but I felt https://github.com/anguslees/kubecfg might be interesting to others on this thread. It's an unashamed clone of borgcfg (from Google), based around jsonnet. An annotated config file for squid is at https://github.com/anguslees/kubecfg/tree/master/examples if you want to see what one looks like; unfortunately, a simple example doesn't illustrate library re-use at scale. You can talk to your nearest (ex-)Google SRE to find out all about the strengths and weaknesses of managing infrastructure through BCL, however.

The kubecfg tool in that repo is still in development, but internally at my employer we're already using a largish repo of configs and a merge robot that runs some static tests and then pipes everything through a very simple "jsonnet $file | kubectl apply --overwrite --file -" shell script. The tool is a WIP attempt at doing slightly better than that shell script: it topologically sorts resources before creating/updating, can do diffs, and has some other basic features.
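For readers who haven't seen the style, library re-use in a jsonnet config looks roughly like this (a sketch: the kube.libsonnet library and its Deployment helper are assumed for illustration, not taken from the kubecfg repo):

```jsonnet
// Hypothetical shared library; not from the kubecfg repo.
local kube = import "lib/kube.libsonnet";

// jsonnet's object-composition syntax: start from the library's
// Deployment object and override only the fields that differ.
kube.Deployment("squid") {
  spec+: { replicas: 3 },
}
```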

 - Gus