Simplify k8s application configuration/management


Antoine Legrand

Apr 4, 2017, 10:27:11 AM
to kubernete...@googlegroups.com
Hi everyone,


At KubeCon I talked to many users, and I heard several stories about engineers who are trying to get internal adoption but find the overall complexity too high to convince their team/company.

SIG Cluster Lifecycle initiated kubeadm to dramatically simplify installation and cluster management. Discussed, approved, and developed with the SIG members, kubeadm is a success and is reaching its goals.


I would like to see a similar project on the App side.


Writing application configuration isn't easy: over the years several new resources have appeared, and more will keep appearing (for good reason).
To get something as simple as a web app, users have to write many long YAML files: ‘deployment.yaml’ / ‘svc.yaml’ / ‘ingress.yaml’ / ‘network-policy.yaml’ / ‘configmap.yaml’ / ‘secret.yaml’, ‘db-deployment.yaml’, ‘db-pvc.yaml’, ‘db-secret.yaml’…
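To make the boilerplate concrete, here is a minimal sketch of just the first two of those files (purely illustrative names and versions, not taken from any real app; the API group shown was current around Kubernetes 1.6):

```yaml
# deployment.yaml -- illustrative minimal web-app Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.0   # hypothetical image
        ports:
        - containerPort: 80
---
# svc.yaml -- matching Service
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
```

And that is before the ingress, network policy, config, secrets, and the database's own set of files.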


On top of that, there is NO tooling (auto-completion, static analysis...) to help users.

Template-only solutions fail to solve this issue; in fact, they add more complexity to gain re-usability.


Brian initiated a discussion about Declarative application configuration on sig-apps (is there already a prototype?) and several projects are in development:

They are all trying to do the same thing with different (or not so different) approaches, and instead of continuing separately I would like to see a converged effort, worked on and designed as a group.


How can we progress on this topic?



Antoine



Pradeepto Bhattacharya

Apr 4, 2017, 10:35:49 AM
to Antoine Legrand, kubernete...@googlegroups.com
Hello everyone

Thank you, Antoine, for initiating this discussion and for this detailed report. We had very fruitful discussions at KubeCon and the SIG Apps meeting.

I can tell you that we (I speak for my group at Red Hat) are definitely interested in this project and the problem space. We have had related discussions about this problem in the past with Brian, Craig and others. We also presented the idea at the Kubernetes Dev Sprint. OpenCompose as it stands now is based on those discussions.

We honestly don't know if that is the correct solution. We are working hard on it, and we plan to do regular demos at the Kubernetes community meeting and the SIG Apps meeting. I have requested a demo slot in the agenda document for a future meeting. I will quote Craig here: "let's kick the ball and see where it goes".

Having said that, I really like what Heptio has done with kube.libsonnet. But I am worried about jsonnet. I understand its power, but it will also almost force people to learn another language. I really wish we could find a middle ground between YAML and jsonnet.

Also, I would be a bit worried about discussing this for too long. It seems like a problem that many have been thinking about and discussing, and many have tried to solve in their own ways. My second wish would be to get this done. I don't think there will be any solution that works 100% for all use cases, and we should always keep that in mind.

What would be the right forum for this discussion?

Let's kick the ball and see where it goes.

Regards,

Pradeepto

--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-apps" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-apps+unsub...@googlegroups.com.
To post to this group, send email to kubernetes-sig-apps@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-apps/CADkr4xk4%2Bxgx%3DHBoLLeDfYT2Cii8YMPm5XxMN7Dr9wsgKhtcew%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



--
Pradeepto Bhattacharya

Michail Kargakis

Apr 4, 2017, 10:51:43 AM
to Antoine Legrand, kubernete...@googlegroups.com
Worth noting that kubectl generators are trying to achieve the same goals, i.e., make it easy for developers to get started without having to hand-craft YAML files.

For example:

kubectl run nginx --image=nginx --port=80 --expose

will create a Deployment and a Service.
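For illustration, the objects that command generates look roughly like this (abridged sketch, not exact kubectl output; field details depend on the kubectl version):

```yaml
# Abridged sketch of what `kubectl run nginx --image=nginx --port=80 --expose`
# produces: a Deployment and a Service linked by the `run: nginx` label.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 80
```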

We also have a bunch of kubectl create subcommands:

kubectl create deployment
kubectl create configmap
kubectl create secret

It would be great if the community could come up with a solution that leverages the existing tools built into the core. The docs need more love, and proposals to make generators better are more than welcome!

Brian Grant

Apr 4, 2017, 5:26:19 PM
to Michail Kargakis, Antoine Legrand, kubernetes-sig-apps, kubernete...@googlegroups.com
+SIG CLI

More comments later, but I wanted to loop in SIG CLI.

Note that there have been a number of requests for declarative interfaces to the generators, especially run, create secret, and create configmap.






Alex Clemmer

Apr 4, 2017, 8:30:07 PM
to Brian Grant, Michail Kargakis, Antoine Legrand, kubernetes-sig-apps, kubernete...@googlegroups.com
> They are all trying to do the same with different (or not so different)
> approaches and instead of continuing separately I would like to see a
> converged effort, to work and design it as a group.
>
>
> How can we progress on this topic ?

Our (i.e., Heptio's) approach has been to:
  1. pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written, and
  2. reach out to stakeholders of related projects to build assent that the goals Antoine mentions are worth pursuing, and to see where collaboration makes sense.
I think it is not controversial to say that none of the solutions Antoine mentions is going to be a silver bullet for all use cases (certainly this is true of `kube.libsonnet`), which is why (2) is an important goal for the group.

To this end, we met with Rick Spencer (who I am CC'ing here) and his team at Bitnami last week to begin the formation of a sort of "coalition of the willing" to address these pain points. We've also talked to Pradeepto Bhattacharya (who works on OpenCompose), Fedor Korotkov (who has a Kotlin DSL[1] for the Kubernetes API), and we've reached out to William Butcher (who works on Helm).

Speaking only for myself, it seems to me that this is already fertile ground for collaboration. To me, people seem more interested in solving the problem than in promoting themselves, so I am optimistic that we can find a way to direct these efforts productively.

In my experience, these conversations do seem to go better when there is a specific technical artifact to discuss, and so my proposal for how to move forward is for individual teams to continue the ongoing consolidation effort, and then regroup with sig-apps when more concrete progress has been made. In particular, I think it's worth blocking time off to talk specifically about the consolidated effort to build a solution that addresses these problems in the sig-apps meeting -- I'm happy to do this myself, or to have anyone in the "coalition" do it instead.

Let me know what you all think -- I'm happy to talk more if people have input.




--

__
Transcribed by my voice-enabled refrigerator, please pardon chilly
messages.

Alex Clemmer

Apr 4, 2017, 8:30:51 PM
to kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Brian Grant, Rick Spencer
Ah, I meant to actually CC Rick. My mistake. :)


Brian Grant

Apr 4, 2017, 10:50:27 PM
to Alex Clemmer, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
Nits: These don't all have to be separate files, and we could improve support for multiple files in kubectl.


On top, there is NO tooling(auto-completion/static-analysis...) to help users.


There could be a lot more, but there is validation based on Swagger/OpenAPI and also kubectl explain. Also, some of the config solutions are generated from Swagger/OpenAPI, much as the python client library is. I'd love to see the generated OpenAPI spec made correct and complete -- reach out to SIG API machinery if you'd like to help.

Another building block that all these tools need to work is kubectl apply -- reach out to SIG CLI if you'd like to help.
 

Template only based solutions are failing to solve this issue, in fact it adds more complexity to gain re-usability.  


Brian initiated a discussion about Declarative application configuration on sig-apps


And the related doc, Whitebox COTS application management (shared with kubernetes-dev, SIG apps, and SIG cli).

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built. We need one for addon management, if nothing else, and Box has an implementation that they could perhaps open source.


(is there already a prototype?)


No prototype, at least not built by us. We don't have anyone available to work in this area.
 

and several projects are in development:



They are all trying to do the same with different (or not so different) approaches and instead of continuing separately I would like to see a converged effort, to work and design it as a group.


How can we progress on this topic ?


In SIG Config, we tried to encourage sharing of design approaches/ideas, use cases, examples, experience, opinions, etc. We didn't have critical mass then, but perhaps there is now. If there is sufficient interest, I'd start in SIG Apps and/or SIG CLI.

One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

gus...@gmail.com

Apr 5, 2017, 1:53:36 AM
to kubernetes-sig-apps, al...@heptio.com, kubernete...@googlegroups.com, mkar...@redhat.com, antoine...@coreos.com, ri...@bitnami.com
On Wednesday, 5 April 2017 04:50:27 UTC+2, Brian Grant wrote:
Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built. We need one for addon management, if nothing else, and Box has an implementation that they could perhaps open source.

It's almost trivially obvious, but I have https://github.com/anguslees/kubecfg-updater fwiw.  It is literally a shell loop that updates a git checkout and then runs kubectl apply.  Improvements / suggestions for what more needs to be done are welcome.
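In outline, such a loop might look like this (a sketch only -- the repo path, interval, and the SYNC_* override hooks are assumptions made here, not details of kubecfg-updater itself):

```shell
# One iteration: refresh the git checkout, then apply its manifests.
# The git/kubectl commands can be overridden via SYNC_GIT / SYNC_APPLY
# (an assumption of this sketch, so it can be exercised without a cluster).
sync_once() {
  ${SYNC_GIT:-git} -C "$1" pull --ff-only &&
  ${SYNC_APPLY:-kubectl} apply -f "$1/manifests/"
}

# The whole updater is then essentially:
#   while true; do sync_once /srv/config-repo; sleep 60; done
```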

(Note this simple approach works in my case partly because I add explicit namespace declarations to all my (generated) json - and enforce this in a unittest)

Earlier in the workflow I use jsonnet, some wrapper tools to expand the jsonnet, and do various client-side schema-validation, etc tests.  The review side is driven by github and jenkins.  I can try to publish / shrink-wrap some of that if people think any of it sounds useful to reuse.

 - Gus

Alex Clemmer

Apr 5, 2017, 2:11:11 AM
to gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

I'm not sure this chasm is as wide as you seem to think. Both projects attempt to lower the skill floor of getting started with Kubernetes; OpenCompose accomplishes this by creating a new, higher-level API that maps to the Kubernetes API, while `kube.libsonnet` accomplishes this by making the Kubernetes API easier to deal with as it exists. In fact I will go one step farther, and say that I think the goal of creating a higher-level API is actually benefitted by strong templating primitives.
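To make the contrast concrete, here is a purely illustrative jsonnet-style fragment -- the helper names are invented for this sketch and are not the actual kube.libsonnet API:

```jsonnet
// Illustrative only: invented helpers, not the real kube.libsonnet API.
local kube = import "kube.libsonnet";

{
  // Start from a library-provided default and merge in overrides;
  // jsonnet's `+:` composes the nested object rather than replacing it.
  deployment: kube.Deployment("webapp") {
    spec+: { replicas: 2 },
  },
  service: kube.Service("webapp"),
}
```

The point is that the output is still ordinary Kubernetes API objects; the library only removes repetition, rather than introducing a new higher-level API.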

Brian Grant

Apr 5, 2017, 2:17:55 AM
to Alex Clemmer, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
On Tue, Apr 4, 2017 at 11:11 PM, Alex Clemmer <al...@heptio.com> wrote:
One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

I'm not sure this chasm is as wide as you seem to think. Both projects attempt to lower the skill floor of getting started with Kubernetes; OpenCompose accomplishes this by creating a new, higher-level API that maps to the Kubernetes API, while `kube.libsonnet` accomplishes this by making the Kubernetes API easier to deal with as it exists. In fact I will go one step farther, and say that I think the goal of creating a higher-level API is actually benefitted by strong templating primitives.

I was specifically referring to stated goals:

OpenCompose:
The main goal for OpenCompose is to be an easy-to-use application/microservice definition that developers can use without learning many Kubernetes concepts. It should be very easy to write a simple application definition, and from there the tooling takes over.

kube.libsonnet:
pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written
 

Brian Grant

Apr 5, 2017, 2:26:26 AM
to Alex Clemmer, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer

Also, I'd be interested to hear user feedback on kube.libsonnet.

In the gitlab example, does the jsonnet implementation of the configuration do anything that the concrete API resource manifests do not?

 

 


Alex Clemmer

Apr 5, 2017, 3:01:41 PM
to Brian Grant, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
I'd love to talk about partner feedback we've gotten for `kube.libsonnet`, but before we do, I want to make sure that we satisfy Antoine's goal, which seems to be to focus on the consolidation efforts. :)

Antoine (and anyone else who'd like to give input): I'd love your feedback on the progress towards consolidation so far, as well as what you think is missing from the effort. What we've done so far should not be mistaken for the plan of record -- I just proceeded in the best way I knew how.

Rick Spencer

Apr 5, 2017, 3:08:57 PM
to Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand
Hi Alex,

I appreciate you looping me in. I think perhaps a good first step is a meeting of the minds in terms of what people have developed so far. To that end, Gus (added to To:) is working this week to move his jsonnet library to a Bitnami repo, and then write some kind of blog post to go along with the documentation. 

My straw-man proposal for a way forward: some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help achieve those goals. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?

I'd be happy to facilitate such a call, depending on the timing. Thoughts?

Cheers, Rick

Antoine Legrand

Apr 5, 2017, 8:48:29 PM
to Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis

Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though
we do need to figure out how to unify at least some of these efforts at some point.

Continuing work in parallel is essential; each tool is exploring different paths, which is great for getting a better view.

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built.

Yes, both (configuration and management) are parts of the project.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.
 

They target different use cases but, in my opinion, are complementary. If we agree on some technical designs and approaches, they can be developed separately and still be merged nicely on the user side. A random example:

  # Higher-level API (OpenCompose / kubectl generator like)

  svc, ingress, deployment = createService(image: "myapp", port: 80, domain: "myapp.example.com")

  # Extend if necessary (kube.libsonnet like)

  deployment.readinessProbe + probe.Http(port: 80, delay: 30, period: 10)

some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to dates did or did not help achieve that effort. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally between 9am and 10am PT works well across timezones). -- Antoine


Alex Clemmer

Apr 6, 2017, 2:55:55 AM
to Antoine Legrand, Rick Spencer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I agree that a round table sounds like a good idea. Let me know what I can do to help (organizationally, logistically, etc.)

Pradeepto Bhattacharya

Apr 6, 2017, 4:52:34 AM
to Antoine Legrand, Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Hi,

On Thu, Apr 6, 2017 at 6:18 AM, Antoine Legrand <antoine...@coreos.com> wrote:

Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though
we do need to figure out how to unify at least some of these efforts at some point.

Continuing work in parallel is essential; each tool is exploring different paths, which is great for getting a better view.

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built.

Yes, both(configuration / management) are parts of the project


Completely agree with Antoine and Brian on the above points.
 

some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to dates did or did not help achieve that effort. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally between 9am and 10am PT works well across timezones)


Sounds great! We would love to be part of this round table. It would be awesome if we can get this done sooner rather than later. How does sometime next week sound for all who would like to attend? Please note, I am in IST (GMT+5:30).


Regards,

Pradeepto
--
Pradeepto Bhattacharya

Pradeepto Bhattacharya

Apr 6, 2017, 4:56:03 AM
to Antoine Legrand, Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Forgot to mention, I am more than happy and willing to organise this or help this meeting happen in any way I can.

Pradeepto
--
Pradeepto Bhattacharya

Pradeepto Bhattacharya

Apr 6, 2017, 5:08:35 AM
to Brian Grant, Alex Clemmer, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
On Wed, Apr 5, 2017 at 11:47 AM, 'Brian Grant' via kubernetes-sig-cli <kubernete...@googlegroups.com> wrote:
On Tue, Apr 4, 2017 at 11:11 PM, Alex Clemmer <al...@heptio.com> wrote:
One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

I'm not sure this chasm is as wide as you seem to think. Both projects attempt to lower the skill floor of getting started with Kubernetes; OpenCompose accomplishes this by creating a new, higher-level API that maps to the Kubernetes API, while `kube.libsonnet` accomplishes this by making the Kubernetes API easier to deal with as it exists. In fact I will go one step farther, and say that I think the goal of creating a higher-level API is actually benefitted by strong templating primitives.

I was specifically referring to stated goals:

OpenCompose:
The main goal for OpenCompose is to be an easy-to-use application/microservice definition that developers can use without learning many Kubernetes concepts. It should be very easy to write a simple application definition, and from there the tooling takes over.

That is definitely our goal. We want OpenCompose to be simple to learn and use, almost intuitive - that includes both the configuration and the management. We would love the OpenCompose "language" to be robust, yet wouldn't want to make it yet another resource. Developer experience is of utmost importance. We would like to integrate with tools like IDEs etc.
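For readers who haven't seen it, an OpenCompose definition is a short, higher-level YAML file; a purely illustrative sketch (field names approximate the project's early drafts and may not match the actual schema):

```yaml
# Illustrative OpenCompose-style definition -- not the exact schema.
# The whole application is one short file; the tooling expands it into
# the underlying Deployment/Service/etc. objects.
version: "0.1-dev"
services:
- name: webapp
  replicas: 2
  containers:
  - image: example/webapp:1.0   # hypothetical image
    ports:
    - port: 80
```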
 
kube.libsonnet:
pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written

Having spoken to Alex a couple of days back, I understand and highly respect what he is trying to do with kube.libsonnet. We definitely have some overlaps in our goals.

I definitely can see a bunch of opportunities to collaborate, innovate and contribute.

Pradeepto
--
Pradeepto Bhattacharya

Rick Spencer

Apr 7, 2017, 12:26:05 PM
to Pradeepto Bhattacharya, Antoine Legrand, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I bet that some or many of us may be at DockerCon the week after next, and therefore some of us could meet face to face and dial in others for such a discussion. Thoughts? 

Alex Clemmer

Apr 7, 2017, 12:29:31 PM
to Rick Spencer, Pradeepto Bhattacharya, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I won't be there but I'm happy to dial in.

Rick Spencer

Apr 7, 2017, 12:43:02 PM
to Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand
On Wed, Apr 5, 2017 at 3:08 PM, Rick Spencer <ri...@bitnami.com> wrote:
Hi Alex,

I appreciate you looping me in. I think perhaps a good first step is a meeting of the minds in terms of what people have developed so far. To that end, Gus (added to To:) is working this week to move his jsonnet library to a Bitnami repo, and then write some kind of blog post to go along with the documentation. 


If anyone wants to take a look at how we are doing things, Gus has cleaned up the code a bit and moved it to here:

Feedback and comments welcome here or in issues.

Cheers, Rick
 

Pradeepto Bhattacharya

Apr 7, 2017, 12:54:43 PM
to Alex Clemmer, Rick Spencer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Same here, I won't be at DockerCon. I can dial in as well. 


Pradeepto

--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-apps" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-apps+unsub...@googlegroups.com.

To post to this group, send email to kubernetes-sig-apps@googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



--
Pradeepto Bhattacharya

jhuf...@box.com

unread,
Apr 12, 2017, 1:51:45 PM4/12/17
to kubernetes-sig-apps, al...@heptio.com, ri...@bitnami.com, antoine...@coreos.com, g...@bitnami.com, brian...@google.com, gus...@gmail.com, kubernete...@googlegroups.com, mkar...@redhat.com, Sam Ghods
This is a worthy goal and a good discussion. At Box, we've definitely struggled and continue to struggle with this issue. I'd love to help with requirements and comments on proposals whenever people need it. We've talked to Red Hat (Clayton, et al.), Google (Phil Wittrock / Brian), and Heptio (Alex and Joe) at various levels of depth about our currently cobbled together solution.

Just to set up some context, at Box we have three main constituencies that we are trying to support via templating, which I'd loosely categorize as:
  1. Service owners: They don't want to know jsonnet or even the k8s object model. They'd love to feed a docker image, env vars, config maps and secrets in and have a functioning service out the other side. We've largely failed to address this group's simplicity needs; they largely work by copying and pasting large kube API object templates for their app and tweaking them as appropriate. They mostly find this overwhelming, and we have a lot of divergence on best practices.
  2. Framework owners (i.e. developers who build the frameworks that microservices can leverage to quickly develop new services): This set of users is concerned with a lot of the pod itself, but doesn't really want to know about sidecars. They will set up templates for service owners which include things like command-line flags, default resource limits, how to run the program, possible env vars, paths for logging, etc.
  3. Infrastructure owners: These include cluster maintainers and other infrastructure providers in the form of either daemon sets or sidecars. Log tailers, service discovery proxies, and legacy dynamic configuration delivery are all owned at this level. These developers provide "pod parts" which can be integrated into pods configured by framework owners. We're aware that the latest practice is to dynamically inject these via pod presets; we have some concerns about losing the ability to statically validate our pods, but are open to it.
The artifacts from these three groups are all described using jsonnet-based kube API object templates, which are compiled down to JSON and applied via kubectl apply in a very fancy cron.

We largely follow these philosophical principles when designing our flows:
  1. Try to stay close to the kube API objects: Like Greenspun's tenth rule, we've noticed that any abstraction over the kube API objects tends to start simple and grow to become a bad version of the underlying kube API objects. We've largely given up on trying to abstract the API objects and instead present them to users with utility functions where appropriate. We may be moving to a model where framework owners provide templates and service owners just pass parameters, but we don't yet have enough experience with this to see whether it abstracts effectively. We don't like any of our answers here.
  2. Kube API objects as code: We manage all of our deployment configurations in a centralized git repo and will soon be pushing those definitions closer to the source code for the service. This allows service owners to easily introspect, modify and deploy their kubernetes configurations.
  3. Deploy as many changes as possible via a pipeline, ideally in an incremental way, affecting one pod at a time: We want individual changes made by the three constituencies above to be deployed via our CD pipeline in Jenkins. Service owners' changes tend to impact an individual deployment, whether it be a new Docker image or a new environment variable. We have about four main frameworks currently, so changes by framework owners tend to affect about half of our deployments, while infra changes tend to affect all deployments on the floor. Instead of having an infrastructure or framework change affect all live services at once, we'd prefer teams to deploy the changes through their pipeline to maximize our automation's ability to catch problems. This also allows those owners to deploy at their convenience so that they can detect and respond to any impact from the change that wasn't caught by automation, ideally before it progresses beyond a small number of pods.
  4. Service owners, framework owners, and infrastructure owners are fluid designations: Service owners don't want to care about log tailer sidecars and service discovery sidecars right up until the moment when they do. We would love to provide a facility that lets them ignore these details but can grow to allow them to specify their own sidecars if they need to. We haven't found a good way to do this, and it may be impossible. The best options we have at the moment are: use tightly parameterized templates if you can, and just deal with all of the k8s API objects if you can't. Heptio's kube.libsonnet improves things here, but makes people use jsonnet over plain-old-json.
  5. Favor sidecars over DaemonSets: One consequence of #3 and #4 is that we tend to like functionality bundled into a pod so that it can be replaced or deployed by some and not by others. One struggle here is how to compose a pod with main container(s) and 2-3 sidecars in a clean way. We could copy the preset model, but you can see the complexity leaking in even there (e.g. "volume mounts go into all containers, volumes go into the pod" is more complex than "append this sidecar to the array"). Additionally, a number of our sidecars need ConfigMaps to lay down files, which involves creating a top-level API object.
  6. Plain old JSON over a full programming language: Every time we've tried using jsonnet functionality more expansively (loops, function calls, etc.), we've ended up replacing it with copy-and-pasted JSON kube API objects. It is easier for our end users to understand and maps directly to the Kubernetes documentation. IDE support can potentially help if a language library is used, but my gut feeling is that JSON plus simple merge algorithms a la apply and presets will be more comprehensible for developers.

We have yet to find a solution that solves all these issues but would look forward to working with projects or teams that want to help tackle them.
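The sidecar-composition pain in point 5 above — the sidecar itself is a simple append, but its volumes and volumeMounts land in different places in the object — can be made concrete with a small sketch. This is a Python illustration with hypothetical helper names (`inject_sidecar`), not the behavior of any tool discussed in this thread:

```python
# Illustrative only: injecting a "pod part" touches several places in the
# pod spec, which is the complexity point 5 alludes to.
import copy

def inject_sidecar(pod_spec, sidecar, volumes=None, mounts=None):
    """Return a new pod spec with the sidecar appended and its volumes wired in."""
    spec = copy.deepcopy(pod_spec)
    # The easy part: append the sidecar container.
    spec.setdefault("containers", []).append(sidecar)
    # The harder part: volumes go into the pod...
    for v in volumes or []:
        spec.setdefault("volumes", []).append(v)
    # ...while volumeMounts must be merged into every container.
    for c in spec["containers"]:
        c.setdefault("volumeMounts", []).extend(copy.deepcopy(mounts or []))
    return spec

base = {"containers": [{"name": "app", "image": "example/app:1.0"}]}
result = inject_sidecar(
    base,
    sidecar={"name": "log-tailer", "image": "example/tailer:2.1"},
    volumes=[{"name": "logs", "emptyDir": {}}],
    mounts=[{"name": "logs", "mountPath": "/var/log/app"}],
)
```

Even this toy version has to make policy decisions (should every container really get the mount?), which is roughly where the preset model's complexity comes from.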



On Thu, Apr 6, 2017 at 4:51 AM, Pradeepto Bhattacharya <prad...@redhat.com> wrote:
Hi,

On Thu, Apr 6, 2017 at 6:18 AM, Antoine Legrand <antoine...@coreos.com> wrote:


some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help achieve them. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally between 9am and 10am PT works well across timezones).


Sounds great! We would love to be part of this round-table. It would be awesome if we can get this done sooner rather than later. How does sometime next week sound for all who would like to attend? Please note, I am in IST (GMT +5:30).






--
Pradeepto Bhattacharya

Rick Spencer

unread,
Apr 24, 2017, 10:45:42 AM4/24/17
to Pradeepto Bhattacharya, Alex Clemmer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Following up on this thread, 

I have PMed with a few folks here and we have set up a call for 9am PST tomorrow (April 25th). We scheduled it to be on a hangout, around the availability of a few people's schedules, but we are happy to have anyone else interested join us, of course. I will set the hangout to be public.

Please let me know if you would like to attend or have any other suggestions. We will follow up here with the notes from the call just in case others are interested.

Cheers, Rick



Joe Miller

unread,
Apr 24, 2017, 1:25:21 PM4/24/17
to Rick Spencer, Pradeepto Bhattacharya, Alex Clemmer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I have been following but not participating in this thread so far. Our org faces some of the same issues. I am interested in participating in the call although mostly as an observer, so I am happy to defer to others who would be more active participants if there's an issue with a limit on the number of users on the hangout.


Rick Spencer

unread,
Apr 24, 2017, 1:44:23 PM4/24/17
to Joe Miller, Pradeepto Bhattacharya, Alex Clemmer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Hi Joe,

I went ahead and sent you an invite. We're planning to use this hangout:

We'll be accepting every request to join the hangout, so anyone can just show up if they want.

Cheers, Rick



Rick Spencer

unread,
Apr 24, 2017, 6:58:43 PM4/24/17
to Joe Miller, Pradeepto Bhattacharya, Alex Clemmer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Hello all,

In order to help the roundtable go more easily tomorrow, I gathered some input and put together a straw man proposal for the agenda. Please tell me what you think. Deletions, additions, modifications, all most welcome.

# Goals
Priority 1 Goal: identify subsets of attendees who wish to work together on common code bases.
Priority 2 Goal: identify next steps and success metrics for those steps.

If we achieve the first goal, I think that the time will have been very well spent.

# Landscape
 * Language
  * Some projects are simplifying yaml
  * Some projects are using jsonnet
 * Users
  * Some projects are focused on simplifying first use
  * Some projects are focused on sustainable workflow for complex deployments
  * Some projects are focused on chart maintainers

In general, the YAML approaches are targeting first use, and the jsonnet approaches are targeting sustainable workflow, though Alex believes jsonnet can serve both.

Assuming these dimensions even make sense, I propose that we start by putting the efforts into that matrix.

# Upstream
 * I think we should talk a bit about our vision for Helm (or really Tiller in my view, but we can discuss that)
 * How would such work impact Helm repos, if at all?

# Tooling
 * If we don't use Helm/Tiller, then there will still need to be some code to evaluate and apply changes. Should we standardize on that? Perhaps such a thing, in addition to a Tiller plugin, would be useful.
 * Alex has been doing some interesting tooling work to make jsonnet easier to use, we may be able to talk him into doing a demo.

# Getting to Work
 * This part will be about identifying groups that would work together, and possible next steps and success metrics for those groups.
 
I'll create a shared Google Doc before the call so that we can collaborate there as well.

Cheers, Rick



Brian Grant

unread,
Apr 25, 2017, 12:46:32 AM4/25/17
to jhuf...@box.com, kubernetes-sig-apps, Alex Clemmer, Rick Spencer, Antoine Legrand, g...@bitnami.com, Angus Lees, kubernete...@googlegroups.com, Michalis Kargakis, Sam Ghods, kubernetes-sig-service-catalog
+SIG Service Catalog, which is trying to address some of these challenges


Charlie Drage

unread,
Apr 25, 2017, 8:08:26 AM4/25/17
to Brian Grant, jhuf...@box.com, kubernetes-sig-apps, Alex Clemmer, Rick Spencer, Antoine Legrand, g...@bitnami.com, Angus Lees, kubernete...@googlegroups.com, Michalis Kargakis, Sam Ghods, kubernetes-sig-service-catalog
Just as a reminder to everyone, it's 0900 PST / 1200 EST on https://hangouts.google.com/hangouts/_/bitnami.com/k8s-config?authuser=0

Rick Spencer

unread,
Apr 25, 2017, 5:06:11 PM4/25/17
to kubernete...@googlegroups.com, kubernetes-sig-service-catalog, kubernetes-sig-apps, Brian Grant, John Huffaker, Alex Clemmer, Antoine Legrand, Angus Lees, Angus Lees, Michalis Kargakis, Sam Ghods, Charlie Drage
We had the hangout as discussed today. There was a lot of interesting and useful technical discussion, but this email summarizes the organizational aspects that arose. Technical discussions to follow, but there are a lot of technical details captured in the notes [0].

In my view, there were two important sets of outcomes:

Outcome 1: We roughly agreed on the following 3 positions:
1. We will settle on jsonnet because it is the best thing that we have, but we can and will change to a different templating language if one appears that is better suited.
2. We will strive to create a jsonnet library that can be used to create an accessible first time use experience, but that can be also expanded to support complex customer deployments. The way to accomplish this is by always building from the Kubernetes API.
3. We will develop “tooling” (such as IDE integration) to make jsonnet and the base-class library easier to use.

There was agreement that we could all move forward with these positions.

Outcome 2: We identified core areas of work to start on immediately.

1. Design and build a jsonnet library that is easy to use, understand, and get started with, but is demonstrably easy to expand all the way to the K8s API. This will start with a phase to compare individual projects and agree on the general approach.
(Gus, Alex, John, Pradeepto, Antoine)

2. Work on "IDE Integration" for jsonnet (statement completion, hover docs, syntax highlighting, etc...)
(Alex/vscode, Pradeepto/che)

3. Solution for "packaging"/sharing jsonnet code.
(Alex, Gus, Antoine)

4. Packaging jsonnet itself for distros.
(Gus/Debian, Pradeepto/RPM)

Additionally, we had two smaller tasks:

5. Create a github repo for collaboration, preferably under the kubernetes umbrella
(Antoine, Rick)

6. Name the project
(all)

I did not manage to get a census of who attended the call. There were 15 people at the peak, I believe.

Please let me know if I left out something important or got something wrong.

Cheers, Rick



Brandon Philips

unread,
Apr 25, 2017, 7:31:28 PM4/25/17
to Rick Spencer, kubernete...@googlegroups.com, kubernetes-sig-service-catalog, kubernetes-sig-apps, Brian Grant, John Huffaker, Alex Clemmer, Antoine Legrand, Angus Lees, Angus Lees, Michalis Kargakis, Sam Ghods, Charlie Drage
On Tue, Apr 25, 2017 at 2:06 PM Rick Spencer <ri...@bitnami.com> wrote:
2. We will strive to create a jsonnet library that can be used to create an accessible first time use experience, but that can be also expanded to support complex customer deployments. The way to accomplish this is by always building from the Kubernetes API.

Hrm, I don't follow this. Can you explain a bit more?
 
1. Design and build a jsonnet library that is easy to use and understand and get started, but is demonstrably easy to expand all the way to the K8s API. This will start with a phase to compare individual projects and agree on the general approach.
(Gus, Alex, John, Pradeepto, Antoine)

A jsonnet library for Go? Or a library of base classes for Kubernetes?
 
4. Packaging jsonnet itself for distros.
(Gus/Debian, Pradeepto/RPM)

What about homebrew, etc?
 
Thank You,

Brandon

Alex Clemmer

unread,
Apr 25, 2017, 9:01:30 PM4/25/17
to Brandon Philips, Rick Spencer, kubernete...@googlegroups.com, kubernetes-sig-service-catalog, kubernetes-sig-apps, Brian Grant, John Huffaker, Antoine Legrand, Angus Lees, Angus Lees, Michalis Kargakis, Sam Ghods, Charlie Drage
I'll take a crack at answering some of these. I think if I screw
something up Rick will be sure to tell you. :)

>> 2. We will strive to create a jsonnet library that can be used to create
>> an accessible first time use experience, but that can be also expanded to
>> support complex customer deployments. The way to accomplish this is by
>> always building from the Kubernetes API.
>
>
> Hrm, I don't follow this. Can you explain a bit more?

I think this goal makes more sense in context. Basically, there are
currently two prevalent ways to write k8s app configurations: you can
use the raw k8s-API-conformant YAML, or you can use simpler,
higher-level APIs that map to the k8s API, like (e.g.) Kompose,
OpenCompose, compose2kube, and so on.

In my estimation (which is somewhat ham-handed, but I hope not too far
from the truth), the first approach has the advantage of completeness
and scalability as your app config becomes more complex, but is dense
and harder to learn; the second makes it easier to get started because
you have to know fewer things to get something up and running, but the
trade-off is that you have to learn the concepts later, and at least
understand the mapping from the new API to the k8s API as you demand
more complex configurations, if not (in the case of, e.g.,
compose2kube) simply port it to the k8s API later. (I hope this is a
fair characterization, but perhaps Pradeepto or other RedHat folks
have more opinions about this.)

Recognizing that both these approaches do have their advantages, this
work item is targeted at enabling both approaches, by simply making it
easier to deal with the k8s API objects themselves, in all their
complexity. The thought is that lowering the skill floor of using the
k8s API, and developing better (Jsonnet-based) tooling around it both
helps people develop higher-level APIs (like OpenCompose), while also
making it easier for people using the raw API to do their jobs.

If you are curious about how we are planning to measure the success of
these goals, I'm happy to talk about that as well.

>> 1. Design and build a jsonnet library that is easy to use and understand
>> and get started, but is demonstrably easy to expand all the way to the K8s
>> API. This wills tart with a phase to compre individual projects and agree on
>> the general approach.
>> (Gus, Alex, John, Pradeepto, Antoine)
>
>
> A jsonnet library for Go? Or a library of base classes for Kubernetes?

A Jsonnet library for Go. The terminology around having "base classes"
in the Jsonnet library comes from the fact that Jsonnet uses (and I
say this with love) the hipster OOP idea of a _mixin_.


I hope that makes things clearer. Let me know if you have follow-up questions.
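For readers unfamiliar with the mixin idea, here is a loose analogy in Python rather than Jsonnet. Jsonnet's real semantics (late binding, `self`/`super`, `+:` vs `:` field merging) are richer than any short analogy; this only sketches the core "overlay an object fragment onto a base, overlay wins on conflicts" idea, and the `with_replicas` helper is hypothetical:

```python
# Loose Python analogy for Jsonnet-style object composition: a "mixin" is a
# fragment that is deep-merged over a base object, with the mixin winning on
# conflicts. Real Jsonnet is richer than this sketch.
def mixin(base, overlay):
    out = dict(base)
    for k, v in overlay.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = mixin(out[k], v)  # recurse into nested objects
        else:
            out[k] = v  # overlay wins on leaves
    return out

deployment = {"kind": "Deployment",
              "spec": {"replicas": 1, "template": {"spec": {"containers": []}}}}

# Hypothetical mixin "constructor": returns a fragment to merge over a base.
with_replicas = lambda n: {"spec": {"replicas": n}}

result = mixin(deployment, with_replicas(3))
```

The appeal is that small, composable fragments can be layered over complete, raw API objects, rather than hiding those objects behind a new schema.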

Angus Lees

unread,
Apr 25, 2017, 11:52:47 PM4/25/17
to Alex Clemmer, Brandon Philips, Rick Spencer, kubernete...@googlegroups.com, kubernetes-sig-service-catalog, kubernetes-sig-apps, Brian Grant, John Huffaker, Antoine Legrand, Michalis Kargakis, Sam Ghods, Charlie Drage
On Wed, 26 Apr 2017 at 11:01 Alex Clemmer <al...@heptio.com> wrote:
>> 1. Design and build a jsonnet library that is easy to use and understand
>> and get started, but is demonstrably easy to expand all the way to the K8s
>> API. This will start with a phase to compare individual projects and agree on
>> the general approach.
>> (Gus, Alex, John, Pradeepto, Antoine)
>
> A jsonnet library for Go? Or a library of base classes for Kubernetes?

A Jsonnet library for Go. The terminology around having "base classes"
in the Jsonnet library comes from the fact that Jsonnet uses (and I
say this with love) the hipster OOP idea of a _mixin_.

To be super-clear: this is a library *of jsonnet files* that can go from some high-level description (like the opencompose representation) to regular k8s API objects in jsonnet.  Aiui, this has already been done for opencompose, so the work here is more about exploring the ergonomics of combining that jsonnet code with something "lower level" like Alex's mixins.

Again, the hope here is that by expanding to the "true" k8s objects (in jsonnet), we end up with a common representation that we can share/manipulate across jsonnet libraries and hopefully cover both ends of the usability spectrum with a common toolchain/workflow.
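As a language-neutral illustration of that "high-level description expands to the true k8s objects" idea — sketched in Python for brevity rather than Jsonnet, with deliberately minimal field coverage and an invented `expand` helper (the API group/version shown is the Deployment version current at the time):

```python
# Minimal sketch of the "high-level description -> raw k8s API objects"
# expansion described above. Illustrative only: a real library would be
# written in Jsonnet and cover far more of the API surface.
def expand(app):
    labels = {"app": app["name"]}
    deployment = {
        "apiVersion": "apps/v1beta1", "kind": "Deployment",
        "metadata": {"name": app["name"], "labels": labels},
        "spec": {
            "replicas": app.get("replicas", 1),
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": app["name"], "image": app["image"],
                    "ports": [{"containerPort": app["port"]}]}]},
            },
        },
    }
    service = {
        "apiVersion": "v1", "kind": "Service",
        "metadata": {"name": app["name"]},
        "spec": {"selector": labels,
                 "ports": [{"port": 80, "targetPort": app["port"]}]},
    }
    return [deployment, service]

objs = expand({"name": "web", "image": "example/web:1.2", "port": 8080, "replicas": 3})
```

Because the output is ordinary API objects, users who outgrow the high-level form can drop down and edit the expanded objects directly — that's the "common representation" being aimed for.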

*Separately* we touched on how great it would be if there was a golang implementation of the jsonnet engine and some other improvements we'd like to make to the existing jsonnet upstream codebase (eg: expose AST).  This was further down our list of priorities at this early stage (aiui) and was separate to the above action item.

On Tue, Apr 25, 2017 at 4:31 PM, Brandon Philips
<brandon...@coreos.com> wrote:
> On Tue, Apr 25, 2017 at 2:06 PM Rick Spencer <ri...@bitnami.com> wrote:
>> 4. Packaging jsonnet itself for distros.
>> (Gus/Debian, Pradeepto/RPM)
>
> What about homebrew, etc?

I think it's already in homebrew, but yes absolutely: contributions welcome!  The observation was that the regular upstream jsonnet project isn't packaged for the major Linux distros, and this is one additional hurdle to asking people to use jsonnet.  By coincidence we had some Debian and Redhat folks on the call - but if anyone else is in a position to package up a straightforward C/C++ library with minimal dependencies (iirc it has python bindings in the same upstream codebase too) for their favourite code sharing repository, we would be grateful if they would do so.

 - Gus

Alex Clemmer

unread,
Apr 26, 2017, 12:08:07 AM4/26/17
to Angus Lees, Brandon Philips, kubernete...@googlegroups.com, kubernetes-sig-apps, Michalis Kargakis, Rick Spencer, Brian Grant, John Huffaker, Charlie Drage, Antoine Legrand, kubernetes-sig-service-catalog, Sam Ghods
Oh, shoot. I see now that I wrote "a jsonnet library for go", which is not what I intended. Gus's explanation is far clearer. Sorry. 

--

Transcribed by my voice-enabled refrigerator, please pardon chilly messages.



al...@cobrowser.net

unread,
Apr 29, 2017, 9:45:41 AM4/29/17
to kubernetes-sig-apps
Generating yamls is great and we have a tool for that: very simple yaml files with placeholders and a bit of sed. So any of these approaches would be a big step forward, but would also require a bigger learning curve.

Use case: In this particular project I needed to replicate application stacks across different clusters to achieve a private version of the multi-tenant stack. So we needed a tool to keep most of the yamls the same, with small differences in things like ENV vars. Every environment has a 'manifest' that describes the branch/commit/tag of each container in the stack. CI/CD propagates a commit from the testing manifest to the staging and then prod manifests, when tests succeed.

In my experience, generating the files happens infrequently, at the beginning of a new part of the stack. I don't mind doing 'some' manual work to set up a config that works across all the different clusters. What I need multiple times a day is a way to take this manifest, compare it to the cluster, and update what is necessary. We do this with an 'apply.rb' that interrogates the cluster for the current state, compares it with the manifest and deploys a new container if needed. TODO: also take yaml changes into consideration and kubectl apply those. 
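The compare-and-update step described above amounts to a small diff between desired and live state. A rough sketch (hypothetical data shapes and function name, not the actual apply.rb, which talks to the cluster API and shells out to kubectl):

```python
# Rough sketch of the manifest-vs-cluster comparison described above.
# Both arguments map deployment name -> desired/observed image reference.
def plan_updates(manifest, live):
    """Return the deployments whose image must change (or be created)."""
    updates = {}
    for name, desired_image in manifest.items():
        if live.get(name) != desired_image:
            updates[name] = desired_image  # new deployment or image drift
    return updates

manifest = {"web": "example/web:abc123", "worker": "example/worker:def456"}
live = {"web": "example/web:abc123", "worker": "example/worker:000111"}
changes = plan_updates(manifest, live)
```

Extending this to arbitrary yaml changes (the TODO) is essentially what kubectl apply's three-way merge does, which is why folding it into the same loop is attractive.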

I was trying to find out: do any of these tools take this extra step, or is that out of scope?


On Tuesday, April 4, 2017 at 4:27:11 PM UTC+2, Antoine Legrand wrote:
Hi everyone,


At Kubecon, I have talked to many users, I had several stories about engineers who are trying to get internal adoption but the overall complexity was too high to convince their team/company.

The sig-cluster-lifecycle initiated kubeadm to dramatically simplify installation and cluster management. Approved, discussed, and developed with the SIG members, kubeadm is a success and is reaching its goals.


I would like to see a similar project on the App side.


Writing application configuration isn’t easy; over the years several new resources have appeared, and more will keep arriving (for good reason).
To get something as simple as a web-app, users have to write many long yaml files: ‘deployment.yaml’ / ‘svc.yaml’ / ‘ingress.yaml’ / ‘network-policy.yaml’ / ‘configmap.yaml’ / ‘secret.yaml’, ‘db-deployment.yaml’, ‘db-pvc.yaml’, ‘db-secret.yaml’ …


On top of that, there is NO tooling (auto-completion/static-analysis...) to help users.

Template-only solutions fail to solve this issue; in fact, they add more complexity to gain re-usability.


Brian initiated a discussion about Declarative application configuration on sig-apps (is there already a prototype?) and several projects are in development:

They are all trying to do the same thing with different (or not so different) approaches, and instead of continuing separately I would like to see a converged effort: to work on it and design it as a group.


How can we progress on this topic?



Antoine



Michail Kargakis

Apr 29, 2017, 10:30:59 AM
to al...@cobrowser.net, kubernetes-sig-apps
On Sat, Apr 29, 2017 at 3:45 PM, <al...@cobrowser.net> wrote:
Generating yamls is great and we have a tool for that: very simple yaml files with placeholders and a bit of sed. So any of these approaches would be a big step forward, but would also require a steeper learning curve.


We have kubectl commands that help with generating manifests (kubectl run, kubectl expose, kubectl autoscale, kubectl create {deployment, configmap, and others}). I would love to see more contributions on that front.
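For example, the generators can emit a starting manifest without touching the cluster (a sketch; the resource names and image are illustrative):

```shell
# Generate a Deployment manifest locally instead of writing it by hand.
kubectl create deployment web --image=example.com/web:v1 --dry-run -o yaml > deployment.yaml

# Derive a Service for it from the file, again without hitting the server.
kubectl expose -f deployment.yaml --port=80 --dry-run -o yaml > svc.yaml
```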
 
Use case: In this particular project I needed to replicate application stacks across different clusters to achieve a private version of the multi-tenant stack. So we needed a tool to keep most of the yamls the same, with small differences in things like ENV vars. Every environment has a 'manifest' that describes the branch/commit/tag of each container in the stack. CI/CD propagates a commit from the testing manifest to the staging and then prod manifests when tests succeed.


This seems to me like a use-case for exporting manifests. We have preliminary support for that via `kubectl get --export`, but more work needs to be done [1][2][3][4][5]. Hopefully [6] will address most, if not all, of the existing issues.

[1] https://github.com/kubernetes/kubernetes/issues/24855
[2] https://github.com/kubernetes/kubernetes/issues/21582
[3] https://github.com/kubernetes/kubernetes/issues/19501
[4] https://github.com/kubernetes/kubernetes/issues/33767
[5] https://github.com/kubernetes/kubernetes/issues/41880
[6] https://github.com/kubernetes/community/pull/123
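To illustrate, the current export flow looks roughly like this (the deployment name and target context are illustrative):

```shell
# Dump a live object with cluster-populated fields (status, UIDs, etc.)
# stripped, so it can be re-created elsewhere.
kubectl get deployment web -o yaml --export > web.yaml

# Re-apply it against another cluster.
kubectl --context=other-cluster apply -f web.yaml
```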

In my experience, generating the files happens infrequently, at the beginning of a new part of the stack. I don't mind doing 'some' manual work to set up a config that works across all the different clusters. What I need multiple times a day is a way to take this manifest, compare it to the cluster, and update what is necessary. We do this with an 'apply.rb' that interrogates the cluster for the current state, compares it with the manifest, and deploys a new container if needed. TODO: also take yaml changes into consideration and kubectl apply those.

I was trying to find out: do any of these tools take this extra step, or is that something out of scope?



--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-apps" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-apps+unsub...@googlegroups.com.

To post to this group, send email to kubernetes-sig-apps@googlegroups.com.

Angus Lees

Apr 30, 2017, 8:19:40 PM
to al...@cobrowser.net, kubernetes-sig-apps
On Sat, 29 Apr 2017 at 23:45 <al...@cobrowser.net> wrote:
Generating yamls is great and we have a tool for that: very simple yaml files with placeholders and a bit of sed. So any of these approaches would be a big step forward, but would also require a steeper learning curve.

Use case: In this particular project I needed to replicate application stacks across different clusters to achieve a private version of the multi-tenant stack. So we needed a tool to keep most of the yamls the same, with small differences in things like ENV vars. Every environment has a 'manifest' that describes the branch/commit/tag of each container in the stack. CI/CD propagates a commit from the testing manifest to the staging and then prod manifests when tests succeed.

In my experience, generating the files happens infrequently, at the beginning of a new part of the stack. I don't mind doing 'some' manual work to set up a config that works across all the different clusters. What I need multiple times a day is a way to take this manifest, compare it to the cluster, and update what is necessary. We do this with an 'apply.rb' that interrogates the cluster for the current state, compares it with the manifest, and deploys a new container if needed. TODO: also take yaml changes into consideration and kubectl apply those.

I was trying to find out: do any of these tools take this extra step, or is that something out of scope?

The nice thing about capturing the full configuration in files (however those files are maintained) is that you can always just replace what's in k8s with those files, and then leave it up to k8s to work out whether there has actually been any change since the last version. So: unless I have misunderstood something about your situation, I think you're overthinking it, and you should just always run `kubectl apply` (or some equivalent) without looking at what's on the server first.

Some existing tools that do this:
- Our very simple shell script: https://github.com/bitnami/kube-manifests/blob/master/tools/deploy.sh and the nearby kubecfg.sh (basically just a recursive `jsonnet | kubectl apply`). We run this as a "push" from Jenkins to each of our clusters on every commit to our config repo.
- https://hub.docker.com/r/anguslees/kubecfg-updater/ does a periodic "pull" version of the above from a git repo and pushes to the local cluster, if you prefer that permissions model. kubecfg-updater is *real* simple - Box's https://github.com/box/kube-applier looks like a more featureful replacement.
- A PoC replacement for something wrapping `kubectl apply` is https://github.com/anguslees/kubecfg - it adds a very limited set of features over the naive `kubecfg.sh`, but notably it (deliberately) does a regular json-patch rather than a strategic-merge, so a few of the corner cases are different.

These are pointers to tools I've been directly involved with, and I don't mean to limit it to just those. My point is: yes, a number of tools do what you are describing by just pushing everything to k8s.
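The core of that loop really is this small (a sketch; the config/ layout is just how our repo happens to be arranged):

```shell
# Render every jsonnet file and push the result; the server works out
# whether anything actually changed since the last apply.
for f in config/*.jsonnet; do
  jsonnet "$f" | kubectl apply -f -
done
```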

The above *assumes the config files are a single source of truth*. If you're trying to mash together files and live config, and you have some custom policy that determines what should override what at various points in the lifecycle, then suddenly you need to write a custom tool to reflect your highly custom policy. (I kind of see this as a "Doctor, it hurts when I ..." problem, however: just don't do that :P From my experience, you can remove a lot of complexity by making your config/git repo the single source of truth and modifying your other flows to interact via that git repo rather than banging on the k8s API directly.)

 - Gus


Angus Lees

Apr 30, 2017, 8:31:45 PM