Simplify k8s application configuration/management


Antoine Legrand

Apr 4, 2017, 10:27:11 AM
to kubernete...@googlegroups.com
Hi everyone,


At KubeCon, I talked to many users and heard several stories about engineers who are trying to get internal adoption, but the overall complexity was too high to convince their team/company.

The sig-cluster-lifecycle initiated kubeadm to dramatically simplify installation and cluster management. Approved, discussed, and developed with the SIG members, kubeadm is a success and is reaching its goals.


I would like to see a similar project on the App side.


Writing application configuration isn’t easy; over the years several new resources have appeared, and that will continue (for the better).
To get something as simple as a web app, users have to write many long YAML files: ‘deployment.yaml’ / ‘svc.yaml’ / ‘ingress.yaml’ / ‘network-policy.yaml’ / ‘configmap.yaml’ / ‘secret.yaml’ / ‘db-deployment.yaml’ / ‘db-pvc.yaml’ / ‘db-secret.yaml’ …
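For a sense of scale, just the first two of those files for a trivial web app already look something like this (abridged, with invented names):

```yaml
# deployment.yaml (abridged)
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.0
        ports:
        - containerPort: 80
---
# svc.yaml (abridged)
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
```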


On top of that, there is NO tooling (auto-completion, static analysis, ...) to help users.

Template-only solutions fail to solve this issue; in fact, they add more complexity to gain re-usability.
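A toy sketch of why (illustrative Python only, invented names): splicing values into text can silently produce a document you didn't mean, while generating structured objects leaves escaping to the serializer:

```python
import json

# Text templating: the value is spliced into the document blindly.
template = "metadata:\n  name: {name}\n  annotations:\n    note: {note}\n"
rendered = template.format(name="web", note="contains: a colon")
# 'note: contains: a colon' is not the YAML we meant -- the value
# needed quoting, and the template has no way to know that.

# Structured generation: build the object, then serialize. Escaping
# is the serializer's job, and the object can be validated as data.
manifest = {
    "metadata": {
        "name": "web",
        "annotations": {"note": "contains: a colon"},
    }
}
print(json.dumps(manifest, indent=2))
```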


Brian initiated a discussion about Declarative application configuration on sig-apps (is there already a prototype?) and several projects are in development:

They are all trying to do the same thing with different (or not so different) approaches, and instead of continuing separately I would like to see a converged effort, designed and built as a group.


How can we progress on this topic?



Antoine



Pradeepto Bhattacharya

Apr 4, 2017, 10:35:49 AM
to Antoine Legrand, kubernete...@googlegroups.com
Hello everyone

Thank you, Antoine, for initiating this discussion and for this detailed report. There were very fruitful discussions at KubeCon and the SIG Apps meeting.

I can tell you that we (I speak for my group in Red Hat) are definitely interested in this project and the problem space. We have had related discussions about this problem in the past with Brian, Craig and others. Also we presented the idea at Kubernetes Dev Sprint. OpenCompose as it stands now is based on those discussions.

We honestly don't know if that is the correct solution. We are working hard on it and we plan to do regular demos at the Kubernetes community meeting and the sig-apps meeting. I have requested a demo slot in the agenda document for a future meeting. I will quote Craig here - "let's kick the ball and see where it goes".

Having said that, I really like what Heptio has done with kube.libsonnet. But I am worried about jsonnet. I understand its power, but it will almost force people to learn another language. I really wish we could find a middle ground between YAML and jsonnet.
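For context, a tiny invented example (not from kube.libsonnet itself) of the kind of reuse jsonnet buys over plain YAML -- and of the extra syntax that worries me:

```jsonnet
// A parameterized Deployment, defined once and stamped out twice.
local deployment(name, image) = {
  apiVersion: "extensions/v1beta1",
  kind: "Deployment",
  metadata: { name: name },
  spec: { template: { spec: { containers: [
    { name: name, image: image },
  ] } } },
};

{
  "web.json": deployment("web", "example/web:1.0"),
  // A one-off tweak without copy/paste: the `+:` mixin syntax
  // merges into the nested object instead of replacing it.
  "worker.json": deployment("worker", "example/worker:1.0") + {
    spec+: { template+: { spec+: { terminationGracePeriodSeconds: 120 } } },
  },
}
```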

Also, I would be a bit worried about discussing this for too long. It seems like a problem that many have been thinking about and discussing, and that many have tried to solve in their own ways. My second wish would be to get this done. I don't think there will be any solution that works 100% for all the use cases, and we should always keep that in mind.

What would be the right forum for this discussion?

Let's kick the ball and see where it goes.

Regards,

Pradeepto

--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-apps" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-apps+unsub...@googlegroups.com.
To post to this group, send email to kubernetes-sig-apps@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-apps/CADkr4xk4%2Bxgx%3DHBoLLeDfYT2Cii8YMPm5XxMN7Dr9wsgKhtcew%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



--
Pradeepto Bhattacharya

Michail Kargakis

Apr 4, 2017, 10:51:43 AM
to Antoine Legrand, kubernete...@googlegroups.com
Worth noting that kubectl generators are trying to achieve the same goals, i.e. make it easy for developers to get started without having to hand-craft YAML files.

For example:

kubectl run nginx --image=nginx --port=80 --expose

will create a Deployment and a Service.

We also have a bunch of kubectl create subcommands:

kubectl create deployment
kubectl create configmap
kubectl create secret

It would be great if the community came up with a solution that leverages the existing tools built into the core. The docs need more love, and proposals to make the generators better are more than welcome!
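A quick way to see exactly what a generator emits, without creating anything on the cluster, is the `--dry-run -o yaml` combination (a sketch, assuming a configured kubectl; output elided):

```shell
# Print the Deployment + Service that `run --expose` would create,
# instead of sending them to the API server.
kubectl run nginx --image=nginx --port=80 --expose --dry-run -o yaml

# The create subcommands accept the same flags:
kubectl create configmap app-config --from-literal=key=value --dry-run -o yaml
```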

Brian Grant

Apr 4, 2017, 5:26:19 PM
to Michail Kargakis, Antoine Legrand, kubernetes-sig-apps, kubernete...@googlegroups.com
+SIG CLI

More comments later, but I wanted to loop in SIG CLI.

Note that there have been a number of requests for declarative interfaces to the generators, especially run, create secret, and create configmap.






Alex Clemmer

Apr 4, 2017, 8:30:07 PM
to Brian Grant, Michail Kargakis, Antoine Legrand, kubernetes-sig-apps, kubernete...@googlegroups.com
> They are all trying to do the same with different (or not so different)
> approaches and instead of continuing separately I would like to see a
> converged effort, to work and design it as a group.
>
>
> How can we progress on this topic ?

Our (i.e., Heptio's) approach has been to:
  1. pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written, and
  2. reach out to stakeholders of related projects to build consensus that the goals Antoine mentions are worth pursuing, and to see where collaboration makes sense.
I think it is not controversial to say that none of the solutions Antoine mentions will be a silver bullet for all use cases (certainly this is true of `kube.libsonnet`), which is why (2) is an important goal for the group.

To this end, we met with Rick Spencer (who I am CC'ing here) and his team at Bitnami last week to begin the formation of a sort of "coalition of the willing" to address these pain points. We've also talked to Pradeepto Bhattacharya (who works on OpenCompose), Fedor Korotkov (who has a Kotlin DSL[1] for the Kubernetes API), and we've reached out to William Butcher (who works on Helm).

Speaking only for myself, it seems to me that this is already fertile ground for collaboration. To me, people seem more interested in solving the problem than in promoting themselves, so I am optimistic that we can find a way to direct these efforts productively.

In my experience, these conversations do seem to go better when there is a specific technical artifact to discuss, so my proposal for how to move forward is for individual teams to continue the ongoing consolidation effort, and then regroup with sig-apps when more concrete progress has been made. In particular, I think it's worth blocking off time in the sig-apps meeting to talk specifically about the consolidated effort to build a solution that addresses these problems -- I'm happy to do this myself, or to have anyone in the "coalition" do it instead.

Let me know what you all think -- I'm happy to talk more if people have input.




--

__
Transcribed by my voice-enabled refrigerator, please pardon chilly
messages.

Alex Clemmer

Apr 4, 2017, 8:30:51 PM
to kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Brian Grant, Rick Spencer
Ah, I meant to actually CC Rick. My mistake. :)


Brian Grant

Apr 4, 2017, 10:50:27 PM
to Alex Clemmer, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
Nits: These don't all have to be separate files, and we could improve support for multiple files in kubectl.
 


On top of that, there is NO tooling (auto-completion, static analysis, ...) to help users.


There could be a lot more, but there is validation based on Swagger/OpenAPI and also kubectl explain. Also, some of the config solutions are generated from Swagger/OpenAPI, much as the python client library is. I'd love to see the generated OpenAPI spec made correct and complete -- reach out to SIG API machinery if you'd like to help.

Another building block that all these tools need to work is kubectl apply -- reach out to SIG CLI if you'd like to help.
 

Template-only solutions fail to solve this issue; in fact, they add more complexity to gain re-usability.


Brian initiated a discussion about Declarative application configuration on sig-apps


And the related doc, Whitebox COTS application management (shared with kubernetes-dev, SIG apps, and SIG cli).

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built. We need one for addon management, if nothing else, and Box has an implementation that they could perhaps open source.


(is there already a prototype?)


No prototype, at least not built by us. We don't have anyone available to work in this area.
 

and several projects are in development:



They are all trying to do the same thing with different (or not so different) approaches, and instead of continuing separately I would like to see a converged effort, designed and built as a group.


How can we progress on this topic?


In SIG Config, we tried to encourage sharing of design approaches/ideas, use cases, examples, experience, opinions, etc. We didn't have critical mass then, but perhaps there is now. If there is sufficient interest, I'd start in SIG Apps and/or SIG CLI.

One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

gus...@gmail.com

Apr 5, 2017, 1:53:36 AM
to kubernetes-sig-apps, al...@heptio.com, kubernete...@googlegroups.com, mkar...@redhat.com, antoine...@coreos.com, ri...@bitnami.com
On Wednesday, 5 April 2017 04:50:27 UTC+2, Brian Grant wrote:
Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built. We need one for addon management, if nothing else, and Box has an implementation that they could perhaps open source.

It's almost trivially obvious, but I have https://github.com/anguslees/kubecfg-updater fwiw.  It is literally a shell loop that updates a git checkout and then runs kubectl apply.  Improvements / suggestions for what more needs to be done are welcome.
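In case it saves anyone a click, the whole thing is morally equivalent to this (paths and interval invented; the actual tool may differ):

```shell
#!/bin/sh
# Poll a git checkout of manifests and apply whatever is at HEAD.
# Assumes everything under $REPO_DIR is meant for `kubectl apply`,
# each file with an explicit namespace (see the note below).
REPO_DIR=/srv/k8s-manifests
while true; do
  git -C "$REPO_DIR" pull --ff-only
  kubectl apply -f "$REPO_DIR"
  sleep 60
done
```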

(Note this simple approach works in my case partly because I add explicit namespace declarations to all my (generated) json - and enforce this in a unittest)
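The namespace check is equally small -- something along these lines (file layout invented; my actual test differs in details):

```python
import glob
import json

def assert_explicit_namespaces(pattern="generated/*.json"):
    """Fail if any generated manifest omits an explicit
    metadata.namespace, so `kubectl apply` can never fall back to
    whatever namespace the client context happens to point at."""
    for path in glob.glob(pattern):
        with open(path) as f:
            manifest = json.load(f)
        ns = manifest.get("metadata", {}).get("namespace")
        assert ns, "%s is missing metadata.namespace" % path
```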

Earlier in the workflow I use jsonnet, some wrapper tools to expand the jsonnet, and do various client-side schema-validation, etc tests.  The review side is driven by github and jenkins.  I can try to publish / shrink-wrap some of that if people think any of it sounds useful to reuse.

 - Gus

Alex Clemmer

Apr 5, 2017, 2:11:11 AM
to gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

I'm not sure this chasm is as wide as you seem to think. Both projects attempt to lower the skill floor of getting started with Kubernetes; OpenCompose accomplishes this by creating a new, higher-level API that maps to the Kubernetes API, while `kube.libsonnet` accomplishes this by making the Kubernetes API easier to deal with as it exists. In fact I will go one step farther, and say that I think the goal of creating a higher-level API is actually benefitted by strong templating primitives.

Brian Grant

Apr 5, 2017, 2:17:55 AM
to Alex Clemmer, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
On Tue, Apr 4, 2017 at 11:11 PM, Alex Clemmer <al...@heptio.com> wrote:
One of the most important decisions in SIG cluster lifecycle was deciding what use case to focus on initially. kubeadm focused on simplifying the "getting-started experience" for building clusters from small numbers of pre-existing nodes. Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though we do need to figure out how to unify at least some of these efforts at some point.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.

I'm not sure this chasm is as wide as you seem to think. Both projects attempt to lower the skill floor of getting started with Kubernetes; OpenCompose accomplishes this by creating a new, higher-level API that maps to the Kubernetes API, while `kube.libsonnet` accomplishes this by making the Kubernetes API easier to deal with as it exists. In fact I will go one step farther, and say that I think the goal of creating a higher-level API is actually benefitted by strong templating primitives.

I was specifically referring to stated goals:

OpenCompose:
The main goal for OpenCompose is to be easy to use application/microservice definition that developers can use without learning much of Kubernetes concepts. It should be very easy to write a simple application definition and from there on the tooling takes over.

kube.libsonnet:
pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written
 

Brian Grant

Apr 5, 2017, 2:26:26 AM
to Alex Clemmer, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer

Also, I'd be interested to hear user feedback on kube.libsonnet.

In the gitlab example, does the jsonnet implementation of the configuration do anything that the concrete API resource manifests do not?

 

 


Alex Clemmer

Apr 5, 2017, 3:01:41 PM
to Brian Grant, gus...@gmail.com, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
I'd love to talk about the partner feedback we've gotten for `kube.libsonnet`, but before we do, I want to make sure that we satisfy Antoine's goal, which seems to be to focus on the consolidation efforts. :)

Antoine (and anyone else who'd like to give input): I'd love your feedback on the progress towards consolidation so far, as well as what you think is missing from the effort. What we've done so far should not be mistaken for the plan of record -- I just proceeded in the best way I knew how.

Rick Spencer

Apr 5, 2017, 3:08:57 PM
to Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand
Hi Alex,

I appreciate you looping me in. I think perhaps a good first step is a meeting of the minds in terms of what people have developed so far. To that end, Gus (added to To:) is working this week to move his jsonnet library to a Bitnami repo, and then write some kind of blog post to go along with the documentation. 

My straw-man proposal for a way forward: some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help solve them. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?

I'd be happy to facilitate such a call, depending on the timing. Thoughts?

Cheers, Rick

Antoine Legrand

Apr 5, 2017, 8:48:29 PM
to Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis

Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though
we do need to figure out how to unify at least some of these efforts at some point.

Continuing work in parallel is essential; each tool is exploring different paths, which is great for getting a better view.

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built.

Yes, both (configuration and management) are parts of the project.

In this area, Open Compose and kube.libsonnet similarly seem to be targeted at different use cases.
 

They target different use cases but are, in my opinion, complementary. If we agree on some technical designs and approaches, they can be developed separately and still merge nicely on the user side. A random example:

  # Higher-level API (opencompose / kubectl-generator like)

  svc, ingress, deployment = createService(image: "myapp", port: 80, domain: "myapp.example.com")

  # Extend if necessary (kube.libsonnet like)

  deployment.readinessProbe + probe.Http(port: 80, delay: 30, period: 10)
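The same idea in runnable form (illustrative Python only -- none of these helpers exist in any of the projects):

```python
# A high-level generator (opencompose / generator style) produces
# plain API objects from a few intuitive parameters...
def create_service(image, port, domain, name="myapp"):
    deployment = {
        "kind": "Deployment",
        "spec": {"template": {"spec": {"containers": [
            {"name": name, "image": image,
             "ports": [{"containerPort": port}]},
        ]}}},
    }
    svc = {"kind": "Service", "spec": {"ports": [{"port": port}]}}
    ingress = {"kind": "Ingress",
               "spec": {"rules": [{"host": domain}]}}
    return svc, ingress, deployment

# ...and a lower-level library (kube.libsonnet style) patches the
# result where the abstraction runs out, instead of forcing a fork
# of the generator.
def http_probe(port, delay, period):
    return {"httpGet": {"path": "/", "port": port},
            "initialDelaySeconds": delay,
            "periodSeconds": period}

svc, ingress, deployment = create_service(image="myapp", port=80,
                                          domain="myapp.example.com")
container = deployment["spec"]["template"]["spec"]["containers"][0]
container["readinessProbe"] = http_probe(port=80, delay=30, period=10)
```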

some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help solve them. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally between 9am and 10am PT works well across timezones). -- Antoine


Alex Clemmer

Apr 6, 2017, 2:55:55 AM
to Antoine Legrand, Rick Spencer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I agree that a round table sounds like a good idea. Let me know what I can do to help (organizationally, logistically, etc.)

Pradeepto Bhattacharya

Apr 6, 2017, 4:52:34 AM
to Antoine Legrand, Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Hi,

On Thu, Apr 6, 2017 at 6:18 AM, Antoine Legrand <antoine...@coreos.com> wrote:

Work on other use cases continued in parallel, with kops, bootkube, kube-aws, kube-up, kargo, etc., though
we do need to figure out how to unify at least some of these efforts at some point.

Continuing work in parallel is essential; each tool is exploring different paths, which is great for getting a better view.

Independent of the initial syntax / generation approach, a common declarative deployment flow could also be built.

Yes, both(configuration / management) are parts of the project


Completely agree with Antoine and Brian on the above points.
 

some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help solve them. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally between 9am and 10am PT works well across timezones)


Sounds great! We would love to be part of this round table. It would be awesome if we can get this done sooner rather than later. How does sometime next week sound for everyone who would like to attend? Please note, I am in IST (GMT+5:30).


Regards,

Pradeepto
--
Pradeepto Bhattacharya

Pradeepto Bhattacharya

Apr 6, 2017, 4:56:03 AM
to Antoine Legrand, Rick Spencer, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Forgot to mention: I am more than happy and willing to organise this meeting, or to help it happen in any way I can.

Pradeepto
--
Pradeepto Bhattacharya

Pradeepto Bhattacharya

Apr 6, 2017, 5:08:35 AM
to Brian Grant, Alex Clemmer, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand, Rick Spencer
On Wed, Apr 5, 2017 at 11:47 AM, 'Brian Grant' via kubernetes-sig-cli <kubernete...@googlegroups.com> wrote:

I was specifically referring to stated goals:

OpenCompose:
The main goal for OpenCompose is to be easy to use application/microservice definition that developers can use without learning much of Kubernetes concepts. It should be very easy to write a simple application definition and from there on the tooling takes over.

That is definitely our goal. We want OpenCompose to be simple to learn and use, almost intuitive - and that includes both the configuration and the management. We would love for the OpenCompose "language" to be robust, yet we wouldn't want to make it yet another resource. Developer experience is of utmost importance. We would like to integrate with tools like IDEs.
 
kube.libsonnet:
pilot the `kube.libsonnet` solution with a set of partners who each have >10kloc of Kubernetes configuration already written

Having spoken to Alex a couple of days back, I understand and highly respect what he is trying to do with kube.libsonnet. We definitely have some overlap in our goals.

I definitely can see a bunch of opportunities to collaborate, innovate and contribute.

Pradeepto
--
Pradeepto Bhattacharya

Rick Spencer

Apr 7, 2017, 12:26:05 PM
to Pradeepto Bhattacharya, Antoine Legrand, Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I bet that some or many of us will be at DockerCon the week after next, so some of us could meet face to face and dial the others in for such a discussion. Thoughts?

Alex Clemmer

Apr 7, 2017, 12:29:31 PM
to Rick Spencer, Pradeepto Bhattacharya, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
I won't be there but I'm happy to dial in.

Rick Spencer

Apr 7, 2017, 12:43:02 PM
to Alex Clemmer, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis, Antoine Legrand
On Wed, Apr 5, 2017 at 3:08 PM, Rick Spencer <ri...@bitnami.com> wrote:
Hi Alex,

I appreciate you looping me in. I think perhaps a good first step is a meeting of the minds in terms of what people have developed so far. To that end, Gus (added to To:) is working this week to move his jsonnet library to a Bitnami repo, and then write some kind of blog post to go along with the documentation. 


If anyone wants to take a look at how we are doing things, Gus has cleaned up the code a bit and moved it to here:

Feedback and comments welcome here or in issues.

Cheers, Rick
 

Pradeepto Bhattacharya

Apr 7, 2017, 12:54:43 PM
to Alex Clemmer, Rick Spencer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Same here, I won't be at DockerCon. I can dial in as well. 


Pradeepto

--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-apps" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-apps+unsub...@googlegroups.com.

To post to this group, send email to kubernetes-sig-apps@googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



--
Pradeepto Bhattacharya

jhuf...@box.com

unread,
Apr 12, 2017, 1:51:45 PM4/12/17
to kubernetes-sig-apps, al...@heptio.com, ri...@bitnami.com, antoine...@coreos.com, g...@bitnami.com, brian...@google.com, gus...@gmail.com, kubernete...@googlegroups.com, mkar...@redhat.com, Sam Ghods
This is a worthy goal and a good discussion. At Box, we've definitely struggled and continue to struggle with this issue. I'd love to help with requirements and comments on proposals whenever people need it. We've talked to Red Hat (Clayton, et al.), Google (Phil Wittrock / Brian), and Heptio (Alex and Joe) at various levels of depth about our currently cobbled together solution.

Just to set up some context: at Box we have three main constituencies that we are trying to support via templating, which I'd loosely categorize as:
  1. Service owners: They don't want to know jsonnet or even the k8s object model. They'd love to feed a docker image, env vars, config maps and secrets in and have a functioning service out the other side. We've largely failed to address this group's simplicity needs, so they mostly work by copying and pasting large kube API object templates for their app and tweaking them as appropriate. They generally find this overwhelming, and we have a lot of divergence on best practices.
  2. Framework owners: (i.e. developers who build the frameworks that microservices can leverage to quickly develop new services) This set of users is concerned with a lot of the pod itself, but doesn't really want to know about sidecars. They will set up templates for service owners which include things like command-line flags, default resource limits, how to run the program, possible env vars, paths for logging, etc.
  3. Infrastructure owners: These include cluster maintainers and other infrastructure providers, in the form of either daemon sets or sidecars. Log tailers, service discovery proxies, and legacy dynamic configuration delivery are all owned at this level. These developers provide "pod parts" which can be integrated into pods configured by framework owners. We're aware that the latest practice is to dynamically inject these via pod presets. We have some concerns about losing the ability to statically validate our pods, but are open to it.
The artifacts from all three groups are described using jsonnet-based kube API object templates, which are compiled down to JSON and applied via kubectl apply in a very fancy cron.
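A toy illustration of the split between service owners and framework owners described above: the framework owner publishes a parameterized template, and the service owner supplies only an image and env vars. This is a minimal Python sketch, not our actual tooling (our real templates are jsonnet); all names, images, and the function itself are hypothetical:

```python
import json

def deployment_template(name, image, env=None, replicas=2):
    """Hypothetical framework-owner template: the service owner passes only
    a name, image, and env vars; a full kube Deployment object comes out."""
    env = env or {}
    return {
        "apiVersion": "apps/v1beta1",  # current as of this thread
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "env": [{"name": k, "value": v}
                                for k, v in sorted(env.items())],
                    }],
                },
            },
        },
    }

# The service owner's entire input:
obj = deployment_template("frontend", "example/frontend:1.4",
                          {"LOG_LEVEL": "info"})
# The rendered JSON could then be piped to `kubectl apply -f -`.
print(json.dumps(obj, indent=2))
```

The point of keeping the output a literal kube API object (rather than some higher-level abstraction) is that it stays statically checkable and maps directly onto the upstream documentation.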

We largely follow these philosophical principles when designing our flows:
  1. Try to stay close to the kube API objects: Like Greenspun's tenth rule, we've noticed that any abstraction over the kube API objects tends to start simple and grow to become a bad version of the underlying kube API objects. We've largely given up on trying to abstract the API objects and instead present them to users with utility functions where appropriate. We may be moving to a model where framework owners provide templates and service owners just pass parameters, but we don't yet have enough experience to know whether that abstracts effectively. We don't like any of our answers here.
  2. Kube API objects as code: We manage all of our deployment configurations in a centralized git repo and will soon be pushing those definitions closer to the source code for the service. This allows service owners to easily introspect, modify and deploy their kubernetes configurations.
  3. Deploy as many changes as possible via a pipeline, ideally incrementally, affecting one pod at a time: We want individual changes made by the three constituencies above to be deployed via our CD pipeline in Jenkins. Service owners' changes tend to impact an individual deployment, whether it be a new Docker image or a new environment variable. We have about four main frameworks currently, so changes by framework owners tend to affect about half of our deployments, while infra changes tend to affect all deployments on the floor. Instead of having an infrastructure or framework change affect all live services at once, we'd prefer for teams to deploy the changes through their pipeline, to maximize our automation's ability to catch problems. This also allows those owners to deploy at their convenience, so that they can detect and respond to any impact from the change that wasn't caught by automation, ideally before it progresses beyond a small number of pods.
  4. Service owners, framework owners, and infrastructure owners are fluid designations: Service owners don't want to care about log tailer sidecars and service discovery sidecars right up until the moment when they do. We would love to provide a facility that lets them ignore these details but can grow to allow them to specify their own sidecars if they need to. We haven't found a good way to do this, and it may be impossible. The best options we have at the moment are: use tightly parameterized templates if you can, and just deal with all of the k8s API objects if you can't. Heptio's kube.libsonnet improves things here, but makes people use jsonnet over plain old JSON.
  5. Favor sidecars over DaemonSets: One consequence of #3 and #4 is that we tend to like having functionality bundled into a pod, so that it can be replaced or deployed by some owners and not others. One struggle here is how to compose a pod with main container(s) and 2-3 sidecars in a clean way. We could copy the preset model, but you can even see the complexity leaking in there (e.g. "volume mounts go into all containers, volumes go into the pod" is more complex than "append this sidecar to the array"). Additionally, a number of our sidecars need ConfigMaps to lay down files, which involves creating a top-level API object.
  6. Plain old JSON over a full programming language: Every time we've tried using jsonnet functionality more expansively (loops, function calls, etc.), we end up replacing it with copied-and-pasted JSON kube API objects. It is easier for our end users to understand and maps directly onto the Kubernetes documentation. IDE support can potentially help if a language library is used, but my gut feeling is that JSON plus simple merge algorithms a la apply and presets are going to be more comprehensible for developers.
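To make the "append this sidecar to the array" composition in #5 and #6 concrete, here is a minimal sketch of plain-JSON pod composition with a simple merge, where the infrastructure owner's "pod part" is appended rather than injected via a preset. The helper, names, and images are all hypothetical, not an existing API:

```python
import copy

def add_sidecar(pod_spec, sidecar, volumes=None):
    """Hypothetical 'pod part' merge: append a sidecar container to the pod
    and merge in any volumes it needs. Unlike the preset model, volume mounts
    stay inside the sidecar only; the caller's spec is left untouched."""
    out = copy.deepcopy(pod_spec)
    out["containers"].append(sidecar)
    out.setdefault("volumes", []).extend(volumes or [])
    return out

# The service owner's pod spec, as plain JSON-style data:
pod = {"containers": [{"name": "app", "image": "example/app:2.0"}]}

# An infrastructure owner's log-tailer "pod part":
log_tailer = {
    "name": "log-tailer",
    "image": "example/log-tailer:0.3",
    "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}],
}

merged = add_sidecar(pod, log_tailer,
                     volumes=[{"name": "logs", "emptyDir": {}}])
```

Because the merge is a plain append over plain data, the result can still be statically validated before it is ever applied, which is the property we worry about losing with dynamic injection.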

We have yet to find a solution that solves all these issues but would look forward to working with projects or teams that want to help tackle them.



On Friday, April 7, 2017 at 9:54:43 AM UTC-7, Pradeepto Bhattacharya wrote:
Same here, I won't be at DockerCon. I can dial in as well. 


Pradeepto
On Fri, Apr 7, 2017 at 9:59 PM, Alex Clemmer <al...@heptio.com> wrote:
I won't be there but I'm happy to dial in.

On Fri, Apr 7, 2017 at 9:25 AM, Rick Spencer <ri...@bitnami.com> wrote:

On Thu, Apr 6, 2017 at 4:51 AM, Pradeepto Bhattacharya <prad...@redhat.com> wrote:
Hi,

On Thu, Apr 6, 2017 at 6:18 AM, Antoine Legrand <antoine...@coreos.com> wrote:


some kind of round table where we can discuss what problems we were trying to solve, and how our efforts to date did or did not help achieve that effort. Perhaps, from that, we could extract the commonality in goals and approaches, and this could inform the next step in terms of writing code?


Sounds great! Thanks for proposing it (generally 9am to 10am PT works well across timezones).


Sounds great! We would love to be part of this round table. It would be awesome if we can get this done sooner rather than later. How does sometime next week sound for all who would like to attend? Please note, I am in IST (GMT +5:30).



I bet that some or many of us may be at DockerCon the week after next, and therefore some of us could meet face to face and dial in others for such a discussion. Thoughts? 



--

__
Transcribed by my voice-enabled refrigerator, please pardon chilly
messages.



--
Pradeepto Bhattacharya

Rick Spencer

unread,
Apr 24, 2017, 10:45:42 AM4/24/17
to Pradeepto Bhattacharya, Alex Clemmer, Antoine Legrand, Angus Lees, Brian Grant, Angus Lees, kubernetes-sig-apps, kubernete...@googlegroups.com, Michail Kargakis
Following up on this thread, 

I have PMed with a few folks here, and we have set up a call for 9am PST tomorrow (April 25th) on a hangout. We scheduled it around the availability of a few people's specific schedules, but we are of course happy to have anyone else interested join us. I will set the hangout to be public.

Please let me know if you would like to attend or have any other suggestions. We will follow up here with the notes from the call just in case others are interested.

Cheers, Rick
