I am super new to Kubernetes, and I have a use case where I need to deploy a specific container (app) which does the following steps:
a) does some operations (business logic)
b) builds an image from a Dockerfile and publishes it to a registry (through an API)
c) deploys pod(s) with that image, i.e. creates a Deployment (through the API)
Basically, this app would be a master pod of sorts that creates a multi-container pod through the API on the cluster.
If it is possible for that master pod/container (app) to create pods through the API, I would just need to build the image, push it to a private registry, and then create a Deployment on k8s from that app.
I did look into init containers, and the reason why I feel init containers cannot be used is that there will be a good amount of business logic in the master pod (app).
I am planning to use the master pod as a way to manage the multi-container pod that it created, along with other business logic.
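To make (c) concrete, this is the kind of call I mean - a rough, hypothetical sketch using the official Kubernetes Python client, assuming the master pod runs under a service account that is allowed to create Deployments (all names below are made up):

    # Sketch of step (c): create a Deployment from inside the master pod.
    # Assumes the image was already built and pushed in step (b).
    from kubernetes import client, config

    def deploy_image(image, name="worker", namespace="default", replicas=1):
        # Authenticate with the service-account token Kubernetes mounts into the pod.
        config.load_incluster_config()
        apps = client.AppsV1Api()

        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels={"app": name}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": name}),
                    spec=client.V1PodSpec(
                        containers=[client.V1Container(name=name, image=image)]
                    ),
                ),
            ),
        )
        apps.create_namespaced_deployment(namespace=namespace, body=deployment)

    deploy_image("registry.example.com/worker:latest")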
Thanks
On Friday, May 19, 2017 at 1:29:49 PM UTC+1, Rodrigo Campos wrote:
> On Friday, May 19, 2017, <morph...@gmail.com> wrote:
>
> > I did look into init containers, and the reason why I feel init containers cannot be used is that there will be a good amount of business logic in the master pod (app).
>
> Sorry, and why is that a problem? I don't follow.
>
> Also, to understand better: if you weren't using Kubernetes and just VMs with Chef or Puppet, what would you do?
I agree with you; calling the API from a pod doesn't seem right. Ideally, I have a few steps, and each step would need to build an image and update a deployment. I want to deploy the containers sequentially, one after the other. I can choose to use the same pod (update one deployment) or create multiple pods.
These sequential steps are programmed in an app, and that app runs inside a container.
Also, when I am building the images, the app container will have a few jars/files that get added into the new images.
Let me look into Jobs; that looks like something I could use.
I was using the native Docker API to do all of this before.
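To show what I mean by "update one deployment", here is a rough sketch (hypothetical names, Kubernetes Python client) of pointing an existing Deployment at the image a step has just built and pushed:

    # Sketch: roll out a freshly built image by patching an existing Deployment.
    # Deployment, container, and image names are illustrative only.
    from kubernetes import client, config

    config.load_incluster_config()
    apps = client.AppsV1Api()

    def roll_out(image, name="streams-job", namespace="default"):
        # Strategic-merge patch: only the container image changes, and
        # Kubernetes performs a rolling update of the pods behind it.
        patch = {
            "spec": {
                "template": {
                    "spec": {"containers": [{"name": name, "image": image}]}
                }
            }
        }
        apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)

    roll_out("registry.example.com/streams-job:step-2")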
Speaking very generally, I don't think that using the Kubernetes API from a Pod is a bad idea per se. That's how most controllers operate, including third-party ones.
What the OP describes seems to be a kind of deployment service which, in my opinion, could be designed and implemented in terms of the controller (or possibly even operator) pattern.
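As a bare-bones illustration of the controller pattern (not a real controller - frameworks handle resync, errors, and ownership far more carefully), using the official Python client and a made-up ConfigMap as the "desired state":

    # Watch a resource and reconcile on every change - the core controller loop.
    from kubernetes import client, config, watch

    config.load_incluster_config()
    core = client.CoreV1Api()

    for event in watch.Watch().stream(core.list_namespaced_config_map, namespace="default"):
        cm = event["object"]
        # Reconcile here: compare the desired state described by the ConfigMap
        # with the Deployments that actually exist, and create/update/delete.
        print(event["type"], cm.metadata.name)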
Apologies for not giving the proper context. What I am trying to achieve here is:
In a typical CQRS scenario, we keep our read and write models separate. So, from the event log, we process the data and update the read tables in the database.
We use Kafka, and we have a Kafka Streams job running constantly, reading from a topic and writing into another topic after processing:
Topic --> read & process --> Topic
The way Kafka Streams works is that we deploy multiple instances of such a job (jar) to load-balance, depending on the number of partitions.
If you're new to this, think of the streams job as a stateless application packaged in containers; we add instances to scale up.
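For that scaling part, something like this is what I have in mind (hypothetical names, Python client): keep the Deployment's replica count in line with the topic's partition count and let Kafka Streams rebalance on its own:

    # Sketch: scale the streams-job Deployment to match the partition count.
    from kubernetes import client, config

    config.load_incluster_config()
    apps = client.AppsV1Api()

    def scale_to_partitions(name, namespace, partitions):
        # Only the replica count is patched; Kafka Streams redistributes
        # partitions across the running instances after the change.
        apps.patch_namespaced_deployment(
            name=name, namespace=namespace, body={"spec": {"replicas": partitions}}
        )

    scale_to_partitions("streams-job", "default", partitions=6)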
Let's assume we do three steps sequentially (for each read model):
(consider read models to be different tables in the db: accounts, users, clicksByUser, etc.)
1. preprocessors
2. processors
3. postprocessors
So the data processing team would run a script which builds a master app with the jars (for the steps above) and creates a deployment in k8s. The master app (pod), once deployed, would run these steps one by one, roll back if something fails, manage a bunch of these pre/post processors, and finally deploy the read model.
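Roughly, the master app's loop would look like this - a hypothetical sketch with made-up step and image names, running each step as a Job and stopping (or rolling back) on the first failure:

    # Sketch of the master app: run the steps one by one as Jobs.
    import time
    from kubernetes import client, config

    config.load_incluster_config()
    batch = client.BatchV1Api()

    def run_step(name, image, namespace="default", timeout=600):
        job = client.V1Job(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1JobSpec(
                backoff_limit=0,
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[client.V1Container(name=name, image=image)],
                    )
                ),
            ),
        )
        batch.create_namespaced_job(namespace=namespace, body=job)
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = batch.read_namespaced_job(name, namespace).status
            if status.succeeded:
                return True
            if status.failed:
                return False
            time.sleep(5)
        return False

    for step in ("preprocessor", "processor", "postprocessor"):
        if not run_step(step, "registry.example.com/%s:latest" % step):
            # Roll back / clean up whatever the earlier steps did, then stop.
            break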
Whether I use multiple master apps (one per read model) or a single REST layer which maintains multiple read-model pods, either way I would like to dockerize the application that is making the calls to k8s.
If the above does not make sense because it is quite complex, all I want to know is whether it is fine to have a master pod of sorts that creates multiple pods using the k8s API.