How to start one container before starting another in the same pod?


kant kodali

Apr 23, 2016, 11:42:21 PM
to Containers at Google
Below is my sample config. I am wondering: is there a way to start the service1 container before the service2 container in the same pod?

apiVersion: v1
kind: Pod
metadata:
  name: myservices
  labels:
    app: platform
spec:
  containers:
    - name: backend1
      image: service1
      ports:
        - containerPort: 6379
    - name: backend2
      image: service2
      ports:
        - containerPort: 8000

Brendan Burns

Apr 24, 2016, 12:03:17 AM
to Containers at Google

Kubernetes doesn't support this natively.

The easiest way to do it is to share a volume between the two containers in the pod and use a file as a sentinel.
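
Roughly something like this (untested sketch; the images and the start-service* commands are placeholders for whatever your containers actually run, and the sleep is standing in for a real readiness check):

apiVersion: v1
kind: Pod
metadata:
  name: myservices
spec:
  volumes:
    - name: shared              # emptyDir visible to both containers
      emptyDir: {}
  containers:
    - name: backend1
      image: service1
      volumeMounts:
        - name: shared
          mountPath: /shared
      # start the real process, then drop the sentinel once it is up
      command: ["sh", "-c", "start-service1 & sleep 5; touch /shared/ready; wait"]
    - name: backend2
      image: service2
      volumeMounts:
        - name: shared
          mountPath: /shared
      # block until the sentinel exists, then exec the real process
      command: ["sh", "-c", "while [ ! -f /shared/ready ]; do sleep 2; done; exec start-service2"]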

Brendan



Jay Vyas

Apr 24, 2016, 12:40:30 AM
to google-c...@googlegroups.com
Beware: this might be an antipattern.

Hmmm, but the "kubernetes way" is sort of to just configure the container restart policy and health checks so that kubernetes knows what's going on and how to respond, rather than declaring the exact order of things. Specifically, I think in your case:
- the restart policy combined with a declarative liveness probe can guarantee that kubernetes knows the state of your containers' health at all times, allowing you to build dumb web server containers that simply restart until the db service they need is up and running (a good liveness probe lets kube know that things aren't "really running" yet).
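
For example, a fragment along these lines (just a sketch; use whatever probe actually proves your service is healthy):

spec:
  restartPolicy: Always          # kubelet keeps restarting containers that exit
  containers:
    - name: backend2
      image: service2            # placeholder image
      ports:
        - containerPort: 8000
      livenessProbe:
        tcpSocket:
          port: 8000             # or an httpGet / exec probe, whatever proves "really running"
        initialDelaySeconds: 10
        periodSeconds: 5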

kant kodali

Apr 24, 2016, 12:41:53 AM
to Containers at Google
Hi Brendan,

Sorry, I should have been a little clearer. Here is my scenario: my service 2, which is supposed to run on port 8000, looks for a process (service 1) that runs on 6379 before it gets started, and in that sense service 2 is dependent on service 1. Also, my service 2 can accept the host and port of service 1 as arguments.

kant kodali

Apr 24, 2016, 12:51:16 AM
to Containers at Google
Forgot to mention: please make no assumptions about the port numbers (whether it is a db or a webserver and so on); that was just my sample config. Please look at it as two processes running on specific ports, where one is dependent on the other.

Tim Hockin

Apr 24, 2016, 1:00:33 AM
to Containers at Google
On Sat, Apr 23, 2016 at 9:41 PM, kant kodali <kant...@gmail.com> wrote:
> Hi Brendan,
>
> Sorry I should have been a little bit more clear. Here is my scenario. My
> service 2 which is supposed to run on port 8000 looks for a process(service
> 1) that runs on 6379 before it gets started and in that sense my service 2
> is dependent on service1. Also my service2 can accept the host and port of
> service 1 as arguments.

What happens when service 1 crashes and restarts? Service 2 HAS to
handle it, right? So don't special-case startup. It's simply an
example of when service 2 must do something without service 1.

Building that startup dependency is an anti-pattern.

Brendan Burns

Apr 24, 2016, 1:23:36 AM
to Containers at Google

Yeah, in this context I agree with Tim. Your second service should be capable of handling the fact that it can't yet connect, and wait for the service it wants to talk to to come up. Your system will also be more robust in general if you design it that way.

I'd go out on a limb a little and say there are some use cases where an 'init' container makes sense (e.g. schema migration), but it tends to be more of a 'run before' than a 'wait for dependencies' type of case.

Brendan

kant kodali

Apr 24, 2016, 2:18:37 AM
to Containers at Google
Good advice! I can enhance my service down the road, but what would be a quick fix for my scenario?

kant kodali

Apr 24, 2016, 2:20:25 AM
to Containers at Google
Can I leverage service discovery here somehow? I am assuming Kubernetes has something for service discovery.

kant kodali

Apr 24, 2016, 2:24:58 AM
to Containers at Google
Sorry, ignore my comment on service discovery, since a Kubernetes Service is not the same as the microservice I am talking about. I am still looking for a quick fix for now, and I will definitely take the advice and enhance the service a few weeks down the road. Sorry again for the multiple emails!

kant kodali

Apr 24, 2016, 3:48:58 AM
to Containers at Google

> What happens when service 1 crashes and restarts? Service 2 HAS to
> handle it, right? So don't special-case startup. It's simply an
> example of when service 2 must do something without service 1.

In my case it will be fine: if service 1 crashes, service 2 will have no idea that service 1 crashed, so it will keep contacting the same IP and port and the requests will simply fail, which is OK in my case. When service 1 is restarted it will come up on the same port, and the requests from service 2 to service 1 will succeed again. I do agree that it is a bad idea to have a startup dependency, so given this scenario, can I do something with Kubernetes?

Mark Petrovic

Apr 24, 2016, 8:06:39 AM
to Containers at Google
Brendan, can you talk a bit more about this init container idea? How would you arrange for it to "run before"?

Brendan Burns

Apr 24, 2016, 11:04:45 AM
to Containers at Google

The idea would be that the kubelet runs the init container to completion (eg return code zero) before starting any other containers in the pod.

Note this doesn't exist now, and it's not clear to everyone (or even me) that we definitely want to add it to k8s, but it has been discussed in the past.

You can more or less emulate this today with a shared volume in the pod and synchronization via a shared file.

Brendan

Mark Petrovic

Apr 24, 2016, 11:06:25 AM
to google-c...@googlegroups.com
Could that shared volume be of type emptyDir?

--
Mark

Tim Hockin

Apr 24, 2016, 12:52:40 PM
to Containers at Google
Yep! Just use a sentinel file in a shared emptyDir to control
intra-pod orchestration. I think we will add init containers - it's
shaping up to be an important part of the PetSet design (but I have
not read the latest updates to the proposal yet)

kant kodali

Apr 25, 2016, 6:40:15 AM
to Containers at Google
Since I am new to Kubernetes and don't really see this kind of thing in the documentation, I wonder what I need to do to accomplish this so that service2 is started after service1? Here is what I have tried so far.

apiVersion: v1
kind: Pod
metadata:
  name: myservices
  labels:
    app: platform
spec:
  containers:
    - name: backend1
      image: service1
      ports:
        - containerPort: 7000
      volumeMounts:
        - mountPath: /shared/sentinel
          name: init
    - name: backend2
      image: service2
      ports:
        - containerPort: 8000
      volumeMounts:
        - mountPath: /shared/sentinel
          name: init
  volumes:
    - name: init
      emptyDir: {}

Mark Petrovic

Apr 25, 2016, 8:01:59 AM
to google-c...@googlegroups.com
I recommend creating essentially all your container images with a
shell script entrypoint. I have found this to be a strong pattern
that comes in handy time and again for doing pre-start
housekeeping/setup. That entrypoint would start your app in the app
container when the sentinel appeared. A similar entrypoint in your
service image would start your service, and on conditions you define,
would write the sentinel:

Get started prototyping like this:

In your app container's entrypoint script:

while [ ! -f /pathtosharedvolume/sentinel.dat ]; do
  echo "waiting for file /pathtosharedvolume/sentinel.dat"
  sleep 5
done
exec startapp.sh

In your service container, do something like this, assuming Redis is
the example service:

sh startservice.sh &

# probe the service and make sure you are satisfied with its state
# (a healthy server answers INFO with details such as redis_version:3.0.7)
echo "INFO" | redis-cli > /dev/null
if [ $? -eq 0 ]; then
  touch /pathtosharedvolume/sentinel.dat
else
  echo "service not ready yet; wait longer, or the container did not start"
fi

kant kodali

Apr 25, 2016, 2:52:46 PM
to Containers at Google
Thanks a lot for this. Is there a way I can use a similar strategy with the apiserver? For example, at the entrypoint of my service2 image, can I ask the apiserver for the status of container 1 and, if it is running, launch service 2, and otherwise sleep?

Mark Petrovic

Apr 25, 2016, 4:57:14 PM
to google-c...@googlegroups.com
If you have Kubernetes set up correctly, you can leverage the master
API server from inside a container like this:

curl http://kubernetes:port/api/v1/...

By "set up correctly" I mean that the literal kubernetes hostname is
backed by a system Kubernetes Service. By querying the API server you
can learn things about any resource, including the containers in a pod
of interest. This may get you there.

See http://kubernetes.io/kubernetes/third_party/swagger-ui/ for API
documentation.
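
A rough sketch of what your service2 entrypoint could do (assumptions: the default service account token is mounted at the standard path, the API server is reachable over HTTPS on the service's default port, and the pod is named myservices as in your earlier config):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# ask the API server for the pod object and pull out its phase
curl -s --cacert $CA -H "Authorization: Bearer $TOKEN" \
  https://kubernetes/api/v1/namespaces/$NS/pods/myservices \
  | grep '"phase"'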

kant kodali

Apr 26, 2016, 1:38:21 PM
to Containers at Google
OK, I am not sure why I didn't have to do any of this (using a file as a sentinel and so on). Here is my new config.yaml, and just to recap: my service 2, which is supposed to run on port 8000, looks for a process (service 1) that runs on 7000 before it gets started, so service 2 is dependent on service 1. I issued kubectl create -f config.yaml and then ran kubectl get pods; it reported 2/2 running. Then I did telnet <PodIP> 7000 and telnet <PodIP> 8000, and both reported as connected. I also checked the restart count for both containers; it said 0. For a moment I thought it just happened to run sequentially, so I deleted the pod and created it again, and I still see the same result. So the question now is: are the containers created sequentially, in the order they are listed in the config.yaml file?

apiVersion: v1
kind: Pod
metadata:
  name: myservices
  labels:
    app: platform
spec:
  containers:
    - name: backend1
      image: service1
      ports:
        - containerPort: 7000
    - name: backend2
      image: service2
      ports:
        - containerPort: 8000

Derek Mahar

Sep 13, 2016, 4:47:26 PM
to Kubernetes user discussion and Q&A, google-c...@googlegroups.com
As far as I understand, Kubernetes starts all containers in a pod at the same time.  In your example, it is possible that backend1 became available before backend2 by chance.  However, the reverse may also happen, in which case, backend2 would likely fail, causing Kubernetes to restart backend2 until backend1 finally becomes available.

What does the restart counter look like for container backend2 after each restart if you run the pod 10000 times?
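
(Something like this should show the counters, assuming the pod is still named myservices as in your config:)

# per-container restart counts
kubectl get pod myservices -o jsonpath='{.status.containerStatuses[*].restartCount}'
# or, more readable:
kubectl describe pod myservices | grep 'Restart Count'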

Derek