Using consul-template when deploying Docker containers


Lars Janssen

Jul 31, 2015, 9:43:28 AM
to Consul
Hi all,

I have a fairly simple deploy workflow that uses Consul as a key/value store. Deployment of a web application works like this (simplified):

Fetch latest package into a temporary directory
Run consul-template with the "once" option to populate the config files
Move app into place, restart Apache
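
Step 2 relies on consul-template's placeholder syntax; a minimal template for this workflow might look like the following sketch, where the key paths and file names are invented for illustration:

```
# app.conf.ctmpl, rendered with: consul-template -once -template "app.conf.ctmpl:app.conf"
db_host = {{key "myapp/db/host"}}
db_name = {{key "myapp/db/name"}}
```

The -once option renders the file a single time and exits, rather than staying resident and watching Consul for changes.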

Now that I'm moving the application to Docker, I'm wondering: what is the best way to achieve the same result?

Note that I'm not (yet) using Consul to dynamically update the config files or for service discovery, although these might come later. For now, I am just looking at the simplest use case of key/value replacement during deployment.

One option, if using docker-compose, is to use consul-template to generate the docker-compose.yml file, placing all the necessary configuration items into environment variables. When the Docker container starts up (say it runs a custom /start.sh inside the container), that script can populate the application configuration files and start Apache.
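
As a sketch of this first option, the compose file itself can be a consul-template template; the key paths, service name, and image name here are assumptions:

```
# docker-compose.yml.ctmpl
web:
    image: my/image
    command: /start.sh
    environment:
        DB_HOST: {{key "myapp/db/host"}}
        DB_NAME: {{key "myapp/db/name"}}
```

Rendering this with consul-template -once at deploy time bakes the current key/values into the container's environment.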

Another option would be to run consul-template outside of the container, and mount the resulting configuration file into place as a volume. Then, when the application starts, the configuration file is already in place.
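
On the host, this second option might look something like the following (template name, paths, and image are assumptions):

```
consul-template -once -template "app.conf.ctmpl:/etc/myapp/app.conf"
docker run -d -v /etc/myapp/app.conf:/etc/myapp/app.conf:ro my/image
```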

Both of these approaches have an annoying drawback: instead of just deploying the Docker image with the pre-built app, I would also need to deploy some configuration files (with consul-template placeholders).

Should I instead be making Consul available inside the container? Then, my /start.sh script inside the container can run "consul-template -once ..." to generate the configuration. If so, what's the best way to do this? Ideally I would run consul once per host, not once per container.

Thanks,

Lars.

Alvaro Miranda Aguilera

Jul 31, 2015, 9:37:52 PM
to consu...@googlegroups.com
just to give you another option, have you looked into envconsul?

https://github.com/hashicorp/envconsul

With that you can populate environment variables based on those K/V entries,

then you can run docker and pass those values on the command line,
and make the Dockerfile work based on those variables.
> --
> This mailing list is governed under the HashiCorp Community Guidelines -
> https://www.hashicorp.com/community-guidelines.html. Behavior in violation
> of those guidelines may result in your removal from this mailing list.
>
> GitHub Issues: https://github.com/hashicorp/consul/issues
> IRC: #consul on Freenode
> ---
> You received this message because you are subscribed to the Google Groups
> "Consul" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to consul-tool...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/consul-tool/14ed7e2a-f403-4ef6-aa21-414f6b1539f0%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Lars Janssen

Aug 3, 2015, 12:29:04 PM
to Consul
Thanks for your input. I've had a play with envconsul and if I understand correctly it's the same mechanism as consul-template, but one that manipulates env vars instead of outputting a template.

For now, as my application was not previously in Docker, it wasn't built to pick up everything from the environment. That can (probably should) be changed, although it's not always easy with legacy apps. At the very least, I could have a /start.sh that generates the necessary config file from the env vars.
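
A minimal /start.sh along those lines might look like this sketch; the variable names, config format, and paths are invented for illustration, and the real script would exec Apache instead of the placeholder command:

```shell
#!/bin/sh
# Write a hypothetical /start.sh that renders a config file from env vars,
# then exec's whatever command it was given (the real one would exec Apache).
cat > /tmp/start.sh <<'EOF'
#!/bin/sh
set -eu
# Fail fast if required configuration is missing.
: "${APP_DB_HOST:?APP_DB_HOST must be set}"
: "${APP_DB_NAME:?APP_DB_NAME must be set}"
# Render the app's config file from the environment.
cat > /tmp/app.conf <<CONF
db_host = ${APP_DB_HOST}
db_name = ${APP_DB_NAME}
CONF
exec "$@"
EOF
chmod +x /tmp/start.sh

# Demo run: pass "true" as the app so the script exits immediately.
APP_DB_HOST=db.internal APP_DB_NAME=myapp /tmp/start.sh true
cat /tmp/app.conf
```

Because the script ends with exec, the app replaces the shell as the container's main process, so Docker's signals reach the app directly.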

Assuming I can use envconsul, it looks like I will need to launch a Docker container like this:

    envconsul -prefix foo/bar -consul 172.17.42.1:8500 \
        docker run -d my/image ...

In other words, envconsul needs to be installed on the host machine and the Consul KV store needs to be available to the host machine somehow (here I managed to get it bound to the internal Docker IP, but I'm not sure I'm doing this the best way).

Am I right in thinking the Docker container will get restarted automatically in the event that one of the watched keys (specified with -prefix) changes?

That would make sense to me, but it poses a few questions:

1. Avoiding downtime - even if it only takes a few seconds to re-spawn the Docker app process, that might be too much. Do I need to send a different signal to the Docker container and then make sure the app somehow picks that up and does a graceful reload from the inside?

2. Working like this might be difficult if using a tool like docker-compose, as docker-compose itself will create the "docker run" command, and I haven't found a way to make it prefix this with another command like envconsul.

3. It would be difficult to work on a very generic host, i.e. one where the only application installed is Docker itself. I could invoke envconsul from within a Docker container (perhaps the same one that's running Consul) - but as the process I want to control is also a Docker container, we'd have a Docker-in-Docker situation (possible, but I'd rather avoid it).

I think if I can understand item 1 better (and overcome any problems) then items 2 and 3 are less critical.

Thanks,

Lars.

Michael Fischer

Aug 3, 2015, 1:12:02 PM
to consu...@googlegroups.com
On Mon, Aug 3, 2015 at 9:29 AM, Lars Janssen <la...@fazy.net> wrote:
> Thanks for your input. I've had a play with envconsul and if I understand correctly it's the same mechanism as consul-template, but one that manipulates env vars instead of outputting a template.
>
> For now, as my application was not previously in Docker, it wasn't built to pick up everything from the environment. That can (probably should) be changed, although it's not always easy with legacy apps. At the very least, I could have a /start.sh that generates the necessary config file from the env vars.

This is what we do, and I recommend it.  You might also want to scrub sensitive environment variables in your start.sh script before exec'ing the application.
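
As a sketch of that scrubbing step (the variable name and paths are assumptions), the wrapper can stash the secret somewhere only the app can read, drop it from the environment, and then exec:

```shell
#!/bin/sh
# Hypothetical wrapper: persist the secret to a file readable only by this
# user, remove it from the environment so child processes never see it,
# then exec the app.
cat > /tmp/scrub.sh <<'EOF'
#!/bin/sh
set -eu
umask 077
printf '%s' "$DB_PASSWORD" > /tmp/db_password  # app reads it from here
unset DB_PASSWORD                              # no longer visible via env
exec "$@"
EOF
chmod +x /tmp/scrub.sh

# Demo: the exec'd command can no longer see DB_PASSWORD.
DB_PASSWORD=s3cret /tmp/scrub.sh sh -c 'printf "%s" "${DB_PASSWORD:-absent}"' > /tmp/scrub.out
```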

In general, I recommend treating Docker like a package manager, and using host networking whenever possible.  Once you do that, the remaining complication becomes moot, and you can treat your Docker app like any other program, although started with a different command line.  If you really need to bind to the same port in different containers, bind the application to an aliased IP interface if you can.  
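
With host networking that looks like the following (image name assumed):

```
docker run -d --net=host my/image
```

The container then shares the host's network stack, so a Consul agent on the host is reachable at 127.0.0.1:8500 from inside it, with no port mapping at all.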

Only if you can't do any of those things would I recommend using Docker's NAT, in my view, because everything just becomes so needlessly complex at that point.

Best regards,

--Michael

Lars Janssen

Aug 4, 2015, 4:07:54 AM
to Consul
On Monday, 3 August 2015 18:12:02 UTC+1, Michael Fischer wrote:
>> For now, as my application was not previously in Docker, it wasn't built to pick up everything from the environment. That can (probably should) be changed, although it's not always easy with legacy apps. At the very least, I could have a /start.sh that generates the necessary config file from the env vars.
>
> This is what we do, and I recommend it.  You might also want to scrub sensitive environment variables in your start.sh script before exec'ing the application.

Good to know that someone's using this method in practice. I'm just curious about the signalling though, does your application/container restart whenever one of the watched env keys gets changed in Consul?

> In general, I recommend treating Docker like a package manager, and using host networking whenever possible.  Once you do that, the remaining complication becomes moot, and you can treat your Docker app like any other program, although started with a different command line.  If you really need to bind to the same port in different containers, bind the application to an aliased IP interface if you can.

I hadn't noticed this option in Docker before; it does seem to simplify things a lot - a quick test shows that I can easily access the Consul KV store from the host, from the container running Consul, and from another container. So I'll definitely keep this option in mind, but sadly I have a legacy application with a requirement to isolate as much as possible (without using a separate VM), so I'll need to fight with Docker networking a little more...

Thanks,

Lars.

 

Alvaro Miranda Aguilera

Aug 4, 2015, 5:34:09 AM
to consu...@googlegroups.com
What's the legacy app?

Lars Janssen

Aug 4, 2015, 5:52:16 AM
to Consul
On Tuesday, 4 August 2015 10:34:09 UTC+1, Alvaro Miranda Aguilera wrote:
> What's the legacy app?

Actually I was simplifying a bit; there are two legacy apps - one in PHP/Wordpress and one in PHP with no particular framework - plus a PHP/Symfony 2 app. The concern is that, in the event of a security issue in one app, we want the minimum possible leakage into the others.

I'm still trying to get Consul working inside a Docker container while maintaining access to the KV store on port 8500 from outside that container.

The host is an Amazon instance with the same security groups and in the same VPC as a host that's been running a Consul client for several months. No issues there. Also, it seems, no issues if I bring all the Consul networking onto the host machine using "host" mode instead of the Docker bridge.

I'm using an image "test/ubuntu" built with this Dockerfile:

FROM ubuntu

RUN apt-get update \
    && apt-get -yq install \
        curl \
        unzip \
        jq \
    && rm -rf /var/lib/apt/lists/*

# (The download steps were missing above; these release URLs are my best guess.)
RUN cd /tmp \
    && curl -sSLO https://github.com/hashicorp/envconsul/releases/download/v0.5.0/envconsul_0.5.0_linux_amd64.tar.gz \
    && tar xfz envconsul_0.5.0_linux_amd64.tar.gz \
    && mv envconsul_0.5.0_linux_amd64/envconsul /usr/local/bin \
    && rm /tmp/envconsul_0.5.0_linux_amd64.tar.gz \
    && rmdir /tmp/envconsul_0.5.0_linux_amd64

RUN curl -sSL -o /tmp/consul.zip https://releases.hashicorp.com/consul/0.5.2/consul_0.5.2_linux_amd64.zip \
    && cd /bin && unzip /tmp/consul.zip && chmod +x /bin/consul && rm /tmp/consul.zip


and here is a working docker-compose.yml (probably will revert to "docker run" commands in production, but I find this easier for quick tests):

consul:
    image: test/ubuntu
    command: consul agent -join consul.aws -data-dir /tmp
    net: host

ubuntu:
    image: test/ubuntu
    command: sleep 99999
    net: host


(consul.aws resolves to the three cluster servers elsewhere on the local network).

Using this I can access the KV store on the "ubuntu" container and on the host.

However, if I try this docker-compose.yml:

consul:
    image: test/ubuntu
    command: consul agent -join consul.aws.via-vox.net -data-dir /tmp -advertise 10.192.31.22
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8301:8301/udp"
      - "8302:8302"
      - "8302:8302/udp"
      - "8400:8400"
      - "8500:8500"

ubuntu:
    image: test/ubuntu
    command: sleep 99999
    links:
      - consul


(10.192.31.22 is the host IP)

then I get this error (when running on the host):

curl: (52) Empty reply from server

while from within the other container:

docker exec test_ubuntu_1 curl http://consul:8500
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to consul port 8500: Connection refused


I have tried changing the last port line "8500:8500" to "10.192.31.22:8500:8500" (host IP) and "172.17.42.1:8500:8500" (Docker bridge IP) with similar results.

Note that while testing, I've run docker-compose with this command each time:

conntrack -D -p udp ; docker-compose up

because without clearing the UDP connections, I get a lot of "Refuting a suspect message" errors and the like. It seems to be related to this Docker issue: https://github.com/docker/docker/issues/8795

I can post more details/logs if needed or try something else.

Reverting to the host networking model (all Docker containers see the host network) might still be an option for me, but I'm not too sure. In any case, it seems like it should be possible to run Consul in a container and access port 8500 from the host or another container. Does it have any IP-based access restrictions (as it will be bound to 172.17.x.x but getting traffic from 10.x.x.x)?

Thanks,

Lars.

Alvaro Miranda Aguilera

Aug 4, 2015, 6:42:07 AM
to consu...@googlegroups.com
On Tue, Aug 4, 2015 at 9:52 PM, Lars Janssen <la...@fazy.net> wrote:
> -advertise 10.192.31.22

Hello,

Seems you also need -bind 0.0.0.0 for the consul client

otherwise, it will bind to 127.0.0.1 if I recall correctly.

Lars Janssen

Aug 4, 2015, 1:02:14 PM
to Consul
> Seems you also need -bind 0.0.0.0 for the consul client
>
> otherwise, it will bind to 127.0.0.1 if I recall correctly.

Thanks, I've just tried that, and I still get connection refused from the host machine and the other container.

I also tried -bind 172.17.0.96 (which was due to be the next IP assigned by Docker, and I verified that it was).

Although I wonder whether it has actually bound as expected; the Client Addr still shows 127.0.0.1:

consul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
consul_1 | ==> Starting Consul agent...
consul_1 | ==> Starting Consul agent RPC...
consul_1 | ==> Joining cluster...
consul_1 |     Join completed. Synced with 3 initial agents
consul_1 | ==> Consul agent running!
consul_1 |          Node name: '6c684c6e26c4'
consul_1 |         Datacenter: 'dc1'
consul_1 |             Server: false (bootstrap: false)
consul_1 |        Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
consul_1 |       Cluster Addr: 10.192.31.22 (LAN: 8301, WAN: 8302)
consul_1 |     Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 |              Atlas: <disabled>

The full command to launch is:

consul agent -join consul.aws.via-vox.net -data-dir /tmp -advertise 10.192.31.22 -bind 172.17.0.96

Thanks,

Lars.

Michael Fischer

Aug 4, 2015, 5:09:15 PM
to consu...@googlegroups.com
On Tue, Aug 4, 2015 at 1:07 AM, Lars Janssen <la...@fazy.net> wrote:
> On Monday, 3 August 2015 18:12:02 UTC+1, Michael Fischer wrote:
>>> For now, as my application was not previously in Docker, it wasn't built to pick up everything from the environment. That can (probably should) be changed, although it's not always easy with legacy apps. At the very least, I could have a /start.sh that generates the necessary config file from the env vars.
>>
>> This is what we do, and I recommend it.  You might also want to scrub sensitive environment variables in your start.sh script before exec'ing the application.
>
> Good to know that someone's using this method in practice. I'm just curious about the signalling though, does your application/container restart whenever one of the watched env keys gets changed in Consul?

That is a good question.  Right now, we cannot use envconsul or consul-template to restart services automatically because they lack the ability to delay application reloads (beyond waiting for quiescence); synchronized reloads would cause a (temporary) outage that we cannot afford.

I've filed enhancement issues against both to add a splay option to each:


Until then, we have a few options:

* In your service wrapper, catch SIGTERM/SIGINT and then reload the managed program after a random delay
* Don't let consul-template run the program for you; run consul watch (or equivalent) out of band and send reload signals with it
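
The second option could be sketched like this; the key prefix, container name, and splay window are all assumptions, and `consul watch` simply invokes the handler whenever anything under the prefix changes:

```shell
#!/bin/sh
# Hypothetical out-of-band reload handler: sleep a random "splay" first so a
# whole fleet doesn't reload at the same instant, then signal the container.
cat > /tmp/reload-myapp.sh <<'EOF'
#!/bin/sh
set -eu
# Random delay in 0-29s (od reads two random bytes as an unsigned int).
sleep $(( $(od -An -N2 -tu2 /dev/urandom) % 30 ))
docker kill -s HUP myapp   # container name "myapp" is an assumption
EOF
chmod +x /tmp/reload-myapp.sh

# Wire it up (not run here; requires a local Consul agent):
# consul watch -type keyprefix -prefix myapp/config /tmp/reload-myapp.sh
```

This only helps if the app inside the container traps SIGHUP and does a graceful reload.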

Best regards,

--Michael

Lars Janssen

Aug 5, 2015, 8:22:05 AM
to Consul
On Tuesday, 4 August 2015 18:02:14 UTC+1, Lars Janssen wrote:
> Thanks, I've just tried that and still connection refused from host machine and other container.

I figured this out in the end; the needed option is actually -client, so the working command to run the agent is:

consul agent -join $CONSUL_ADDRESS -data-dir /tmp -advertise $HOST_IP -client 0.0.0.0

I also used the following port mapping:

"172.17.42.1:8500:8500"

This is the IP of the host machine within the docker0 network. I chose this because it's visible to the host and to all Docker containers, but not to other machines on the LAN.

On Tuesday, 4 August 2015 22:09:15 UTC+1, Michael Fischer wrote:
> That is a good question.  Right now, we cannot use envconsul or consul-template to restart services automatically because they lack the ability to delay application reloads (beyond waiting for quiescence); synchronized reloads would cause a (temporary) outage that we cannot afford.
>
> I've filed enhancement issues against both to add a splay option to each:
>
> Until then, we have a few options:
>
> * In your service wrapper, catch SIGTERM/SIGINT and then reload the managed program after a random delay
> * Don't let consul-template run the program for you; run consul watch (or equivalent) out of band and send reload signals with it

All good points, although I haven't figured out how I will handle such restarts in future - I'm not far enough along yet!

I might have been tempted to launch a new Docker instance each time and "cut over" by remapping the ports (something I might need to do for deployment anyway). But a signal/soft restart into the container might work better if it's only for the purpose of picking up Consul config changes.

For now, I still only use Consul key/values at the deployment stage, and in testing I've found two ways to run Docker with envconsul:

envconsul -prefix test docker run -d my_container

or 

envconsul -prefix test docker run my_container

(very much simplified; in fact I have a shell wrapper script with the "docker run" command, and "-e foo=$foo" env definitions).

The first one works only once, and suits my current workflow - because the "-d" option to Docker detaches into the background, and envconsul will then exit. The second way keeps envconsul running and restarts the Docker container each time.

Thanks,

Lars.
