Containerization progress


Péter Szilágyi

Nov 5, 2014, 9:33:21 AM
to projec...@googlegroups.com
Hi all,

  Just wanted to show off some minor progress I've made in containerizing Iris, in case anyone's in the mood to play with it.

  I've prepared automated trusted builds through the Docker registry for the previous two releases of Iris (v0.3.1 and v0.3.2). The images are based on Debian 7, as that was the smallest base I could find. The download size is around 50-60MB, with the final image at 109MB. You can pull via:

docker pull iris/iris-v0.3.2

  Additionally, I've prepared a rolling development image that follows the master branch of the Iris repository. Whenever a new commit is made, the build service is notified and a new Docker image is built for it. This is at:

docker pull iris/iris-dev

  Since Iris requires free access to all kinds of random ports and makes quite a lot of outbound connections, the best setup is not to firewall it and then try to figure out how to route the connections, but rather to permit Iris to access the host's network stack and use that directly. Hence, run the containers with:

docker run --net="host" iris/iris-v0.3.2 -dev

A few ideas I'm looking forward to implementing:
  • Have an (optional) auto-restart mechanism if Iris crashes, so that it won't require fancy docker configs to achieve (a rough sketch follows this list).
  • Introduce a snake-oil key so that containers can be tested without RSA key configs.
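
  Just to illustrate the first idea, a minimal sketch of what such a restart wrapper could look like as a container entrypoint (this is not what the current images do, and the /usr/local/bin/iris path is purely my assumption):

#!/bin/sh
# Hypothetical wrapper entrypoint: relaunch Iris whenever it exits uncleanly.
while true; do
  /usr/local/bin/iris "$@"   # path to the Iris binary inside the image is assumed
  [ $? -eq 0 ] && break      # a clean shutdown (exit code 0) stops the loop
  echo "iris crashed, restarting in 1s..." >&2
  sleep 1
done
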
Note, I've done hardly any testing yet; just thought I'd share the progress as it's developing.

Cheers,
  Peter

PPS: Just realized that currently there is no simple way to inject an RSA key into the container. I'll try to figure something out, but in the meantime, if anyone has any suggestions/ideas, I'm all ears.

Péter Szilágyi

Nov 7, 2014, 9:40:46 AM
to projec...@googlegroups.com
Hey all,

  Next update on the docker containerization :) I've figured out how to inject RSA keys into the Iris containers (in a simple way).

  Just a side note: previously you had to do two things to use the Iris containers in non-developer mode:
  • Copy an RSA private key into the container (various ways: build a new image based off the Iris one; use docker cp; use docker run with a mounted volume)
  • Specify the path to the uploaded key via the -rsa flag, as before (see the sketch just after this list)
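
  For example, the mounted-volume variant of that two-step flow would look something like this (the in-container path /key.rsa is just a placeholder of mine; only the -rsa flag itself comes from Iris):

docker run --net="host" -v /path/to/key:/key.rsa iris/iris-v0.3.2 -net mynet -rsa /key.rsa
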
  I've spent the better part of today trying to figure out how to solve this elegantly, and although I've got a solution, it's still far from the best. But it works and is relatively simple. My main issue with the above approach was the redundancy and complexity of both uploading a key and configuring it.

  The solution was to have a fixed, pre-configured location for the private key (/iris.rsa) from where the container will try to load it. If it's not found there, you're back to the previous scenario (so you still have all the flexibility of configuring it as you like).

  The simplest way to get a private key from the host into that particular location is via a mounted file:

docker run --net="host" -v /path/to/key:/iris.rsa iris/iris-v0.3.2 -net mynet

  The added benefit of this solution is that you can create an authenticated container by building one with the key already injected into the previously mentioned location (i.e. Dockerfile: FROM iris/iris-v0.3.2; ADD /path/to/key /iris.rsa).
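
  A minimal sketch of that build-time approach, assuming the key sits next to the Dockerfile as iris.rsa and using myorg/iris-mynet as a made-up image name:

# Dockerfile
FROM iris/iris-v0.3.2
ADD iris.rsa /iris.rsa

docker build -t myorg/iris-mynet .
docker run --net="host" myorg/iris-mynet -net mynet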

  I've tried various other solutions too, but here's why they were discarded:
  • Pass the key through an env var. Although this looks OK, the key will almost always be in a file (be it bare metal, a VM, or CoreOS). Getting it into an env var is just another step that cannot be circumvented.
  • Load the key from a metadata server. I actually implemented this on Google Compute Engine and it is a very elegant solution (i.e. specify the key via -rsa=meta://iris.key). Unfortunately, only GCE supports custom metadata fields. The others either provide a single user-data field for the entire instance (Amazon, DigitalOcean), which is already used for a gazillion things, or have a very limited space allowance (i.e. 256-byte values, Rackspace). It somehow didn't feel right to have a feature running exclusively on GCE, so I decided to drop it.
  • Pass the key data itself in the argument. Well, yeah. Security, ugly, etc.
  I still haven't done much testing, but by running the above command multiple times on a single machine, the Iris nodes successfully find each other :)
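
  If anyone wants to reproduce that quick check, something along these lines should do it (the -d flag just detaches the containers, and the container names are my own):

docker run -d --name iris-a --net="host" -v /path/to/key:/iris.rsa iris/iris-v0.3.2 -net mynet
docker run -d --name iris-b --net="host" -v /path/to/key:/iris.rsa iris/iris-v0.3.2 -net mynet
docker logs iris-a   # the log output should show the two nodes discovering each other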

Cheers,
  Peter

Craig Wickesser

Dec 11, 2014, 6:10:34 AM
to projec...@googlegroups.com
Which ports/protocols does Iris need available? I'd like to try Iris in Docker containers on Kubernetes, and the "--net=host" option is not supported in Kubernetes, so I'd have to resort to configuring iptables appropriately (which is fine, assuming I know the ports/protocols to allow).

Thanks.

Péter Szilágyi

Dec 11, 2014, 10:18:52 AM
to Craig Wickesser, projec...@googlegroups.com
Hey Craig,

  In its original design (Run Forrest Run -> System Architecture), Iris was meant to run as a core process on the host machine, to which smaller local processes attach. Because of this, I made certain assumptions that are not applicable in a containerized environment (Iris has a few predefined UDP ports for discovery, but uses ad-hoc TCP ports for data transfer; Iris opens a localhost-only port for client attachments). As a consequence, containerizing Iris currently leaves it too isolated to act as a messaging middleware.

  In order to use Iris, you need to ensure that its ports are fully forwarded by the local machine. Since it currently uses ad-hoc ports, that is only possible by forwarding all of them; I don't know whether Kubernetes supports such a thing, however. Additionally, even if Iris is running, it will listen on localhost only, hence limiting itself to the owning container, so clients in other containers won't be able to find it.
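
  Purely as an illustration of what "forwarding all" would mean on a plain iptables host (the 10.0.0.0/8 subnet is a placeholder for your cluster network, and such a blanket rule is only sane on a trusted network):

iptables -A INPUT -s 10.0.0.0/8 -p udp -j ACCEPT
iptables -A INPUT -s 10.0.0.0/8 -p tcp -j ACCEPT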

  Currently I'm trying to decide which direction to take. I could open up a lot of configuration possibilities, but that would litter the CLI API quite heavily. Or I could instead try to detect the runtime environment and behave differently based on it (e.g. if inside a container, open the relay port on the public interface instead of localhost; or if on a CoreOS host, open the relay on the internal Docker interfaces too, not just localhost).
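
  For the detection route, one possible heuristic (not something Iris does today, just a common trick) would be to check whether the init process' cgroups mention Docker:

# succeeds when the cgroup hierarchy of PID 1 suggests we're inside a Docker container
grep -q docker /proc/1/cgroup && echo "looks like a container"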

  However, for the moment, Iris will not run smoothly in a containerized environment.

Cheers,
  Peter

