[erlang-questions] How do you go to production?


Max Lapshin

Apr 14, 2019, 5:20:09 AM
to Erlang-Questions
Hi.

I've been developing Flussonic for almost 10 years, and we have some practices for packaging, deploying, running, and maintaining it that are not well known, but which have worked rather well for us.

It is interesting to hear how other people solve this.

First: we do not use releases. When we started down this long path, using releases was not easy. A couple of days ago I tested them with rebar3 and it has become much easier; perhaps we would use releases if we were starting today.




Next: we use our own replacement for the fpm script for packaging. I can boast that it seems to be the only existing implementation of the rpm format outside of the original library, but I would rather never go down that path again. I wrote it in the pre-Docker era and, frankly speaking, it was a traumatic experience. However, we also use it for Debian packaging and it is really convenient.


Some time ago we switched to systemd. I personally consider systemd a very badly designed thing, created without any discussion with existing system administrators. For example, systemd doesn't offer config validation before launch. Another brilliant idea is offering libsystemd for linking into your application: an unknown library of unknown quality. What can go wrong if you link it into your Erlang or Java application?


Use Type=notify in your daemon.service.
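
The notification itself is just a datagram containing "READY=1" sent to the socket named in the NOTIFY_SOCKET environment variable, so an Erlang node can satisfy Type=notify without linking libsystemd at all. A minimal sketch (module name and the unit file paths below are made up for illustration):

    %% Minimal sketch, not our actual code. Assumes a unit file roughly like:
    %%
    %%   [Service]
    %%   Type=notify
    %%   NotifyAccess=all
    %%   ExecStart=/opt/mydaemon/bin/mydaemon foreground
    %%
    %% systemd waits for READY=1 before considering the service started.
    -module(sd_notify_sketch).
    -export([notify_ready/0]).

    notify_ready() ->
        case os:getenv("NOTIFY_SOCKET") of
            false ->
                ok;                                  % not started by systemd
            Path0 ->
                %% A leading "@" means an abstract unix socket: replace it
                %% with a 0 byte, as the sd_notify protocol expects.
                Path = case Path0 of
                           [$@ | Rest] -> <<0, (list_to_binary(Rest))/binary>>;
                           _           -> Path0
                       end,
                {ok, Sock} = gen_udp:open(0, [local]),
                ok = gen_udp:send(Sock, {local, Path}, 0, <<"READY=1">>),
                gen_udp:close(Sock)
        end.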




After you manage to launch your Erlang daemon, you need to collect statistics. We had to add some more Linux-related tools to fetch CPU usage, disk I/O usage, system RAM usage (swap, etc.), per-interface network statistics, UDP error counts, NVIDIA card usage, and so on.
If this is worth open-sourcing separately, I think we can extract it.
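
To give an idea of what such a collector boils down to, here is an illustrative sketch (not the os_stat library itself) that reads the aggregate CPU counters straight from /proc/stat; CPU usage is then the delta of these counters between two ticks:

    %% Illustrative only: the kind of Linux-specific plumbing a collector does.
    -module(procstat_sketch).
    -export([cpu_counters/0]).

    %% Returns [User, Nice, System, Idle, Iowait, ...] in jiffies.
    cpu_counters() ->
        {ok, Bin} = file:read_file("/proc/stat"),
        [Line | _] = binary:split(Bin, <<"\n">>),
        [<<"cpu">> | Fields] = binary:split(Line, <<" ">>, [global, trim_all]),
        [binary_to_integer(F) || F <- Fields].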





Our os_stat library is linked with our pure-Erlang pulsedb library. We try to keep as few dependencies as possible, so we collect all ticks from the monitoring tools inside the Erlang library pulsedb: https://github.com/pulsedb/pulsedb (we should maybe update the public branch).

It can store several thousand metrics at one tick every 1-3 seconds.
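
To illustrate the pattern (this is a generic sketch of the tick loop, not pulsedb's actual API; store/1 is a placeholder for whatever sink you use):

    -module(tick_collector_sketch).
    -behaviour(gen_server).
    -export([start_link/0, init/1, handle_info/2, handle_call/3, handle_cast/2]).

    -define(TICK_MS, 2000).

    start_link() -> gen_server:start_link(?MODULE, [], []).

    init([]) ->
        erlang:send_after(?TICK_MS, self(), tick),
        {ok, #{}}.

    handle_info(tick, State) ->
        erlang:send_after(?TICK_MS, self(), tick),
        Metrics = [{run_queue, erlang:statistics(run_queue)},
                   {memory_total, erlang:memory(total)},
                   {process_count, erlang:system_info(process_count)}],
        store(Metrics),
        {noreply, State}.

    handle_call(_Req, _From, State) -> {reply, ok, State}.
    handle_cast(_Msg, State) -> {noreply, State}.

    store(Metrics) ->
        %% Placeholder: hand the tick to pulsedb (or any other sink) here.
        io:format("tick: ~p~n", [Metrics]).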





Support is an important part of our business, because customers cannot always just launch the software; they often need help. We have many people on the support staff, and I do not want to manage each of their public SSH keys on customers' servers. So we have written an SSH proxy: a system that logs in to the customer's server with a single private key and lets a support engineer use his own key: https://github.com/flussonic/ssh-proxy
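
To give an idea of the outbound half of such a proxy, here is a rough sketch using OTP's ssh application. This is not the actual ssh-proxy code; the user name and the key directory are placeholders. The support engineer authenticates to the proxy with his own key, and the proxy then opens the session towards the customer's host with the single shared private key:

    -module(ssh_proxy_sketch).
    -export([run_on_customer/2]).

    run_on_customer(Host, Command) ->
        _ = ssh:start(),
        {ok, Conn} = ssh:connect(Host, 22,
                                 [{user, "support"},
                                  {user_dir, "/etc/ssh-proxy/keys"},  % placeholder path
                                  {silently_accept_hosts, true},
                                  {user_interaction, false}]),
        {ok, Chan} = ssh_connection:session_channel(Conn, 5000),
        success = ssh_connection:exec(Conn, Chan, Command, 5000),
        receive
            {ssh_cm, Conn, {data, Chan, _Type, Data}} -> Data
        after 5000 ->
            timeout
        end.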



All these things are rather useless for development, and many of them are not required for in-house deployments, but it is hard to live without them when you sell software.
What is your experience with such things that standard Erlang lacks?

Lloyd R. Prentice

Apr 14, 2019, 1:58:52 PM
to Max Lapshin, Erlang-Questions
Thanks, Max, for sharing your experience.

Some day I would love to see a series of Erlang deployment/support case studies and tutorials, ranging from a simple release through distributed clusters.

All the best,

LRP

Sent from my iPad

Dieter Schön

Apr 16, 2019, 3:46:32 PM
to Erlang-Questions

Hi,

here is a case from the other end of the spectrum.
I developed a protocol converter for a satellite test system; it is installed on 2 (in words: two) PCs.

For development, I used one application and rebar3. Production is just "rebar3 as prod tar" (from a labelled git commit).
Testing was done on the two PCs, where I used one as the system under test and the other as the test harness/data generator.
I also used Wireshark and other third-party tools, to prevent incestuous behaviour (testing the system only against my own code).

To test the installation procedure I used Docker, which was really nice. I had a blank machine in two seconds, where I could load and execute the installation script from the host. Turnaround cycles were just a few seconds.
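
Something along these lines, a throwaway "blank machine" per run (the image and script names here are only placeholders):

    docker run --rm -it -v "$PWD":/install ubuntu:18.04 bash /install/install.sh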

What else... the application is quite small; it fits into one Erlang application. Apart from it, the release only contains observer and sasl.
Unit testing was done in EUnit.
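
For readers new to rebar3 releases, a rebar.config along these lines would produce that kind of release with "rebar3 as prod tar" (the app name and versions here are made up):

    %% Sketch only; adjust app name, version and included applications.
    {relx, [
        {release, {converter, "1.0.0"}, [converter, sasl, observer]},
        {dev_mode, true},
        {include_erts, false}
    ]}.

    {profiles, [
        {prod, [
            {relx, [{dev_mode, false},
                    {include_erts, true}]}
        ]}
    ]}.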


This was my first project using Erlang, and the learning curve was quite gentle.

Kind regards,
Dieter

ll...@writersglen.com

Apr 16, 2019, 3:54:14 PM
to Dieter Schön, Erlang-Questions

Hi Dieter,

 

Many thanks for sharing your experience.

 

Could you please outline your installation procedure? Also, what is the size of your Docker application image (RAM footprint)? I've been considering LXC containers, but I'd like to test with Docker.

 

All the best,

 

Lloyd

Dieter Schön

Apr 16, 2019, 4:25:00 PM
to erlang-q...@erlang.org

Hi Lloyd,

this week I am not in the office, so I can just try to do a brain dump.

The installation was quite simple: I installed a plain Ubuntu desktop and then extracted the Erlang tar file into /opt. This is done by one or two shell scripts.
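
In essence they boil down to something like this (paths and names here are placeholders):

    #!/bin/sh
    # Roughly what such an install script amounts to.
    set -e
    mkdir -p /opt/converter
    tar -xzf converter-1.0.0.tar.gz -C /opt/converter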


The Erlang app is started on demand by another application, which is launched by the operator. So I do not have to deal with systemd (which I also do not like very much) or the like. My application could run as a server, but in the current use case there is no need for that.


For the footprint of the Docker container I will have to come back next week; I really have no idea how big it is. In former times I used VirtualBox VMs with snapshots for similar tasks, but if the focus is quite narrow, a Docker container is far easier and quicker than a complete VM.


Regards,

Dieter

Lloyd R. Prentice

Apr 16, 2019, 6:15:38 PM
to Dieter Schön, erlang-q...@erlang.org
Hi Dieter,

Really helpful!  

From my poking around, Erlang deployment gets too little attention on the web and in the literature, particularly in this age of cloud deployment, containers, and distributed systems. So thanks again for your nuts-and-bolts overview.

I’d love to read how others have brought Erlang apps and websites into production and lessons learned.

Best wishes, 

Lloyd


Sent from my iPad

Sölvi Páll Ásgeirsson

Apr 17, 2019, 4:42:18 PM
to Lloyd R. Prentice, Dieter Schön, erlang-q...@erlang.org
We do a couple of things:
- package some apps as RPMs for running on metal
- package things as docker containers to be scheduled by ECS

Generally, we just use ‘rebar3 as prod release’ and copy the resulting release into the package.  For RPMs, we use fpm.
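
For reference, an fpm invocation for wrapping a rebar3 release into an RPM looks roughly like this (package name, version and paths are only placeholders):

    fpm -s dir -t rpm -n myapp -v 1.0.0 \
        --prefix /opt/myapp \
        -C _build/prod/rel/myapp .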

I suspect it gets little attention because it's extremely simple to make a prod-ready release.

Sent from my iPhone

Lloyd R. Prentice

Apr 17, 2019, 5:19:25 PM
to Gerhard Lazu, Dieter Schön, Erlang Questions
Thanks Gerhard,

> Anyone on the team can have a local setup as close to production as possible with a single command: make deploy-docker-stack-local. Yes, that's right: make is the secret sauce that holds everything together.

Music to my ears!  Since my work is self-funded, minimizing overhead through testing, beta, and start-up phases is very high on my list of essential goals.

This detailed overview gives me greater insight into, and confidence about, what's possible.

All the best,

Lloyd

Sent from my iPad

On Apr 17, 2019, at 5:11 AM, Gerhard Lazu <ger...@lazu.co.uk> wrote:

We have been running Phoenix in production since 2016. The app serves a few TBs of traffic every month.

We used to manage the entire infrastructure using Ansible, including app deploys as Docker containers. We used Concourse for CI; this is what the entire infrastructure pipeline looked like: http://pipeline.gerhard.io/#/57 . The next couple of slides capture some of the tasks that used to run in various CI jobs. A peek into what used to happen on a new git commit to the app repository: http://pipeline.gerhard.io/#/60 . The entire infrastructure codebase is public: thechangelog/infrastructure

We have learned so much from this initial way of doing things that we revamped the entire setup earlier this year. The new setup is captured in "The new changelog.com setup for 2019". This is what the current development workflow looks like:

[image: current development workflow diagram]

My favourite part of the new setup is how the core config is captured in a Docker stack. This includes monitoring, logging, backups & app updater, as well as the regular 3-tier suspects: proxy, app & db. Because of this approach, the core setup can be reproduced locally, since all IaaS-specific gubbins are isolated in a separate layer. Anyone on the team can have a local setup as close to production as possible with a single command: make deploy-docker-stack-local. Yes, that's right: make is the secret sauce that holds everything together. It's always the simple things that have the potential of making the biggest difference (pun intended):

[image]
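
Roughly speaking, such a make target wraps a single command like this (the stack file and stack name here are placeholders, not the actual setup):

    docker stack deploy --compose-file docker-stack.yml changelog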

This is what production looks like right now - make ctop

[image: make ctop output in production]

Cheers, Gerhard.

Pierre Fenoll

Apr 17, 2019, 5:31:51 PM
to Sölvi Páll Ásgeirsson, Dieter Schön, erlang-q...@erlang.org
Piping in since I'm not seeing our deployment procedure mentioned here (I may have missed emails because Gmail seems to dislike this mailing-list server).

We push a git tag following semver (a plugin that would do something like Helm does, automatically deciding which of major/minor/patch to bump, would be interesting).
This triggers GitLab CI jobs that run tests and build Docker images at the same time (so QA/hotfixes don't have to wait for tests, because skipping tests isn't easy).
Docker images are built in stages. The release tarball is unpacked into the base OTP image and weighs ~40 MB compressed.
It's nice to keep this size small so Kubernetes can pull it quickly.
We always keep at least two instances running in prod behind the load balancer so we can deploy progressively. 
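
A staged build of this kind looks roughly like the following sketch (image tags, app name and paths are placeholders, and the sketch uses rebar3 only for illustration):

    # Stage 1: build the release (assumes rebar3 is available in the build image).
    FROM erlang:21 AS build
    WORKDIR /src
    COPY . .
    RUN rebar3 as prod tar

    # Stage 2: unpack the release tarball into the base OTP image.
    FROM erlang:21-slim
    WORKDIR /opt/myapp
    COPY --from=build /src/_build/prod/rel/myapp/myapp-1.0.0.tar.gz .
    RUN tar -xzf myapp-1.0.0.tar.gz && rm myapp-1.0.0.tar.gz
    CMD ["/opt/myapp/bin/myapp", "foreground"]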

This only handles 20 requests per second though.
Tagging is a manual action.
We don't use relups, and in fact I've never worked at a company that did.
It seems canary deployments are enough most of the time, and easier than reasoning about the state changes a relup involves.
From this perspective, I think that "embedded mode" should be the default mode of development. It would help catch things earlier and maybe allow for more aggressive optimisations.
Also, we're running Elixir.
--

Cheers,
-- 
Pierre Fenoll

ll...@writersglen.com

Apr 17, 2019, 6:10:41 PM
to Pierre Fenoll, Dieter Schön, erlang-q...@erlang.org

Hi Pierre,

 

It's early days yet, so I'm still shopping for ideas and am much inspired to explore some presented so far on this thread.

 

My system has some 25 services bundled into one big Erlang Nitrogen app, with at least one, if not more, connected through a WAN (an out-sourced Discourse).

 

Some of the services are public with high read/low write; others are low read/low write with controlled access, but require the maximum possible data availability and integrity. I'm exploring, and intend to test, various ways of factoring these services into containers.

 

I'd love to hear more about your experience with Kubernetes. It scares me a bit since my devops skills are so deficient.

 

Many thanks,

 

Lloyd
