_______________________________________________
erlang-questions mailing list
erlang-q...@erlang.org
http://erlang.org/mailman/listinfo/erlang-questions
Hi Dieter,
Many thanks for sharing your experience.
Could you please outline your installation procedure? Also, what is the size of your Docker application image, and its RAM footprint? I've been considering LXC containers, but I'd like to test with Docker.
All the best,
Lloyd
Hi Lloyd,
this week I am not in the office, so I can only offer a brain dump.
The installation was quite simple: I installed a plain Ubuntu desktop,
then extracted the Erlang tarball into /opt. This is done by one or two shell scripts.
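A minimal sketch of such an install script, assuming a prebuilt Erlang/OTP tarball (the filename, tarball layout, and target prefix here are assumptions, not Dieter's actual script):

```shell
#!/bin/sh
set -eu

# install_erlang TARBALL PREFIX
# Unpacks a prebuilt Erlang/OTP tarball (e.g. otp-24.3.tar.gz) under
# PREFIX (e.g. /opt), matching the "extract into /opt" approach above.
install_erlang() {
    tarball="$1"
    prefix="$2"
    mkdir -p "$prefix"
    tar -xzf "$tarball" -C "$prefix"
}

# Example (paths are hypothetical):
#   install_erlang /tmp/otp-24.3.tar.gz /opt
```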
The Erlang app is started on demand by another application, which is launched by the operator.
So I do not have to deal with systemd (which I also do not like very much) or the like.
My application could run as a server, but in the current use case there is no need for that.
For the footprint of the Docker container I will have to get back to you next week;
I really have no idea how big it is. In the past I used VirtualBox VMs with snapshots
for similar tasks, but when the scope is fairly narrow, a Docker container is far easier and quicker
than a complete VM.
Regards,
Dieter
We have been running Phoenix in production since 2016. The app serves a few TBs of traffic every month. We used to manage the entire infrastructure using Ansible, including app deploys as Docker containers. We used Concourse for CI; this is what the entire infrastructure pipeline looked like: http://pipeline.gerhard.io/#/57 . The next couple of slides capture some of the tasks that used to run in various CI jobs. A peek into what used to happen on a new git commit to the app repository: http://pipeline.gerhard.io/#/60 . The entire infrastructure codebase is public: thechangelog/infrastructure

We learned so much from this initial way of doing things that we revamped the entire setup earlier this year. The new setup is captured in "The new changelog.com setup for 2019". This is what the current development workflow looks like:

<image.png>

My favourite part of the new setup is how the core config is captured in a Docker stack. This includes monitoring, logging, backups & app updater, as well as the regular 3-tier suspects: proxy, app & db. Because of this approach, the core setup can be reproduced locally, since all IaaS-specific gubbins are isolated in a separate layer. Anyone on the team can have a local setup as close to production as possible with a single command: make deploy-docker-stack-local . Yes, that's right: make is the secret sauce that holds everything together. It's always the simple things that have the potential of making the biggest difference (pun intended):

<image.png>

This is what production looks like right now - make ctop

<image.png>

Cheers, Gerhard.
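For readers curious how such a make target could wrap a Docker stack, here is a hedged sketch; the target name matches the one above, but the stack name, compose-file path, and recipe body are assumptions, not the actual thechangelog/infrastructure code:

```make
# Hypothetical sketch of a local stack deploy target.
STACK_NAME ?= changelog
STACK_FILE ?= docker/stack.yml

.PHONY: deploy-docker-stack-local
deploy-docker-stack-local:
	# Ensure this node can act as a single-node Swarm manager (no-op if already one).
	docker swarm init 2>/dev/null || true
	# Deploy (or update in place) the whole stack: proxy, app, db, monitoring, ...
	docker stack deploy --compose-file $(STACK_FILE) $(STACK_NAME)
```

`docker stack deploy` is idempotent, which is what makes "one command, as close to production as possible" workable: re-running the target converges the local stack toward the compose file rather than failing.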
Hi Pierre,
It's early days yet, so I'm still shopping for ideas and am much inspired to explore some presented so far on this thread.
My system has some 25 services bundled into one big Erlang Nitrogen app, with at least one, if not more, connected through the WAN (out-sourced Discourse).
Some of the services are public with high read/low write; others are low read/low write controlled access but require maximum possible data availability and integrity. I'm exploring and intend to test various ways of factoring these services into containers.
I'd love to hear more about your experience with Kubernetes. Scares me a bit since my devops skills are so deficient.
Many thanks,
Lloyd