On 05/26, M. Murray wrote:
> FWIW we are investigating the possibility of running multiple instances of
> Tryton on a single server or server farm to serve multiple facilities
> within one geographic region. Larger facilities get their own physical
> server with Tryton and PostgreSQL installed. However, for smaller
> facilities, and to enable a faster rollout to them, we are investigating
> alternatives to installing physical servers everywhere.
Until about a year ago our Tryton cluster used VMs and AMIs too, but
currently all of our instances are Docker containers.
> 1. As seen above, I use the number of party.party records as a kind of
> gauge for the size of the database. Is there another metric that one may
> suggest? Are there any inherent dangers in using this?
This varies from business to business. The load is usually driven by the
number of transactions (requests per minute, say), and party records may
not be a good proxy for how many transactions you will end up with.
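If you want a rough baseline before sizing anything, counting requests per
minute from an existing web front end's access log is a quick way to get one.
The sketch below assumes a default-format nginx access log at a placeholder
path; neither is part of the original setup.

    # Busiest minutes from a default-format nginx access log.
    # The log path and the timestamp field position ($4) are assumptions.
    awk '{print substr($4, 2, 17)}' /var/log/nginx/access.log \
        | uniq -c | sort -rn | head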
> 2. In the case of a Tryton server farm, which ways have members of the
> community used that worked well?
> - One user per instance, all instances in one OS container? or
> - One VM container per instance, where each VM container has its own
> PostgreSQL + Tryton installation?
> - One VM per instance, all talking to a central Postgres?
> - Something else?
This is how our Tryton server farm is configured (we are mostly on Amazon AWS):
* Each customer gets one or more Tryton Docker containers behind a load
balancer. Depending on the environment the load balancer could be ELB,
nginx or HAProxy. For SSL termination you will need ELB or nginx (a
minimal nginx sketch follows this list).
* While spinning up Tryton containers and terminating them based on load
is easy, the same cannot be done with the database. Moving databases
between hosts is a pain. We have a smaller number of powerful database
servers with streaming replication and fast SSDs, to which the instances
connect with a separate username and password per customer. So what we
have is, in essence, a central Postgres.
* Elasticsearch (if you use full-text search) is also deployed centrally
like Postgres, but that is another can of worms when it comes to
authentication and separating data between customers.
* Redis (or memcached) is deployed in a container per customer.
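To make the load balancer part concrete, a minimal nginx front end for one
customer could look roughly like the sketch below. This is illustrative only:
the upstream addresses, hostname, ports and certificate paths are
placeholders, not values from our actual setup.

    # Hypothetical nginx config for one customer ("acme"); every name,
    # address and path below is a placeholder.
    upstream tryton_acme {
        server 10.0.1.11:8000;
        server 10.0.1.12:8000;
    }

    server {
        listen 443 ssl;
        server_name acme.example.com;

        ssl_certificate     /etc/nginx/ssl/acme.crt;
        ssl_certificate_key /etc/nginx/ssl/acme.key;

        location / {
            proxy_pass http://tryton_acme;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

Scaling a customer out is then just starting another container and adding one
more server line to the upstream block; scaling in is the reverse.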
> 3. Are there any recommendations on sizing a server to determine how
> many instances can run on a given server? Essentially an instance to server
> ratio.
We have never bothered to size servers ahead of time because Tryton
scales "out" pretty well. If there is higher load we add more instances
and scale down when the load drops. This works for us because we are on
AWS, where resources are practically unlimited.
The same approach won't work for databases. Zero-downtime database
failover is extremely hard :(
> 4. Apart from trytond, postgres, logs, attachments, json-data, are there
> other things to consider for a single tryton server instance?
* For logs we use Logstash, which ships them to one central Elasticsearch
cluster.
* Attachments (the only other data that needs persistence) are stored on
Amazon S3. We also have plugins out there which use MongoDB GridFS (use
them at your own risk ;-)).
* Any documentation or JS applications served from Tryton's built-in HTTP
server are baked into the Docker container.
* Something that does **not** work in this environment is WebDAV and
CalDAV; the proposed change to use WsgiDAV should fix that.
> 2. I'm thinking that it would be better to just run multiple trytond
> instances on the bare server. No VM separation/containment. I wouldn't even
> want to install trytond multiple times. Each user (one per instance)
> would run the globally installed trytond.
Docker containers are our preference because each customer can run
different modules. They give you sufficient separation between instances
without the overhead of VMs.
> 4. Seriously, with the setup I described in my #2, creating a new instance
> would be (1) create the user, (2) unzip the trytond-user-skel.tar.gz, (3)
> update config and run.
Starting a new instance is as simple as:
`docker run openlabs/tryton`
The database connection and other configuration are provided as
environment variables.
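As a rough sketch (the environment variable name, credentials, ports and
container name below are placeholders, not the actual interface of the image):

    # Placeholder variable name, credentials and ports; the real image
    # may expect a different configuration interface.
    docker run -d --name tryton-acme-1 \
        -e TRYTOND_DATABASE_URI="postgresql://acme_user:secret@pg-central:5432/" \
        -p 8000:8000 \
        openlabs/tryton

The per-customer username and password in the database URI is what ties each
customer's containers back to the central Postgres mentioned above.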
All of our architecture is based on the twelve-factor app [1] design
principles. At the moment we are moving our cluster to CoreOS [2], using
fleet to schedule instances.
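A fleet unit for one of these containers is just a systemd-style unit file.
The sketch below is illustrative only; the unit name, environment variable
and options are placeholders, not one of our real units.

    # tryton-acme@.service -- rough sketch, placeholder names throughout
    [Unit]
    Description=Tryton container for customer acme (instance %i)
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker pull openlabs/tryton
    ExecStart=/usr/bin/docker run --rm --name tryton-acme-%i \
        -e TRYTOND_DATABASE_URI=postgresql://acme_user:secret@pg-central:5432/ \
        -p 800%i:8000 openlabs/tryton
    ExecStop=/usr/bin/docker stop tryton-acme-%i

    [X-Fleet]
    Conflicts=tryton-acme@*.service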
[1] http://12factor.net/
[2] https://coreos.com/docs/
Thanks & Regards
Sharoon Thomas
Openlabs Technologies & Consulting (P) Limited
w: http://www.openlabs.co.in
m: +1 813.793.6736 (OPEN) Extn. 200
t: @sharoonthomas
c: https://clarity.fm/sharoonthomas (Request a call)
- We Win When our Customers Win