WSGI application server for Tryton


Sergio Morillo

Aug 26, 2016, 12:07:17 PM
to tryton-dev
I want to deploy a load-balanced Tryton 4.0 server for the first time, so I would like some feedback about which WSGI application server to choose.
After googling I've noticed people use uWSGI, or Gunicorn, or Gunicorn + Nginx, and so on.

I would appreciate it if someone could advise me (and the community) on this topic.

Sergio Morillo

Sep 7, 2016, 1:39:56 PM
to tryton-dev
In the end I used uWSGI, following the example from the earlier WSGI Tryton patch [1] but using the new trytond.application.app [2].
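
For the record, the shape of the setup (paths and options here are only illustrative, not my exact configuration):

    # illustrative only; see the referenced example for the real configuration
    # (--socket speaks the uwsgi protocol, meant to sit behind a frontend such as nginx)
    uwsgi --master --processes 4 \
          --socket /run/tryton/uwsgi.sock \
          --module trytond.application:app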

Sergi Almacellas Abellana

Sep 13, 2016, 4:54:51 AM
to tryto...@googlegroups.com
On 07/09/16 at 19:39, Sergio Morillo wrote:
> Finally I used uWSGI following the example of previous wsgi tryton patch
> [1] but using new trytond.application.app [2].
Are you using multiple workers?
I'm a little bit curious about how you manage the Tryton cache. Are you
using the standard implementation or another backend (e.g. Redis)?

Also: how do you manage to reload the workers when a new module is
updated/installed? Do you reload the services manually?

--
Sergi Almacellas Abellana
www.koolpi.com
Twitter: @pokoli_srk

Sergio Morillo

Sep 14, 2016, 10:28:35 AM
to tryton-dev


On Tuesday, September 13, 2016 at 10:54:51 (UTC+2), Sergi Almacellas Abellana wrote:
On 07/09/16 at 19:39, Sergio Morillo wrote:
> Finally I used uWSGI following the example of previous wsgi tryton patch
> [1] but using new trytond.application.app [2].
Are you using multiple workers?
Four workers, as in the example.

I'm a little bit curious about how you manage the Tryton cache. Are you
using the standard implementation or another backend (e.g. Redis)?
Standard implementation. I'm still a newbie in Tryton and unfamiliar with any Tryton cache management problems (if they exist), so I would appreciate your comments about it.


Also: how do you manage to reload the workers when a new module is
updated/installed? Do you reload the services manually?
Manually with supervisord.
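
Roughly like this, just as an illustration (the program name and paths are placeholders, not my actual files):

    ; illustrative supervisord entry -- names and paths are placeholders
    [program:trytond-uwsgi]
    command=/usr/local/bin/uwsgi --ini /etc/tryton/uwsgi.ini
    user=tryton
    autostart=true
    autorestart=true
    stopsignal=QUIT

    ; after installing/updating a module:
    ;   supervisorctl restart trytond-uwsgi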

Cédric Krier

Sep 14, 2016, 11:30:20 AM
to tryton-dev
On 2016-09-14 07:28, Sergio Morillo wrote:
> On Tuesday, September 13, 2016 at 10:54:51 (UTC+2), Sergi Almacellas
> Abellana wrote:
> I'm a little bit curious about how you manage the tryton cache. Are you
> > using the standard implementation or using another backend (i.e: Redis)?
> >
> Standard implementation. I'm still newbie in tryton and unfamiliar with
> tryton cache management problems (if exist). So I would appreciate your
> comments about it.

The cache is per process and there is, by default, an invalidation
mechanism between processes. So normally there is no problem.

--
Cédric Krier - B2CK SPRL
Email/Jabber: cedric...@b2ck.com
Tel: +32 472 54 46 59
Website: http://www.b2ck.com/

Ali Kefia

Sep 19, 2016, 5:00:06 AM
to tryton-dev
We faced this problem last year and worked on scaling trytond this way:
  • Shared cache on Redis
  • Load balancing with Nginx
  • Sync mechanism via Redis PUB/SUB to reload the Pool
A description of Coog (a specialized ERP built on trytond) is here: http://coopengo.com/coog-v1-6-nouvelle-architecture/
The implementation is on a trytond fork: https://github.com/coopengo/trytond
For more details or help using it, send me an email.

Sorry that it is still custom code; we plan to discuss all of this at TUB 2016 and hopefully get parts of it into Tryton out of the box.
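
To give an idea of the PUB/SUB part, here is a minimal sketch of the mechanism with the redis-py client. It is not the Coog code: the channel name, callback and client settings are made up for illustration.

    # minimal sketch of the idea, not the Coog implementation:
    # every worker subscribes to a Redis channel and reloads its Pool
    # when another worker publishes a reload message.
    import threading

    import redis

    CHANNEL = 'trytond:pool:reload'  # made-up channel name
    client = redis.StrictRedis(host='localhost', port=6379, db=0)

    def notify_pool_reload(database):
        # called by the worker that installed or updated a module
        client.publish(CHANNEL, database)

    def listen_for_reloads(reload_pool):
        # reload_pool is whatever callable re-initialises the local Pool
        pubsub = client.pubsub()
        pubsub.subscribe(CHANNEL)
        for message in pubsub.listen():
            if message['type'] == 'message':
                reload_pool(message['data'].decode('utf-8'))

    # each worker runs the listener in a background thread
    threading.Thread(
        target=listen_for_reloads,
        args=(lambda db: print('reload pool for', db),),
        daemon=True).start()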

Sergi Almacellas Abellana

Sep 19, 2016, 7:34:06 AM
to tryto...@googlegroups.com
On 19/09/16 at 10:26, Ali Kefia wrote:
> We faced this problem last year and worked on scaling trytond this way
>
> * Shared Cache on Redis
> * Load balance with Nginx
> * Sync mechanism via redis PUB/SUB to reload Pool
>
> Description of Coog (trytond spécialized ERP)
> here: http://coopengo.com/coog-v1-6-nouvelle-architecture/
> Implementation on a trytond fork: https://github.com/coopengo/trytond
> For more details or help using it, send me an email

AFAIU you need the Redis cache and the sync mechanism because you have
the WSGI servers running on different machines. Am I right?
>
> Sorry if it is still a custom code, we plan to discuss all this stuff in
> TUB 2016 and hopefully get part of them in tryton out of the box.
It would be great if we could discuss this on the ML/BT/discuss server so
other people not attending TUB can also join the discussion ;-)

Cédric Krier

Sep 19, 2016, 8:05:02 AM
to tryto...@googlegroups.com
On 2016-09-19 13:34, Sergi Almacellas Abellana wrote:
> On 19/09/16 at 10:26, Ali Kefia wrote:
> > We faced this problem last year and worked on scaling trytond this way
> >
> > * Shared Cache on Redis
> > * Load balance with Nginx
> > * Sync mechanism via redis PUB/SUB to reload Pool
> >
> > Description of Coog (trytond spécialized ERP)
> > here: http://coopengo.com/coog-v1-6-nouvelle-architecture/
> > Implementation on a trytond fork: https://github.com/coopengo/trytond
> > For more details or help using it, send me an email
>
> AFAIU you need the Redis cache and the sync mechanism because you have the
> WSGI servers running on different machines. Am I right?

You do not need anything for the cache even if it runs on different
machines. But memory for the cache will be used per process.

The pool invalidation is not supported, but as module management is
slowly being moved to trytond-admin, I think there is no need to
implement it.
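
Concretely, the workflow implied is something like this (the database name, config path and process name are placeholders):

    # update a module with trytond-admin, then restart the WSGI workers
    trytond-admin -c /etc/tryton/trytond.conf -d mydb -u my_module
    supervisorctl restart trytond-uwsgi   # or however the workers are managed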

Ali Kefia

Sep 19, 2016, 8:05:03 AM
to tryton-dev


On Monday, September 19, 2016 at 13:34:06 UTC+2, Sergi Almacellas Abellana wrote:
On 19/09/16 at 10:26, Ali Kefia wrote:
> We faced this problem last year and worked on scaling trytond this way
>
>   * Shared Cache on Redis
>   * Load balance with Nginx
>   * Sync mechanism via redis PUB/SUB to reload Pool
>
> Description of Coog (trytond spécialized ERP)
> here: http://coopengo.com/coog-v1-6-nouvelle-architecture/
> Implementation on a trytond fork: https://github.com/coopengo/trytond
> For more details or help using it, send me an email

AFAIU you need the Redis cache and the sync mechanism because you have
the WSGI servers running on different machines. Am I right?

Different processes: this work was done before the switch to werkzeug.
We tried to investigate ir.cache and did not succeed in ensuring that invalidation propagates.
BTW, with Redis, data is not duplicated in memory (with 10 workers, that matters). The good news is that we noticed no overhead (msgpack + Redis).
 
>
> Sorry if it is still a custom code, we plan to discuss all this stuff in
> TUB 2016 and hopefully get part of them in tryton out of the box.
It would be great if we could discuss this on the ML/BT/discuss server so
other people not attending TUB can also join the discussion ;-)

Sure! I just need to work on a presentation and a small demonstration to get things ready to show (I am making the effort for TUB).
Discussions will continue later with the whole community.

Cédric Krier

Sep 19, 2016, 8:35:03 AM
to tryton-dev
On 2016-09-19 04:52, Ali Kefia wrote:
> We tried to investigate ir.cache and did not succeed to ensure propagating
> invalidation

What did you miss?

Ali Kefia

Sep 19, 2016, 10:38:08 AM
to tryton-dev


On Monday, September 19, 2016 at 14:05:02 UTC+2, Cédric Krier wrote:
On 2016-09-19 13:34, Sergi Almacellas Abellana wrote:
> On 19/09/16 at 10:26, Ali Kefia wrote:
> > We faced this problem last year and worked on scaling trytond this way
> >
> >   * Shared Cache on Redis
> >   * Load balance with Nginx
> >   * Sync mechanism via redis PUB/SUB to reload Pool
> >
> > Description of Coog (trytond spécialized ERP)
> > here: http://coopengo.com/coog-v1-6-nouvelle-architecture/
> > Implementation on a trytond fork: https://github.com/coopengo/trytond
> > For more details or help using it, send me an email
>
> AFAIU you need the Redis cache and the sync mechanism because you have the
> WSGI servers running on different machines. Am I right?

You do not need anything for the cache even if it runs on different
machine. But memory for cache will be used per process.

Maybe I missed something, but it is worth asking a question:
When we have 2 workers, W1 and W2, working on an instance (call it a product), how do we manage this situation:
  • W1 added the product to its cache (from the database)
  • W2 modified the product and cleared ITS product cache
  • W1 does a get on the product cache => it hits an invalid version
We cannot check the db on each cache.get call, can we?
 

The pool invalidation is not supported but as the module management is
slowly moved to trytond-admin. I think there are no needs to implement
it.

I agree; we added it later to avoid losing a native trytond feature (installing modules on the fly).

Cédric Krier

Sep 19, 2016, 11:00:03 AM
to tryton-dev
On 2016-09-19 05:32, Ali Kefia wrote:
> May be I missed something but it is worth asking a question:
> When we have 2 workers, W1 and W2, working on an instance (call it a
> product), how to manage this situation:
>
> - W1 added product to cache (from database)
> - W2 modified product and cleared ITS product cache
> - W1 makes get on product cache => hit an invalid version
>
> We can not check db on each cache.get call?

You cannot add Model instances to the Cache because they are linked to
the transaction.
The Cache objects only store base types, and the invalidation is managed
by the developer with a call to clear. The whole cache is cleared, so it
is quite coarse, but normally the Cache should only be used for data
that almost never changes.
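
For illustration, typical usage looks roughly like this (the model, cache name and price computation are made up; this is a sketch of the pattern, not code from any module):

    # rough sketch of the trytond.cache.Cache pattern; names are made up
    from decimal import Decimal

    from trytond.cache import Cache
    from trytond.model import ModelSQL

    class Product(ModelSQL):
        "Product"
        __name__ = 'product.product'
        _price_cache = Cache('product.product.price')

        @classmethod
        def get_price(cls, product_id):
            price = cls._price_cache.get(product_id)
            if price is None:
                price = Decimal('0')  # placeholder for the real, expensive computation
                cls._price_cache.set(product_id, price)  # only basic types are stored
            return price

        @classmethod
        def write(cls, records, values, *args):
            super(Product, cls).write(records, values, *args)
            # invalidation is coarse: clear() empties the whole named cache
            cls._price_cache.clear()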

Ali Kefia

Sep 19, 2016, 11:19:04 AM
to tryton-dev


On Monday, September 19, 2016 at 17:00:03 UTC+2, Cédric Krier wrote:
On 2016-09-19 05:32, Ali Kefia wrote:
> May be I missed something but it is worth asking a question:
> When we have 2 workers, W1 and W2, working on an instance (call it a
> product), how to manage this situation:
>
>    - W1 added product to cache (from database)
>    - W2 modified product and cleared ITS product cache
>    - W1 makes get on product cache => hit an invalid version
>
> We can not check db on each cache.get call?

You cannot add Model instances to the Cache because they are linked to
the transaction.
The Cache objects only store base types, and the invalidation is managed
by the developer with a call to clear.

We do not set instances, we set flat data (msgpack serialization).

The whole cache is cleared, so it is quite coarse, but normally the
Cache should only be used for data that almost never changes.

Cached data changes rarely, but it can change (if not, the invalidation mechanism would have no reason to exist).
The issue with the cache and multiple workers is that invalidation does not propagate. And since going through the db runs counter to the caching principle, we took Redis.
Side-effect advantages were:
  • fewer locks in Python code
  • less memory usage (shared memory)
  • faster worker startup (since the cache is already up and loaded)
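
Concretely, the kind of round trip involved is something like this; the key layout and field names are only an example, not our actual schema:

    # sketch of storing flat (already serialisable) data in Redis with msgpack
    import msgpack
    import redis

    client = redis.StrictRedis()

    def cache_set(name, key, value):
        # value must already be flat data (dicts/lists of basic types), not model instances
        client.set('cache:%s:%s' % (name, key),
                   msgpack.packb(value, use_bin_type=True))

    def cache_get(name, key):
        raw = client.get('cache:%s:%s' % (name, key))
        if raw is None:
            return None
        return msgpack.unpackb(raw, raw=False)

    cache_set('product', 42, {'code': 'P42', 'list_price': '10.00'})
    print(cache_get('product', 42))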

Cédric Krier

Sep 19, 2016, 11:40:03 AM
to tryton-dev
On 2016-09-19 08:19, Ali Kefia wrote:
> the issue with cache on multi workers is that invalidation does not
> propagate.

Could you prove your statement?

> And since using db is counter cache principle, we took Redis.

I do not understand the reasoning.

> side effect advantages were:
>
> - less locks on Python code

With a single thread it should not change anything.

> - less memory usage (shared memory)

Agreed, but at the cost of network communication.

> - faster worker startup (since cache is already up and loaded)

Except if using threaded workers.

Ali Kefia

Sep 19, 2016, 12:07:31 PM
to tryton-dev


On Monday, September 19, 2016 at 17:40:03 UTC+2, Cédric Krier wrote:
On 2016-09-19 08:19, Ali Kefia wrote:
> the issue with cache on multi workers is that invalidation does not
> propagate.

Could you prove your statement?

Context: multi-process.
I could be missing something (that is why I am asking):
  • cache.clear: empties the cache and updates ir.cache in the database
  • cache.get: reads from memory (does not check ir.cache)
=> No synchronization between workers to invalidate the cache horizontally
 

> And since using db is counter cache principle, we took Redis.

I do not understand the reasoning.

Supposed solution:
  • every time we call cache.get, it checks the db (ir.cache) for validity
  • that means a db call for every cache.get
=> Makes no sense, since we cache data precisely to avoid db calls
 

> side effect advantages were:
>
>    - less locks on Python code

With a single thread it should not change anything.

Agreed that in both cases we wait for a response (assuming the lock in Python has no cost).
I will run a test and send you the result.
 

>    - less memory usage (shared memory)

agree even but at the cost of network communication.

>    - faster worker startup (since cache is already up and loaded)

except if using threaded workers.

We have chosen the multi-process model (because of the GIL) and to be more scalable overall.
We ran a benchmark on 3.8 and it was much more comfortable / stable with many workers.
=> We will give werkzeug a try (if you have a document that helps with the configuration, please share).

Cédric Krier

Sep 19, 2016, 12:40:03 PM
to tryton-dev
On 2016-09-19 09:07, Ali Kefia wrote:
>
>
> On Monday, September 19, 2016 at 17:40:03 UTC+2, Cédric Krier wrote:
> >
> > On 2016-09-19 08:19, Ali Kefia wrote:
> > > the issue with cache on multi workers is that invalidation does not
> > > propagate.
> >
> > Could you proof your statement?
> >
>
> Context: multi-process.
> I could miss something (that is why I am asking)
>
> - cache.clear: empty cache and sets ir.cache on database
> - cache.get: reads from memory (does not check ir.cache)
>
> => No synchronization between workers to invalidate cache horizontally

It is done on each request when starting the transaction:
http://hg.tryton.org/trytond/file/tip/trytond/protocols/dispatcher.py#l167
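
Roughly, the flow is the following; this is a paraphrase of the idea with the method names from the 4.x trytond.cache.Cache, not a copy of the linked code:

    # paraphrase of the per-request cache synchronisation, not the real dispatcher code
    from trytond.cache import Cache
    from trytond.transaction import Transaction

    def dispatch(database_name, user, rpc_method, *args):
        with Transaction().start(database_name, user):
            # drop local caches that other processes invalidated (timestamps in ir.cache)
            Cache.clean(database_name)
            result = rpc_method(*args)
            # publish this process's own clear() calls through ir.cache
            Cache.resets(database_name)
        return result

So a worker can serve slightly stale data only within a single request; the next request re-synchronises.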

Sergi Almacellas Abellana

Sep 20, 2016, 3:19:12 AM
to tryto...@googlegroups.com
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4

It's a start; for the full reference see the uwsgi documentation.
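
The same thing as an ini file, if you prefer (the master option is an extra that is commonly enabled, not part of the command above):

    ; uwsgi.ini equivalent of the command above
    [uwsgi]
    http = :9090
    module = trytond.application:app
    master = true
    processes = 4

    ; run it with: uwsgi --ini uwsgi.ini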

Hope it helps.

Ali Kefia

Sep 20, 2016, 3:30:30 AM
to tryton-dev


On Monday, September 19, 2016 at 18:40:03 UTC+2, Cédric Krier wrote:
On 2016-09-19 09:07, Ali Kefia wrote:
>
>
> On Monday, September 19, 2016 at 17:40:03 UTC+2, Cédric Krier wrote:
> >
> > On 2016-09-19 08:19, Ali Kefia wrote:
> > > the issue with cache on multi workers is that invalidation does not
> > > propagate.
> >
> > Could you proof your statement?
> >
>
> Context: multi-process.
> I could miss something (that is why I am asking)
>
>    - cache.clear: empty cache and sets ir.cache on database
>    - cache.get: reads from memory (does not check ir.cache)
>
> => No synchronization between workers to invalidate cache horizontally

It is done on each request when starting the transaction:
http://hg.tryton.org/trytond/file/tip/trytond/protocols/dispatcher.py#l167

OK, got it! So we keep the state as of transaction start.
Thanks for the information.

Ali Kefia

Sep 20, 2016, 3:32:32 AM
to tryton-dev


On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana wrote:
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4

I want to run some tests.
Does uwsgi do the load balancing (the same job as nginx)? Is it meant to be used this way?

Korbinian Preisler

Sep 20, 2016, 3:49:22 AM
to tryto...@googlegroups.com
Hi Ali,


On 20.09.2016 09:32, Ali Kefia wrote:


On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana wrote:
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4

I wanna make some tests
uwsgi is making load balancing (same job as nginx)? is it made this way?

You should use uwsgi together with nginx as a frontend [1]. We have had very good experiences with this combination.

[1] http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html

But I must say that I only managed to get Tryton 4.0 to run on uwsgi with some custom code. I will test Sergi's command. Maybe I missed something.
 

It's a start, for full reference see the uwsgi documentation.

Hope it helps.

--
Sergi Almacellas Abellana
www.koolpi.com
Twitter: @pokoli_srk


Ali Kefia

Sep 20, 2016, 3:54:43 AM
to tryton-dev


On Tuesday, September 20, 2016 at 09:49:22 UTC+2, Timitos wrote:
Hi Ali,

On 20.09.2016 09:32, Ali Kefia wrote:


On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana wrote:
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4

I wanna make some tests
uwsgi is making load balancing (same job as nginx)? is it made this way?

You should use uwsgi together with nginx as frontend [1]. We have made very good experiences with this combination.

[1] http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html

Thanks Timitos, I want to give it a try!

Sergi Almacellas Abellana

Sep 20, 2016, 4:03:10 AM
to tryto...@googlegroups.com
On 20/09/16 at 09:48, 'Korbinian Preisler' via tryton-dev wrote:
> Hi Ali,
>
> On 20.09.2016 09:32, Ali Kefia wrote:
>>
>>
>> On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana
>> wrote:
>>
>> On 19/09/16 at 18:07, Ali Kefia wrote:
>> > => We will give werkzeug a try (if you have a document that
>> helps on
>> > configuration, please share)
>> Using uwsgi you can run trytond with the following command:
>>
>> uwsgi --http :9090 --module trytond.application:app --processes 4
>>
>>
>> I wanna make some tests
>> uwsgi is making load balancing (same job as nginx)? is it made this way?
>
> You should use uwsgi together with nginx as frontend [1]. We have made
> very good experiences with this combination.
>
We also use nginx as a frontend. Basically it is used for serving static
files (sao) and we have it configured to pass all the POST requests to the
WSGI server (uwsgi in this case, but you can use another one if you want).
Here you can find the nginx config:

http://pastebin.com/2GaCyuPZ
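
Schematically the config does something like this; the server name, sao path and port are examples, the real thing is in the paste:

    # schematic version only; see the paste for the real configuration
    server {
        listen 80;
        server_name tryton.example.com;            # placeholder

        location / {
            # static files: sao, the web client
            root /usr/lib/node_modules/tryton-sao; # example path
            try_files $uri $uri/ /index.html;

            # the JSON-RPC calls are POSTs; hand them to the WSGI server
            if ($request_method = POST) {
                proxy_pass http://127.0.0.1:9090;
            }
        }
    }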

> [1] http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html
>
> But I must say that I only managed to get Tryton 4.0 to run on uwsgi with
> some custom code. I will test Sergi's command. Maybe I missed something.

I'm using it with version 4.1 without any custom code, so I imagine it
also works on version 4.0.

Ali Kefia

Sep 20, 2016, 4:58:16 AM
to tryton-dev


On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana wrote:
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4

This is working.
  • I have some db errors at startup (I mean while handling the first requests); this could be due to a burst of requests => will investigate
  • We will push this into our deployment process and keep nginx for the protocols (HTTP handling), redundancy and multiple servers
  • I guess we will run faster because, until now:
    • the workers were running the werkzeug server, which is a development/test server
    • HTTP protocol handling was done twice (in nginx and in the werkzeug layer)
Thanks for the help, Sergi.

Sergi Almacellas Abellana

Sep 20, 2016, 5:16:39 AM
to tryto...@googlegroups.com
On 20/09/16 at 10:58, Ali Kefia wrote:
>
>
> On Tuesday, September 20, 2016 at 09:19:12 UTC+2, Sergi Almacellas Abellana
> wrote:
>
> On 19/09/16 at 18:07, Ali Kefia wrote:
> > => We will give werkzeug a try (if you have a document that helps on
> > configuration, please share)
> Using uwsgi you can run trytond with the following command:
>
> uwsgi --http :9090 --module trytond.application:app --processes 4
>
>
> This is working,
>
> * I have some db errors on starting (I mean first request treatments),
> this could be due to massive requesting => will investigate
What kind of errors?
> * We will push this to our deployment processes and keep nginx for
> protocols (http processing), redundancy and multi-servers
> * I guess we will run faster because, now:
> o workers are running werkzeug server which is a test server
> o HTTP protocol management is done twice (nginx and werkzeug layer)
>

Note I'm using HTTP for uwsgi because I'm running trytond and nginx as
different users and the TCP protocol simplifies the socket permissions,
but if you are using the same user on the same machine you can use a
local Unix socket, which should be (a little bit) faster ;-)
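
For example, something like this (the socket path is just an example):

    # uwsgi side: bind to a local Unix socket instead of HTTP
    uwsgi --socket /run/tryton/uwsgi.sock --module trytond.application:app --processes 4

    # nginx side: speak the uwsgi protocol over that socket
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/tryton/uwsgi.sock;
    }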

Ali Kefia

Sep 20, 2016, 5:22:33 AM
to tryton-dev
For me they are in different Docker containers => I will do it the same way :)

Korbinian Preisler

Sep 20, 2016, 6:03:17 AM
to tryto...@googlegroups.com
I had the problem that my config files are in a custom location, so I had
to create this wsgi.py:

https://gist.github.com/timitos/f74e5c6b5c75064f9c9d2417f23e6cad
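
Roughly the shape of such a wrapper (not necessarily exactly what my gist does; the config path is a placeholder):

    # wsgi.py -- load the configuration from a custom location
    # before the trytond application is imported
    from trytond.config import config
    config.update_etc('/path/to/custom/trytond.conf')  # placeholder path

    from trytond.application import app as application  # what the WSGI server serves

Then point uwsgi at it with --module wsgi:application (or --wsgi-file wsgi.py --callable application) instead of trytond.application:app.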


Sergi Almacellas Abellana

Sep 20, 2016, 7:42:53 AM
to tryto...@googlegroups.com
On 20/09/16 at 12:02, 'Korbinian Preisler' via tryton-dev wrote:
> I had the problem that my config files had a custom location. So i had
> to create this wsgi.py:
>
> https://gist.github.com/timitos/f74e5c6b5c75064f9c9d2417f23e6cad

For me this could be improved by allowing the trytond command-line
options to be set via environment variables, so that what your script
does could be managed directly by trytond.

I would also find it interesting to be able to set the database name via
environment variables, so the pool can be initialized before any request
is served.

Has anyone missed any other extra argument?

Fabyc

Dec 6, 2016, 6:37:22 PM
to tryton-dev
Hi


On Tuesday, September 20, 2016 at 2:19:12 AM UTC-5, Sergi Almacellas Abellana wrote:
On 19/09/16 at 18:07, Ali Kefia wrote:
> => We will give werkzeug a try (if you have a document that helps on
> configuration, please share)
Using uwsgi you can run trytond with the following command:

uwsgi --http :9090 --module trytond.application:app --processes 4 

Is it right that uwsgi is slow compared to other WSGI servers, as explained in this article [1]? Can anyone share their experience with the other WSGI servers mentioned in the article?

[1]

Thanks