Production performance lessons: uwsgi + nginx

Bruce Wade

Jul 7, 2012, 10:36:57 AM
to web...@googlegroups.com
Thanks to help from the uwsgi group (specifically Ryan Showalter and Lukasz Mierzwa), I learned about two important settings that help speed up web2py sites.

I hope this advice helps anyone else who is using uwsgi and has a very slow site when traffic starts pouring in.

1) Set --cpu-affinity when you are starting uwsgi. I used 3, so 3 processes are handled by each processor (I set 12 workers in the nginx config: 3 workers * 4 CPUs).
http://lists.unbit.it/pipermail/uwsgi/2011-March/001594.html
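
For a concrete starting point, here is a minimal sketch of the relevant uwsgi options (the numbers are just the 4-CPU / 12-worker example above, adjust them to your machine):

in uwsgi config use:
    processes = 12
    cpu-affinity = 3

or equivalently on the command line: --processes 12 --cpu-affinity 3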

2) Use a socket file instead of the TCP stack, both in the nginx uwsgi_pass directive and when starting uwsgi:
in uwsgi config use: "socket = /var/run/uwsgi.socket"
in nginx config use: "uwsgi_pass  unix:///var/run/uwsgi.socket;"
http://lists.unbit.it/pipermail/uwsgi/2011-September/002625.html
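
A minimal sketch of how the two ends fit together (socket path as above; the chmod-socket line is an assumption, only needed if nginx and uwsgi run as different users):

in uwsgi config use:
    socket = /var/run/uwsgi.socket
    chmod-socket = 666

in nginx config use:
    location / {
        include uwsgi_params;
        uwsgi_pass unix:///var/run/uwsgi.socket;
    }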

Comments directly from Lukasz:
It's hard to saturate a single core with just one worker, since it might wait for
external resources like memcached, the db, or the client. 2-4 workers per core seems
like a good starting point, but check how many workers can fit in RAM before
you set a very high max number of workers. Once you're out of memory
and you hit swap, all the performance of a fast server is gone. If you use cgroup
memory limits in uWSGI, then it will start swapping workers' memory to disk once
they eat all the memory they can.
If you use max-requests or memory limits in the uWSGI config, then check how long
it takes in peak hours to hit that limit and reload a worker; it might
happen too frequently, and if your app takes too long to start it might slow
everything down.
Keep in mind that a high number of workers may hit the max database connection limit
(it depends on the db you use).
As always - if you can, benchmark your workers; apache benchmark or siege are
good enough to get some idea of how many requests per second you can get.

Also - if you can, change the way nginx talks to uwsgi - instead of a local tcp
connection use a file socket - you don't want to hammer the tcp stack with a lot of
connections.

in uwsgi config use: "socket = /var/run/uwsgi.socket"
in nginx config use: "uwsgi_pass  unix:///var/run/uwsgi.socket;"
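
To illustrate the benchmarking Lukasz mentions, a typical run (the URL, request count, and concurrency here are hypothetical) looks like:

    ab -n 1000 -c 50 http://yoursite.example.com/welcome/default/index
or
    siege -c 50 -t 1M http://yoursite.example.com/welcome/default/index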

--
Regards,
Bruce Wade
http://ca.linkedin.com/in/brucelwade
http://www.wadecybertech.com

Cliff Kachinske

Jul 8, 2012, 4:53:38 PM
to web...@googlegroups.com
Bruce,

I have learned a lot from reading the threads you start.

Thanks for taking the time.

Cliff Kachinske

Bruno Rocha

Sep 22, 2012, 10:22:35 AM
to web...@googlegroups.com
I did a lot of tests on a Linode 1024 and, based on Bruce's tips, I figured out a performant setup.



I am sure it can be improved in terms of the number of processes / cpu-affinity, but this needs to be tuned for each specific machine.